Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Coverage of artificial intelligence infrastructure, from chips and inference hardware to the policies shaping the industry.

Bitcoin

News and analysis on Bitcoin mining operations, hardware, and market economics.

Datacenter

Updates on datacenter buildouts, power, cooling, and connectivity.

Energy

Developments in power generation, critical materials, and energy security.


Featured Articles

Nvidia targets inference as AI’s next battleground with Groq 3 LPX

It’s a big cost play, he pointed out, and it “has to happen everywhere, all the time, for all users.”

The next phase of inferencing

The new Groq 3 language processing units (LPUs) are based on intellectual property (IP) from Groq, which signed a $20 billion licensing agreement with Nvidia late last year. According to the chip company, a fleet of LPUs can function as a “giant single processor.” While Rubin GPUs will continue to handle prefill (prompt processing), Groq’s LPX will now handle latency-sensitive portions of decode (response). Together, they can deliver a “new class of inference performance,” Nvidia says.

Each LPX rack features 256 LPUs with 128 GB of on-chip static random-access memory (SRAM), 150 terabytes per second (TB/s) of bandwidth, chip-to-chip links, and high-speed connections to NVL72, Nvidia’s liquid-cooled AI supercomputer. Combined, these can reduce latency to “near zero,” Nvidia claims. The LPX integration with Vera Rubin AI factories will be available in the second half of this year.

Training versus inferencing

Training and inference stress infrastructure in very different ways, noted Sanchit Vir Gogia, chief analyst at Greyhound Research. While training rewards “massive parallelism and brute-force scale,” inferencing (especially for long context and interactive reasoning) is far more sensitive to latency, memory movement, cache behavior, concurrency, and cost per delivered token.

Read More »

The Pentagon is planning for AI companies to train on classified data, defense official says

The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned.

AI models like Anthropic’s Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean that sensitive intelligence like surveillance reports or battlefield assessments becomes embedded in the models themselves, and it would bring AI firms into closer contact with classified data than before.

Training versions of AI models on classified data is expected to make them more accurate and effective in certain tasks, according to a US defense official who spoke on background with MIT Technology Review. The news comes as demand for more powerful models is high: the Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become “an ‘AI-first’ warfighting force” as the conflict with Iran escalates. (The Pentagon did not comment on its AI training plans as of publication time.) Training would be done in a secure data center that’s accredited to host classified government projects, where a copy of an AI model is paired with classified data, according to two people familiar with how such operations work. Though the Department of Defense would remain the owner of the data, personnel from AI companies might in rare cases access the data if they have appropriate security clearance, the official said.
Before allowing this new training, though, the official said, the Pentagon intends to evaluate how accurate and effective models are when trained on nonclassified data, like commercially available satellite imagery.  The military has long used computer vision models, an older form of AI, to identify objects in images and footage it collects from drones and airplanes, and federal agencies have awarded contracts to companies to train AI models on such content. And AI companies building large language models (LLMs) and chatbots have created versions of their models fine-tuned for government work, like Anthropic’s Claude Gov, which are designed to operate across more languages and in secure environments. But the official’s comments are the first indication that AI companies building LLMs, like OpenAI and xAI, could train government-specific versions of their models directly on classified data.
Aalok Mehta, who directs the Wadhwani AI Center at the Center for Strategic and International Studies and previously led AI policy efforts at Google and OpenAI, says training on classified data, as opposed to just answering questions about it, would present new risks.  The biggest of these, he says, is that classified information these models train on could be resurfaced to anyone using the model. That would be a problem if lots of different military departments, all with different classification levels and needs for information, were to share the same AI.  “You can imagine, for example, a model that has access to some sort of sensitive human intelligence—like the name of an operative—leaking that information to a part of the Defense Department that isn’t supposed to have access to that information,” Mehta says. That could create a security risk for the operative, one that’s difficult to perfectly mitigate if a particular model is used by more than one group within the military. However, Mehta says, it’s not as hard to keep information contained from the broader world: “If you set this up right, you will have very little risk of that data being surfaced on the general internet or back to OpenAI.” The government has some of the infrastructure for this already; the security giant Palantir has won sizable contracts for building a secure environment through which officials can ask AI models about classified topics without sending the information back to AI companies. But using these systems for training is still a new challenge.  The Pentagon, spurred by a memo from Defense Secretary Pete Hegseth in January, has been racing to incorporate more AI. It has been used in combat, where generative AI has ranked lists of targets and recommended which to strike first, and in more administrative roles, like drafting contracts and reports. 
There are lots of tasks currently handled by human analysts that the military might want to train leading AI models to perform and would require access to classified data, Mehta says. That could include learning to identify subtle clues in an image the way an analyst does, or connecting new information with historical context. The classified data could be pulled from the unfathomable amounts of text, audio, images, and video, in many languages, that intelligence services collect.  It’s really hard to say which specific military tasks would require AI models to train on such data, Mehta cautions, “because obviously the Defense Department has lots of incentives to keep that information confidential, and they don’t want other countries to know what kind of capabilities we have exactly in that space.”

Read More »

Energy Department Announces $500 Million to Strengthen Domestic Critical Materials Processing and Manufacturing

Funding will expand domestic manufacturing of battery supply chains for defense, grid resilience, transportation, manufacturing, and other industries

WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Critical Minerals and Energy Innovation (CMEI) today announced a Notice of Funding Opportunity (NOFO) for up to $500 million to expand U.S. critical mineral and materials processing and derivative battery manufacturing and recycling.

Assistant Secretary of Energy (EERE) Audrey Robertson is currently in Japan meeting with regional allies at the Indo-Pacific Energy Security Ministerial and Business Forum (IPEM) to advance shared efforts on supply chain resilience and energy security. Her engagements at IPEM underscore the importance of close cooperation with partners as the United States strengthens its supply chain through this NOFO.

“For too long, the United States has relied on hostile foreign actors to supply and process the critical materials that are essential in battery manufacturing and materials processing,” said U.S. Energy Secretary Chris Wright. “Thanks to President Trump’s leadership, the Department of Energy is playing a leading role in strengthening these domestic industries that will position the U.S. to win the AI race, meet rising energy demand, and achieve energy dominance.”

“I am delighted to be in Japan meeting with our allies, underscoring the important connection between critical materials and energy security,” said Assistant Secretary of Energy (EERE) Audrey Robertson. “Critical minerals processing is a vital component of our nation’s critical minerals supply base.
Boosting domestic production, including through recycling, will bolster national security and ensure the United States and our partners are prepared to meet the energy challenges of the 21st century.” Funding awarded through this NOFO will support demonstration and/or commercial facilities for processing, recycling, or use in manufacturing of critical materials, which may include traditional battery minerals such as lithium, graphite, nickel, copper, and aluminum, as well as other

Read More »

HPE, Nvidia expand AI partnership

In addition, the company announced the HPE Cray Supercomputing GX240 liquid-cooled compute blade for its GX5000 platform. The GX240 starts with 16 Nvidia Vera CPUs per blade and scales to 40 blades per rack, supporting up to 640 Nvidia Vera CPUs and 56,320 ARM cores per rack.

HPE also said new network connectivity—Nvidia Quantum-X800 InfiniBand—optimized for large-scale system connectivity is now available with the HPE Cray Supercomputing GX5000. The Quantum-X800 InfiniBand switches provide 144 ports of 800 Gb/s connectivity with power-efficiency features, the vendor stated.

The vendor also rolled out the HPE Compute XD700, an AI server built on Nvidia HGX Rubin NVL8. The system is designed to deliver higher GPU density per rack and reduce space, power, and cooling costs while increasing AI training and inference throughput. Each rack of XD700 servers supports up to 128 Rubin GPUs, double the GPU density of the previous generation, according to HPE.

During his GTC opening keynote, Nvidia CEO Jensen Huang said: “Vera is arriving at a turning point for AI. As intelligence becomes agentic—capable of reasoning and acting—the importance of the systems orchestrating that work is elevated. The CPU is no longer simply supporting the model; it’s driving it. With breakthrough performance and energy efficiency, Vera unlocks AI systems that think faster and scale further.”

Read More »

Energy Department Announces $293 Million in Funding to Support Genesis Mission National Science and Technology Challenges

WASHINGTON—The U.S. Department of Energy (DOE) today announced funding to advance the Genesis Mission’s efforts to tackle the nation’s most complex science and technology challenges. This includes a $293 million Request for Application (RFA), “The Genesis Mission: Transforming Science and Energy with AI.” Through this RFA, DOE invites interdisciplinary teams to leverage novel AI models and frameworks to address over 20 national challenges spanning advanced manufacturing, biotechnology, critical materials, nuclear energy, and quantum information science.

“The Genesis Mission has caught the imagination of our scientific and engineering communities to tackle national challenges in the age of AI,” said Darío Gil, Under Secretary for Science and Genesis Mission Director. “With these investments we seek breakthrough ideas and novel collaborations leveraging the scientific prowess of our National Laboratories, the private sector, universities, and science philanthropies.”

The RFA is open to interdisciplinary teams from DOE National Laboratories, U.S. industry, and academia. Phase I awards will range from $500,000 to $750,000 and will support a nine-month project period. Phase II awards will range from $6 million to $15 million over a three-year project period. Teams may apply directly to either phase in FY 2026, and successful Phase I teams will be eligible to compete for larger Phase II awards in future cycles. Phase I applications and Phase II letters of intent are due April 28, 2026. Phase II applications are due May 19, 2026. DOE plans to hold an informational webinar about this RFA on March 26, 2026.

For full eligibility, application instructions, and challenge details, see the official NOFO: DE-FOA-0003612. Registration instructions and other details will be posted here.

Read More »

Chip wafer shortage will run through 2030 as AI demand overwhelms supply: SK Hynix chief

“This is no longer a cyclical imbalance. It is a structural reallocation of the memory market driven by AI infrastructure economics,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “The biggest mistake right now is to view this as a wafer or DRAM shortage. The constraint is systemic.”

Shrish Pant, director analyst at Gartner, offered a more nuanced read. A 2030 horizon, he said, assumes AI demand grows without interruption — a scenario that is not guaranteed. “HBM wafer reallocation is very real and is definitely impacting the market till the end of 2027,” Pant said. “I see a sustained demand for HBM to continue to grow, with more complex, high-performance HBM keeping prices higher.” He added that some rationalisation in AI infrastructure spending cannot be ruled out, and that traditional DRAM prices could improve by 2028 as new fabs — including Samsung’s P5, SK Hynix’s Yongin facility, and Micron’s Boise expansion — come online, though prices would remain above 2025 levels.

What makes this shortage different from previous memory cycles is supplier behaviour. Gogia pointed out that memory vendors are locking in multi-year agreements, committing future HBM output well in advance — a pattern inconsistent with cyclical markets. “This is how a strategic resource market behaves when demand visibility is high, and margins are concentrated in a specific segment,” he said. IDC, in a February analysis, projected that 2026 DRAM and NAND supply growth would come in at 16% and 17% year-on-year, respectively, well below historical norms, a consequence of Samsung, SK Hynix, and Micron reallocating cleanroom capacity toward higher-margin AI products.

Enterprise buyers caught in the crossfire

That capacity reallocation is now working its way through enterprise procurement, creating what Gogia described as a two-tier market: hyperscalers and sovereign-scale buyers who secure capacity early, and

Read More »

IEA launches record strategic oil release as Middle East war disrupts supply

The International Energy Agency (IEA) on Mar. 11 approved the largest emergency oil stock release in its history, making 400 million bbl available from member-country reserves in response to market disruptions tied to the war in the Middle East. The coordinated action, agreed unanimously by the IEA’s 32 member countries, is intended to ease supply pressure and temper price volatility as crude markets react to disrupted flows through the Strait of Hormuz.

“The conflict in the Middle East is having significant impacts on global oil and gas markets, with major implications for energy security, energy affordability and the global economy,” IEA executive director Fatih Birol said.

The release more than doubles the previous IEA record set in 2022, when member countries collectively made 182.7 million bbl available following Russia’s invasion of Ukraine. Under the IEA system, member countries are required to maintain emergency oil stocks equal to at least 90 days of net imports, giving the agency a mechanism to respond when severe disruptions threaten global supply.

The move comes after crude prices surged amid concerns that the US-Iran war could lead to prolonged disruption of exports from the Gulf. Despite the planned stock release, traders remain uncertain whether reserve barrels alone will be enough to offset losses if the disruption persists.

IEA said the emergency barrels will be supplied to the market from government-controlled and obligated industry stocks held across member countries. The action marks the sixth coordinated stock release in the agency’s history and underscores the seriousness of the current supply shock. Earlier in the day, Japanese Prime Minister Sanae Takaichi said that Japan might start using its strategic oil reserves as early as next week, citing Japan’s unusually high dependence on Middle Eastern crude oil.

Read More »

Infographic: Strait of Hormuz energy trade 2025

Coordinated attacks Feb. 28 by the US and Israel on Iran and the since-escalated conflict have nearly halted shipping traffic through the Strait of Hormuz, which typically carries about 20% of the world’s crude oil and natural gas. OGJ Statistics Editor Laura Bell-Hammer compiled data to showcase 2025 energy trade through the critical transit chokepoint.

Read More »

BOEM: US OCS holds 65.8 billion bbl of technically recoverable reserves

The US Outer Continental Shelf (OCS) holds mean undiscovered technically recoverable resources (UTRR) of 65.8 billion bbl of oil and 218.43 tcf of natural gas, the US Bureau of Ocean Energy Management (BOEM) said Mar. 9. Based on current production trends, these undiscovered resources represent the potential for 100 or more years of energy production from the OCS, BOEM said. A large portion of undiscovered OCS resources lies offshore in the Gulf of Mexico and Alaska, according to the report. The offshore Gulf holds 26.9 billion bbl of oil and 45.59 tcf of gas, while offshore Alaska holds an estimated mean 24.1 billion bbl of oil and 122.29 tcf of gas. The offshore Pacific holds a mean UTRR of 10.3 billion bbl of oil and 16.2 tcf of gas, the report said, and the offshore Atlantic a mean UTRR of 10.3 billion bbl of oil and 16.2 tcf of gas. The assessment also evaluates the impact of prices on hydrocarbon recovery. Alaska is particularly price-sensitive, with mean undiscovered economically recoverable resources (UERR) negligible until prices average $100/bbl and $17.79/Mcf. At those levels, the mean UERR stands at 6.25 billion bbl and 13.25 tcf. At $160/bbl and $28.47/Mcf, recoverable resources jump to 14.67 billion bbl and 58.78 tcf. In the Gulf of Mexico, the mean UERR is 17.51 billion bbl of oil and 13.71 tcf of gas at average prices of $60/bbl and $3.20/Mcf, increasing to 20.51 billion bbl and 17.49 tcf at $100/bbl and $5.34/Mcf, respectively. BOEM conducts a national resource assessment every 4 years to understand the “distribution of undiscovered oil and gas resources on the OCS” and identify opportunities for additional oil and gas exploration and development. “The Outer Continental Shelf holds tremendous resource potential,” said BOEM Acting Director Matt Giacona. “This

Read More »

Assala Energy encounters hydrocarbons onshore Gabon

Assala Energy encountered hydrocarbons in an exploration well in Gabon and will now work to interpret the well results and plan any additional appraisal activity. The company encountered hydrocarbons in the Magoga-A exploration well in the Mutamba Iroru II license and a subsequent sidetrack into the Atora license in Gabon. Both Magoga wells drilled the full reservoir interval. Preliminary evaluation of data acquired during drilling indicates the presence of 8 m of hydrocarbon within the Gamba Sandstone formation. The company will work to integrate and interpret the well results and assess reservoir properties, fluid characteristics, volumetric potential and possible next steps, including any appropriate additional appraisal activity.
A determination has not yet been made regarding commerciality, and no decision has been taken regarding development. Assala Gabon holds six onshore production licenses: Rabi Kounga II, Toucan II, Bende M’Bassou Totou II, Koula/Damier, Gamba/Ivinga, and Atora II.

Read More »

Gulf of Mexico lease sale draws just under $47 million in high bids

The BBG2 lease sale for drilling rights in the US Gulf of Mexico resulted in $46.98 million in high bids from oil and gas companies, the US Bureau of Ocean Energy Management said Mar. 11. Results of BBG2, the second of 30 US Gulf lease sales required under the One Big Beautiful Bill Act (OBBBA), stand in contrast to the most recent lease sale held in December 2025 (BBG1), which drew $279.4 million in apparent high bids. BOEM applied a 12.5% royalty rate for both shallow and deepwater leases. On offer were 15,019 unleased blocks covering about 80.4 million acres on the US Outer Continental Shelf. The blocks lie 3-231 miles offshore, in water depths from 9 ft to more than 11,100 ft. BOEM received 38 bids totaling $69.8 million from the 13 participating companies. Twenty-five blocks spanning 140,753 acres received high bids. The majority of the blocks that received bids—18 of 25—were in deep water of 800-1,600 m. Four blocks in ultradeep water over 1,600 m received bids. BP Exploration & Production Inc. submitted the lease sale’s highest bid—$21 million for Block 404 in the Green Canyon area. Chevron followed with $5.89 million for Green Canyon Block 492. The deepest block to receive a bid was Walker Ridge Block 751 in 2,660 m of water; Woodside Energy (Deepwater) Inc. bid $806,290 for it. BOEM said Anadarko US Offshore LLC submitted the most high bids, with 6 totaling $4.01 million. LLOG Exploration Offshore LLC took second place with 5 high bids totaling $2.15 million. Houston Energy LP also had 5 high bids, totaling $1.16 million. The top three companies based on the sum of high bids submitted are BP Exploration & Production
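The way a sale's headline "high bids" figure is tallied can be sketched simply: group all sealed bids by block and keep the maximum for each. A minimal illustration follows; the bid list mixes a few figures reported above with a hypothetical losing bid, and is not the actual BBG2 dataset.

```python
# Illustrative bid records: (company, block, amount in USD).
# The Chevron bid on Green Canyon 404 is a made-up losing bid for demonstration.
bids = [
    ("BP Exploration & Production", "Green Canyon 404", 21_000_000),
    ("Chevron", "Green Canyon 492", 5_890_000),
    ("Chevron", "Green Canyon 404", 3_200_000),
    ("Woodside Energy (Deepwater)", "Walker Ridge 751", 806_290),
]

def high_bids(bids):
    """Return the highest (apparent winning) bid per block."""
    best = {}
    for company, block, amount in bids:
        if block not in best or amount > best[block][1]:
            best[block] = (company, amount)
    return best

winners = high_bids(bids)
sum_of_high_bids = sum(amount for _, amount in winners.values())
```

Summing the per-block winners gives the "sum of high bids" statistic BOEM reports, while the count of distinct winning blocks gives the blocks-receiving-high-bids figure.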

Read More »

Petrobras starts gas injection at Búzios field

Petróleo Brasileiro SA (Petrobras) started gas injection from the P-78 FPSO in Búzios field, Santos basin, about 180-230 km off the coast of Rio de Janeiro, Brazil. The vessel is permanently spread-moored in a water depth of about 2,100 m. It is designed to produce up to 180,000 b/d of oil and 7.2 million cu m/d of gas, with a minimum storage capacity of 2 million bbl. Seatrium Ltd. performed topside fabrication, integration, and commissioning for the FPSO, readying critical systems for gas injection including the main process compressors, export compressors, and gas injection compressors. First gas injection occurred within 61 days of first oil, achieved Dec. 31, 2025. The next major project milestone is completion of the delivery phase and final acceptance of the vessel by Petrobras. With the P-78 online, installed capacity of the field will expand to about 1.15 million b/d. The project also enables gas exports to the mainland via connection to the Rota 3 gas pipeline, increasing Brazil’s gas supply by up to 3 million cu m/d.

Read More »

LG rolls out new AI services to help consumers with daily tasks

LG kicked off the AI bandwagon today with a new set of AI services to help consumers in their daily tasks at home, in the car and in the office. The aim of LG’s CES 2025 press event was to show how AI will work in a day of someone’s life, with the goal of redefining the concept of space, said William Joowan Cho, CEO of LG Electronics, at the event. The presentation showed LG is fully focused on bringing AI into just about all of its products and services. Cho referred to LG’s AI efforts as “affectionate intelligence,” and he said it stands out from other strategies with its human-centered focus. The strategy focuses on three things: connected devices, capable AI agents and integrated services. One of the things the company announced was a strategic partnership with Microsoft on AI innovation, where the companies pledged to join forces to shape the future of AI-powered spaces. One of the outcomes is that Microsoft’s Xbox Game Pass Ultimate will appear via Xbox Cloud on LG’s TVs, helping LG catch up with Samsung in offering cloud gaming natively on its TVs. LG Electronics will bring the Xbox App to select LG smart TVs. That means players with LG Smart TVs will be able to explore the Gaming Portal for direct access to hundreds of games in the Game Pass Ultimate catalog, including popular titles such as Call of Duty: Black Ops 6, and upcoming releases like Avowed (launching February 18, 2025). Xbox Game Pass Ultimate members will be able to play games directly from the Xbox app on select LG Smart TVs through cloud gaming. With Xbox Game Pass Ultimate and a compatible Bluetooth-enabled

Read More »

Big tech must stop passing the cost of its spiking energy needs onto the public

Julianne Malveaux is an MIT-educated economist, author, educator and political commentator who has written extensively about the critical relationship between public policy, corporate accountability and social equity.  The rapid expansion of data centers across the U.S. is not only reshaping the digital economy but also threatening to overwhelm our energy infrastructure. These data centers aren’t just heavy on processing power — they’re heavy on our shared energy infrastructure. For Americans, this could mean serious sticker shock when it comes to their energy bills. Across the country, many households are already feeling the pinch as utilities ramp up investments in costly new infrastructure to power these data centers. With costs almost certain to rise as more data centers come online, state policymakers and energy companies must act now to protect consumers. We need new policies that ensure the cost of these projects is carried by the wealthy big tech companies that profit from them, not by regular energy consumers such as family households and small businesses. According to an analysis from consulting firm Bain & Co., data centers could require more than $2 trillion in new energy resources globally, with U.S. demand alone potentially outpacing supply in the next few years. This unprecedented growth is fueled by the expansion of generative AI, cloud computing and other tech innovations that require massive computing power. Bain’s analysis warns that, to meet this energy demand, U.S. utilities may need to boost annual generation capacity by as much as 26% by 2028 — a staggering jump compared to the 5% yearly increases of the past two decades. This poses a threat to energy affordability and reliability for millions of Americans. Bain’s research estimates that capital investments required to meet data center needs could incrementally raise consumer bills by 1% each year through 2032. That increase may

Read More »

Final 45V hydrogen tax credit guidance draws mixed response

Dive Brief: The final rule for the 45V clean hydrogen production tax credit, which the U.S. Treasury Department released Friday morning, drew mixed responses from industry leaders and environmentalists. Clean hydrogen development within the U.S. ground to a halt following the release of the initial guidance in December 2023, leading industry participants to call for revisions that would enable more projects to qualify for the tax credit. While the final rule makes “significant improvements” to Treasury’s initial proposal, the guidelines remain “extremely complex,” according to the Fuel Cell and Hydrogen Energy Association. FCHEA President and CEO Frank Wolak and other industry leaders said they look forward to working with the Trump administration to refine the rule. Dive Insight: Friday’s release closed what Wolak described as a “long chapter” for the hydrogen industry. But industry reaction to the final rule was decidedly mixed, and it remains to be seen whether the rule — which could be overturned as soon as Trump assumes office — will remain unchanged. “The final 45V rule falls short,” Marty Durbin, president of the U.S. Chamber’s Global Energy Institute, said in a statement. “While the rule provides some of the additional flexibility we sought, … we believe that it still will leave billions of dollars of announced projects in limbo. The incoming Administration will have an opportunity to improve the 45V rules to ensure the industry will attract the investments necessary to scale the hydrogen economy and help the U.S. lead the world in clean manufacturing.” But others in the industry felt the rule would be sufficient for ending hydrogen’s year-long malaise. “With this added clarity, many projects that have been delayed may move forward, which can help unlock billions of dollars in investments across the country,” Kim Hedegaard, CEO of Topsoe’s Power-to-X, said in a statement. Topsoe

Read More »

Texas, Utah, Last Energy challenge NRC’s ‘overburdensome’ microreactor regulations

Dive Brief: A 69-year-old Nuclear Regulatory Commission rule underpinning U.S. nuclear reactor licensing exceeds the agency’s statutory authority and creates an unreasonable burden for microreactor developers, the states of Texas and Utah and advanced nuclear technology company Last Energy said in a lawsuit filed Dec. 30 in federal court in Texas. The plaintiffs asked the Eastern District of Texas court to exempt Last Energy’s 20-MW reactor design and research reactors located in the plaintiff states from the NRC’s definition of nuclear “utilization facilities,” which subjects all U.S. commercial and research reactors to strict regulatory scrutiny, and order the NRC to develop a more flexible definition for use in future licensing proceedings. Regardless of its merits, the lawsuit underscores the need for “continued discussion around proportional regulatory requirements … that align with the hazards of the reactor and correspond to a safety case,” said Patrick White, research director at the Nuclear Innovation Alliance. Dive Insight: Only three commercial nuclear reactors have been built in the United States in the past 28 years, and none are presently under construction, according to a World Nuclear Association tracker cited in the lawsuit. “Building a new commercial reactor of any size in the United States has become virtually impossible,” the plaintiffs said. “The root cause is not lack of demand or technology — but rather the [NRC], which, despite its name, does not really regulate new nuclear reactor construction so much as ensure that it almost never happens.” More than a dozen advanced nuclear technology developers have engaged the NRC in pre-application activities, which the agency says help standardize the content of advanced reactor applications and expedite NRC review. Last Energy is not among them.  The pre-application process can itself stretch for years and must be followed by a formal application that can take two

Read More »

Qualcomm unveils AI chips for PCs, cars, smart homes and enterprises

Qualcomm unveiled AI technologies and collaborations for PCs, cars, smart homes and enterprises at CES 2025. At the big tech trade show in Las Vegas, Qualcomm Technologies showed how it’s using AI capabilities in its chips to drive the transformation of user experiences across diverse device categories, including PCs, automobiles, smart homes and enterprises. The company unveiled the Snapdragon X platform, the fourth platform in its high-performance PC portfolio, the Snapdragon X Series, bringing industry-leading performance, multi-day battery life, and AI leadership to more of the Windows ecosystem. Qualcomm has talked about how its processors are making headway grabbing share from x86-based rivals AMD and Intel through better efficiency. Qualcomm’s neural processing unit delivers about 45 TOPS, a key benchmark for AI PCs. Additionally, Qualcomm Technologies showcased continued traction of the Snapdragon X Series, with over 60 designs in production or development and more than 100 expected by 2026. Snapdragon for vehicles Qualcomm demoed chips that are expanding its automotive collaborations. It is working with Alpine, Amazon, Leapmotor, Mobis, Royal Enfield, and Sony Honda Mobility, which look to Snapdragon Digital Chassis solutions to drive AI-powered in-cabin and advanced driver assistance systems (ADAS). Qualcomm also announced continued traction for its Snapdragon Elite-tier platforms for automotive, highlighting its work with Desay, Garmin, and Panasonic for Snapdragon Cockpit Elite. Throughout the show, Qualcomm will highlight its holistic approach to improving comfort and focusing on safety with demonstrations on the potential of the convergence of AI, multimodal contextual awareness, and cloud-based services.
Attendees will also get a first glimpse of the new Snapdragon Ride Platform with integrated automated driving software stack and system definition jointly

Read More »

Oil, Gas Execs Reveal Where They Expect WTI Oil Price to Land in the Future

Executives from oil and gas firms have revealed where they expect the West Texas Intermediate (WTI) crude oil price to be at various points in the future as part of the fourth quarter Dallas Fed Energy Survey, which was released recently. The average response executives from 131 oil and gas firms gave when asked what they expect the WTI crude oil price to be at the end of 2025 was $71.13 per barrel, the survey showed. The low forecast came in at $53 per barrel, the high forecast was $100 per barrel, and the spot price during the survey was $70.66 per barrel, the survey pointed out. This question was not asked in the previous Dallas Fed Energy Survey, which was released in the third quarter. That survey asked participants what they expect the WTI crude oil price to be at the end of 2024. Executives from 134 oil and gas firms answered this question, offering an average response of $72.66 per barrel, that survey showed. The latest Dallas Fed Energy Survey also asked participants where they expect WTI prices to be in six months, one year, two years, and five years. Executives from 124 oil and gas firms answered this question and gave a mean response of $69 per barrel for the six month mark, $71 per barrel for the year mark, $74 per barrel for the two year mark, and $80 per barrel for the five year mark, the survey showed. Executives from 119 oil and gas firms answered this question in the third quarter Dallas Fed Energy Survey and gave a mean response of $73 per barrel for the six month mark, $76 per barrel for the year mark, $81 per barrel for the two year mark, and $87 per barrel for the five year mark, that

Read More »

Nurturing agentic AI beyond the toddler stage

Provided by Intel Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or an indicator of additional tests needed to properly diagnose a potential health condition. A parent rejoices over the child’s first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, takes a completely different lens and approach. Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub. No more crawling on the carpet—the generative AI tech baby broke into a sprint, and very few governance principles were operationally prepared. The accountability challenge: It’s not them, it’s you Until now, governance has been focused on model output risks with humans in the loop before consequential decisions were made—such as with loan approvals or job applications. Model behavior, including drift, alignment, data exfiltration, and poisoning, was the focus. The pace was set by a human prompting a model in a chatbot format with plenty of back-and-forth interactions between machine and human. Today, with autonomous agents operating in complex workflows, the vision and the benefits of applied AI require significantly fewer humans in the loop. The point is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. The goal, from a liability standpoint, is no increase in enterprise or business risk between a machine operating a workflow and a human operating a workflow.
CX Today summarizes the situation succinctly: “AI does the work, humans own the risk.” California state law AB 316, which went into effect January 1, 2026, removes the “AI did it; I didn’t approve it” excuse. This is similar to parenting, where an adult is held responsible for a child’s actions that negatively impact the larger community.
The challenge is that without building in code that enforces operational governance aligned to different levels of risk and liability along the entire workflow, the benefit of autonomous AI agents is negated. In the past, governance was static, aligned to the pace of interaction typical for a chatbot. However, autonomous AI by design removes humans from many decisions, which can affect governance. Considering permissions Much like handing a three-year-old child a video game console that remotely controls an Abrams tank or an armed drone, leaving a probabilistic system that can change critical enterprise data operating without real-time guardrails carries significant risk. For instance, agents that integrate and chain actions across multiple corporate systems can drift beyond the privileges a single human user would be granted. To move forward successfully, governance must shift beyond policy set by committees to operational code built into the workflows from the start.
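What "operational code built into the workflows" can look like in its simplest form is a deny-by-default permission check in front of every agent action. The sketch below is illustrative, not tied to any specific framework; the agent names, action strings, and grant table are all hypothetical.

```python
# Explicit per-agent grants; anything not listed is denied by default.
AGENT_GRANTS = {
    "invoice-agent": {"read:invoices", "write:ledger"},
    "report-agent": {"read:invoices"},
}

def authorize(agent_id: str, action: str) -> None:
    """Raise if the requested action is outside the agent's explicit grant."""
    if action not in AGENT_GRANTS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not granted {action!r}")

def run_action(agent_id: str, action: str) -> str:
    authorize(agent_id, action)  # drift beyond granted privileges stops here
    # ... dispatch to the real system of record would happen here ...
    return f"{action} executed by {agent_id}"
```

Because the check runs at machine pace on every call, it enforces in real time the least-privilege boundary a committee policy can only describe, including for chained actions across systems.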
A humorous meme about toddlers and toys starts with all the reasons that whatever toy you have is mine, and ends with the broken toy that is definitely yours. For example, OpenClaw delivered a user experience closer to working with a human assistant, but the excitement shifted as security experts realized inexperienced users could easily be compromised using it. For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must take over and clean up assets they did not architect or install, much like the toddler handing back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. To meet this challenge, it is imperative to allocate appropriate IT budget and labor upfront to sustain central discovery, oversight, and remediation for the thousands of employee- or department-created agents. Having a retirement plan Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a “zombie project”—a neglected or failed AI pilot left running on a GPU cloud instance. There are potentially thousands of agents at risk of becoming a zombie fleet inside a business. Today, many executives encourage employees to use AI—or else—and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human employee will explode. Since an AI agent is a program that would fall under the definition of company-owned IP, as an employee changes departments or companies, those agents may be orphaned. There needs to be proactive policy and governance to decommission and retire any agents linked to a specific employee ID and permissions.
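A decommissioning policy of the kind described above can be reduced to a periodic sweep: any agent whose owning employee ID no longer appears in the HR directory gets retired. This is a minimal sketch under assumed data shapes; the employee IDs, agent records, and field names are all hypothetical.

```python
from datetime import date

# Hypothetical HR directory of currently active employee IDs.
active_employees = {"e-1001", "e-1002"}

# Hypothetical agent registry; "retired" is None while an agent is live.
agents = [
    {"agent_id": "a-1", "owner": "e-1001", "retired": None},
    {"agent_id": "a-2", "owner": "e-9999", "retired": None},  # orphaned owner
]

def retire_orphans(agents, active_employees, today=date(2026, 2, 1)):
    """Mark agents with no active owner as retired; return their IDs."""
    retired_ids = []
    for agent in agents:
        if agent["retired"] is None and agent["owner"] not in active_employees:
            agent["retired"] = today
            retired_ids.append(agent["agent_id"])
    return retired_ids
```

In practice the sweep would also revoke the orphaned agent's service credentials and API tokens, which is where the real risk reduction happens.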
Financial optimization is governance out of the gate While for some executives autonomous AI sounds like a way to improve operating margins by limiting human capital, many are finding that ROI framed as human labor replacement is the wrong angle to take. Adding AI capabilities to the enterprise does not mean purchasing a new software tool with predictable instance-per-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs were higher or much higher than expected. The survey separates the concepts of governance and ROI, but as AI systems scale across large enterprises, financial and liability governance should be architected into the workflows from the beginning. Part of enterprise-class governance stems from predicting and adhering to allocated budgets. Unlike software financial models of per-seat costs with support and maintenance fees, use of AI is consumption-based, and usage costs scale as the workflow scales across the enterprise: the more users, the more tokens or compute time, and the higher the bill. Think of it as a tab left open, or an online retailer’s digital shopping cart button unlocked on a toddler’s electronic game device. Cloud FinOps was deterministic, but generative AI and the agentic AI systems built on it are probabilistic. Some AI-first founders are realizing that a single agent’s token costs can be as high as $100,000 per session. Without guardrails built in from the start, chaining complex autonomous agents that run unsupervised for long periods can easily blow past the budget for hiring a junior developer. Keeping humans in the loop remains critical The promise of autonomous agentic AI is acceleration of business operations, product introductions, customer experience, and customer retention.
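The spend guardrail argued for above can be sketched as a session-level budget that meters token charges against a hard cap and refuses any spend past it. The rate and cap figures below are illustrative assumptions, not real pricing.

```python
class BudgetExceeded(Exception):
    """Raised when a charge would push a session past its spend cap."""

class TokenBudget:
    def __init__(self, usd_cap: float, usd_per_1k_tokens: float):
        self.usd_cap = usd_cap
        self.rate = usd_per_1k_tokens  # assumed flat per-1k-token rate
        self.spent_usd = 0.0

    def charge(self, tokens: int) -> float:
        """Record a token charge, refusing any spend past the cap."""
        cost = tokens / 1000 * self.rate
        if self.spent_usd + cost > self.usd_cap:
            raise BudgetExceeded(f"cap of ${self.usd_cap} would be exceeded")
        self.spent_usd += cost
        return self.spent_usd

# Usage: a $50 session cap at an assumed $0.01 per 1,000 tokens.
session = TokenBudget(usd_cap=50.0, usd_per_1k_tokens=0.01)
session.charge(1_000_000)  # 1M tokens consumes $10 of the $50 cap
```

An unsupervised agent chain wired through such a meter stops at the cap instead of running the tab open indefinitely, which is the probabilistic-workload analogue of a cloud FinOps budget alert.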
Shifting to machine-speed decisions without humans in or on the loop for these key functions significantly changes the governance landscape. While many of the principles around proactive permissions, discovery, audit, remediation, and financial operations and optimization are the same, how they are executed has to shift to keep pace with autonomous agentic AI. This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

Read More »

The Download: glass chips and “AI-free” logos

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Future AI chips could be built on glass

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers. This year, a South Korean company called Absolics will start producing special glass panels that make next-generation computing hardware more powerful and efficient. Other companies, including Intel, are also pushing forward in this area. If all goes well, the technology could reduce the energy demands of chips in AI data centers—and even consumer laptops and mobile devices. Read the full story.
—Jeremy Hsu

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The race is on to establish a globally recognized “AI-free” logo
Organizations are rushing to develop a universal label for human-made products. (BBC)
+ A “QuitGPT” campaign is urging people to ditch ChatGPT. (MIT Technology Review)

2 Elizabeth Warren wants answers on xAI’s access to military data
The Pentagon reportedly gave it access to classified networks. (NBC News)
+ Here’s how chatbots could be used for targeting decisions. (MIT Technology Review)
+ The DoD is struggling to upgrade software for fighter jets. (Bloomberg $)

3 Models are applying to be the faces of AI romance scams
The “AI face models” are duping victims out of their money. (Wired $)
+ Survivors have revealed how the “pig butchering” scams work. (MIT Technology Review)

4 Meta is planning layoffs that could affect over 20% of staff
The job cuts could offset its costly bet on AI. (Reuters $)
+ There’s a long history of fears about AI’s impact on jobs. (MIT Technology Review)

5 ByteDance delayed launching a video AI model after copyright disputes
It famously generated footage of Tom Cruise and Brad Pitt fighting. (The Information $)

6 Cybersecurity investigators have exposed a huge North Korean con
The scammers secured remote jobs in the US, then stole money and sensitive information. (NBC News)

7 A Chinese AI startup is set for a whopping $18 billion valuation
That’s more than quadruple its valuation just three months ago. (Bloomberg $)
+ Chinese open models are spreading fast—here’s why that matters. (MIT Technology Review)

8 Peter Thiel has started a lecture series about the antichrist in Rome
His plans have drawn attention from the Catholic Church. (Reuters $)

9 Norway is fighting back against internet enshittification
It’s joined a global campaign against the online world’s decay. (The Guardian)
+ We may need to move beyond the big platforms. (MIT Technology Review)

10 How a startup plans to resurrect the dodo
Humans wiped them out nearly 400 years ago—can gene editing bring them back now? (Guardian)

Quote of the day

“I would build fission weapons. I would build fusion weapons. Nuclear weapons have been one of the most stabilizing forces in history—ever.”

—Anduril founder Palmer Luckey shares his love of nukes with Axios.

One More Thing

We need a moonshot for computing

TIM HERMAN/INTEL

The US government is organizing itself for the next era of computing. Ultimately, it has one big choice to make: adopt a conservative strategy that aims to preserve its lead for the next five years—or orient itself toward genuine computing moonshots. There is no shortage of candidates, including quantum computing, neuromorphic computing and reversible computing. And there are plenty of novel materials and devices. These possibilities could even be combined to form hybrid computing systems.
The National Semiconductor Technology Center can drive these ideas forward. To be successful, it would do well to follow DARPA’s lead by focusing on moonshot programs. Read the full story.

—Brady Helwig & PJ Maykish

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ A UPS delivery driver heroically escaped from two murderous turkeys.
+ Art’s love affair with cats is charmingly depicted in a new book.
+ The humble pea and six other forgotten superfoods promise accessible nutritional power.
+ MF DOOM: Long Island to Leeds is the Transatlantic tale of your favorite rapper’s favorite rapper.

Read More »

Why physical AI is becoming manufacturing’s next advantage

In partnership with Microsoft and NVIDIA For decades, manufacturers have pursued automation to drive efficiency, reduce costs, and stabilize operations. That approach delivered meaningful gains, but it is no longer enough. Today’s manufacturing leaders face a different challenge: how to grow amid labor constraints, rising complexity, and increasing pressure to innovate faster without sacrificing safety, quality, or trust. The next phase of transformation will not be defined by isolated AI tools or individual robots, but by intelligence that can operate reliably in the physical world. This is where physical AI—intelligence that can sense, reason, and act in the real world—marks a decisive shift. And it is why Microsoft and NVIDIA are working together to help manufacturers move from experimentation to production at industrial scale. The industrial frontier: Intelligence and trust, not just automation Most early AI adoption focused on narrow optimization: automating tasks, improving utilization, and cutting costs. While valuable, that phase often created new friction, including skills gaps, governance concerns, and uncertainty about long‑term impact. Furthermore, the use cases were plentiful but rarely strategic.
The industrial frontier represents a different approach. Rather than asking how much work machines can replace, frontier manufacturers ask how AI can expand human capability, accelerate innovation, and unlock new forms of value while remaining trustworthy and controllable. Across industries, companies that successfully move into this frontier phase share two non‑negotiables:
Intelligence: AI systems must understand how the business actually works: its data, workflows, and institutional knowledge.
Trust: As AI begins to act in high‑stakes environments, organizations must retain security, governance, and observability at every layer.

Without intelligence, AI becomes generic. Without trust, adoption stalls.

Why manufacturing is the proving ground for physical AI

Manufacturing is uniquely positioned at the center of this shift. AI is no longer confined to planning or analytics. It is moving into physical execution: coordinating machines, adapting to real‑world variability, and working alongside people on the factory floor. Robotics, autonomous systems, and AI agents must now perceive, reason, and act in dynamic environments. This transition exposes a critical gap. Traditional automation excels at repetition but struggles with adaptability. Human workers bring judgment and context but are constrained by scale. Physical AI closes that gap by enabling human‑led, AI‑operated systems, where people set intent and intelligent systems execute, learn, and improve over time. Humans are essential for scaled success.

Microsoft and NVIDIA: Accelerating physical AI at scale

Physical AI cannot be delivered through point solutions. It requires agent-driven, enterprise-grade toolchains and workflows for development, deployment, and operations that connect simulation, data, AI models, robotics, and governance into a coherent system. NVIDIA is building the AI infrastructure that makes physical AI possible, including accelerated computing, open models, simulation libraries, and robotics frameworks and blueprints that enable the ecosystem to build autonomous robotics systems that can perceive, reason, plan, and take action in the physical world. Microsoft complements this with a cloud and data platform designed to operate physical AI securely, at scale, and across the enterprise.
Together, Microsoft and NVIDIA are enabling manufacturers to move beyond pilots toward production‑ready physical AI systems that can be developed, tested, deployed, and continuously improved across heterogeneous environments spanning the product lifecycle, factory operations, and supply chain.

From intelligence to action: Human-agent teams in the factory

At the industrial frontier, AI is not a standalone system, but a digital teammate.

When AI agents are grounded in the proper operational data, embedded in human workflows, and governed end to end, they can assist with tasks such as:

Optimizing production lines in real time
Coordinating maintenance and quality decisions
Adapting operations to supply or demand disruptions
Accelerating engineering and product lifecycle decisions

For example, manufacturers are beginning to use simulation‑grounded AI agents to evaluate production changes virtually before deploying them on the factory floor, reducing risk while accelerating decision‑making. Crucially, frontier manufacturers design these systems so humans remain in control. AI executes, monitors, and recommends, while people provide intent, oversight, and judgment. This balance allows organizations to move faster without losing confidence or control.

The role of trust in scaling physical AI

As physical AI systems scale, trust becomes the limiting factor. Manufacturers must ensure that AI systems are secure, observable, and operating within policy, especially when they influence safety‑critical or mission‑critical processes. Governance cannot be an afterthought; it must be engineered into the platform itself. This is why frontier manufacturers treat trust as a first‑class requirement, pairing innovation with visibility, compliance, and accountability. Only then can physical AI move from promising demonstrations to enterprise‑wide deployment.

Why this moment matters—and what’s next

The convergence of AI agents, robotics, simulation, and real‑time data marks an inflection point for manufacturing. What was once experimental is becoming operational. What was once siloed is becoming connected. At NVIDIA GTC 2026, Microsoft and NVIDIA will demonstrate how this collaboration supports physical AI systems that manufacturers can deploy today and scale responsibly tomorrow. From simulation‑driven development to real‑world execution, the focus is on helping manufacturers cross the industrial frontier with confidence.
For manufacturing leaders, the question is no longer whether physical AI will reshape operations, but how quickly they can adopt it responsibly, at scale, and with trust built in from the start. Discover more with Microsoft at NVIDIA GTC 2026. This content was produced by Microsoft. It was not written by MIT Technology Review’s editorial staff.

Read More »

The Download: how AI is used for military targeting, and the Pentagon’s war on Claude

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Defense official reveals how AI chatbots could be used for targeting decisions

The US military might use generative AI systems to rank targets and recommend which to strike first, according to a Defense Department official. A list of possible targets could first be fed into a generative AI system that the Pentagon is fielding for classified settings. Humans might then ask the system to analyze the information and prioritize the targets. They would then be responsible for checking and evaluating the results and recommendations. OpenAI’s ChatGPT and xAI’s Grok could soon be at the center of exactly these sorts of high-stakes military decisions. Read the full story.
—James O’Donnell  The must-reads 
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Pentagon’s CTO claims Claude would “pollute” the defense supply chain
He blamed a “policy preference” that’s baked into the model. (CNBC)
+ Anthropic is reeling from OpenAI’s “compromise” with the DoD. (MIT Technology Review)

2 An ex-DOGE staffer has been accused of stealing Social Security data
Then taking the information to his new job in the IT division of a government contractor. (Wired)
+ He allegedly used a thumb drive to steal the data. (Washington Post)

3 Ukraine is offering its battlefield data for AI training
Allies can access the data to train drones and other UAVs. (Reuters)
+ Europe has a drone-filled vision for the future of war. (MIT Technology Review)

4 Meta has postponed its latest AI launch over performance issues
It fell short of rival models from Google, OpenAI, and Anthropic. (NYT $)
+ The company’s former AI chief is betting against LLMs. (MIT Technology Review)

5 X could be breaching sanctions on Iran
An account for Iran’s new supreme leader may break US rules. (Engadget)
+ Hacker group Handala has become the face of Iranian cyberwarfare. (Wired)
+ AI is turning the conflict into theater. (MIT Technology Review)

6 A landmark social media addiction trial is wrapping up
It’ll decide whether the platforms are liable for harms caused to children. (The Guardian)
+ AI companions are the next stage of digital addiction. (MIT Technology Review)

7 Western AI models have “failed spectacularly” on agriculture in the Global South
The biggest problem? They’re not trained on local data. (Rest of World)

8 Internet outages in Moscow are driving a surge in pager sales
The disruptions have been blamed on new tests of web controls. (Bloomberg $)

9 Why is China obsessed with OpenClaw?
Lobster-mania is spreading to the general public. (SCMP)
+ Tech-savvy “tinkerers” are cashing in on the craze. (MIT Technology Review)

10 Hollywood has soured on Silicon Valley
Movies and TV shows have swapped eccentric founders for megalomaniac moguls. (NYT $)

Quote of the day

“We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.”

—OpenAI CEO Sam Altman makes a new pitch to investors at a BlackRock event, Gizmodo reports.

One More Thing

How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe

Latvia’s annual national defense exercises took place in September and October, as the Ukraine-Russia war nears its third anniversary. GATIS INDRĒVICS / LATVIAN MINISTRY OF DEFENSE

When Latvian startup Global Wolf Motors first pitched the idea of a military scooter, it was met with skepticism—and a wall of bureaucracy. Then Russia launched its full-scale invasion of Ukraine in February 2022, and everything changed. Suddenly, Ukrainian combat units wanted any equipment they could get their hands on, and they were willing to try out ideas that might not have made the cut in peacetime.
Within weeks, the scooters were on the front line—and even behind it, being used on daring reconnaissance missions. It signaled that a new product category for companies along Ukraine’s borders had opened: civilian technologies repurposed for military needs. Read the full story.  —Peter Guest 
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ A new mini magnet could slash the costs of MRIs and nuclear fusion.
+ This interactive map of Earth offers new routes to facts about our planet.
+ Escape the news cycle with this deep dive into the power of fantasy and nature. (Big thanks to reader and MIT alum Vicki for the find!)
+ Reports of reading’s death are greatly exaggerated.

Read More »

Future AI chips could be built on glass

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers. This year, a South Korean company called Absolics is planning to start commercial production of special glass panels designed to make next-generation computing hardware more powerful and energy efficient. Other companies, including Intel, are also pushing forward in this area. If all goes well, such glass technology could reduce the energy demands of the sorts of high-performance computing chips used in AI data centers—and it could eventually do the same for consumer laptops and mobile devices if production costs fall. The idea is to use glass as the substrate, or layer, on which multiple silicon chips are connected. This form of “packaging” is an increasingly popular way to build computing hardware, because it lets engineers combine specialized chips designed for specific functions into a single system. But it presents challenges, including the fact that hardworking chips can run so hot they physically warp the substrate they’re built on. This can lead to misaligned components and may reduce how efficiently the chips can be cooled, leading to damage or premature failure.  “As AI workloads surge and package sizes expand, the industry is confronting very real mechanical constraints that impact the trajectory of high-performance computing,” says Deepak Kulkarni, a senior fellow at the chip design company Advanced Micro Devices (AMD). “One of the most fundamental is warpage.” That’s where glass comes in. It can handle the added heat better than existing substrates, and it will let engineers keep shrinking chip packages—which will make them faster and more energy efficient. It “unlocks the ability to keep scaling package footprints without hitting a mechanical wall,” says Kulkarni. 
Momentum is building behind the shift. Absolics has finished building a factory in the US that is dedicated to producing glass substrates for advanced chips and expects to begin commercial manufacturing this year. The US semiconductor manufacturer Intel is working toward incorporating glass in its next-generation chip packages, and its research has spurred other companies in the chip packaging supply chain to invest in it as well. South Korean and Chinese companies are among the early adopters. “Historically, this is not the first attempt to adopt glass in semiconductor packaging,” says Bilal Hachemi, senior technology and market analyst at the market research firm Yole Group. “But this time, the ecosystem is more solid and wider; the need for glass-based [technology] is sharper.”

Fragile but mighty
Chip packaging has relied on organic substrates such as fiberglass-reinforced epoxy since the 1990s, says Rahul Manepalli, vice president of advanced packaging at Intel. But electrochemical complications limit how closely designers can place drilled holes to create copper-coated signal and power connections between the chips and the rest of the system. Chip designers must also account for the unpredictable shrinkage and distortion that organic substrates undergo as chips heat up and cool down. “We realized about a decade ago that we are going to have some limitations with organic substrates,” says Manepalli.

These glass substrate test units were photographed at an Intel facility in Chandler, Arizona, in 2023. INTEL CORPORATION

Glass may help overcome a lot of these limitations. Its thermal stability could allow engineers to create 10 times more connections per millimeter than organic substrates, says Manepalli. With denser connections, Intel’s designers can then stuff 50% more silicon chips into the same package area, improving computational capability. The denser connections also enable more efficient routing for the copper wires that deliver power to the chip. And the fact that glass dissipates heat more efficiently allows for chip designs that reduce overall power consumption. “The benefits of glass core substrates are undeniable,” says Manepalli. “It’s clear that the benefits will drive the industry to make this happen sooner rather than later, and we want to be one of the first ones who do it.” However, working with glass creates its own challenges. For one thing, it’s fragile. Glass substrates for data center chip packages are made from panels that are only about 700 micrometers to 1.4 millimeters thick, which leaves them susceptible to cracking or even shattering, says Manepalli.
Researchers at Intel and other organizations have spent years figuring out how to use other materials and special tools to integrate the glass panels safely into semiconductor manufacturing processes.  Now, Manepalli says, Intel’s research and development teams are reliably fabricating glass panels and churning out test chip packages that incorporate glass—and in early 2025 they demonstrated that a functional device with a glass core substrate could boot up the Windows operating system. It’s a significant improvement from the early testing days, when hundreds of glass panels got cracked every couple of days, he says. Semiconductor manufacturers already use glass for more limited purposes, such as temporary support structures for silicon wafers. But the independent market research firm IDTechEx estimates there’s a big market for glass substrates, one that could boost the semiconductor market for glass from $1 billion in 2025 to as much as $4.4 billion by 2036.  The material could have additional benefits if it takes off. Glass can be made astoundingly smooth—5,000 times smoother than organic substrates. This would eliminate defects that can arise as metal gets layered onto semiconductors, says Xiaoxi He, a research analyst at IDTechEx. Defects in these layers can worsen chips’ performance or even render them unusable.   Glass could also help speed the movement of data. The material can guide light, which means chip designers could use it to build high-speed signal pathways directly into the substrate. Glass “holds enormous potential for the future of energy-efficient AI compute,” says Kulkarni at AMD, because a light-based system could move signals around with far less energy than the “power-hungry” copper pathways that are currently used to carry signals between chips in a package.
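To put the IDTechEx projection above in perspective, the implied compound annual growth rate can be worked out directly from the two endpoints. This is a back-of-envelope sketch that assumes simple compounding between 2025 and 2036, not an IDTechEx figure:

```python
# Implied compound annual growth rate (CAGR) for the glass-substrate market,
# using the endpoints cited above: $1 billion in 2025 to $4.4 billion in 2036.
start_value = 1.0    # $ billions, 2025
end_value = 4.4      # $ billions, 2036
years = 2036 - 2025  # 11-year span
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied growth: ~{cagr:.1%} per year")
```

In other words, the forecast amounts to the market sustaining roughly 14% annual growth for over a decade.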

A panel pivot

Early research on glass packaging started at the 3D Systems Packaging Research Center at the Georgia Institute of Technology in 2009. The university eventually partnered with Absolics, a subsidiary of SKC, a South Korean company that produces chemicals and advanced materials. SKC constructed a semiconductor facility for manufacturing glass substrates in Covington, Georgia, in 2024, and the glass substrate partnership between Absolics and Georgia Tech was eventually awarded two grants in the same year—worth a combined $175 million—through the US government’s CHIPS for America program, established under the administration of President Joe Biden.

An Absolics employee monitors production of an early version of the company’s glass substrate. COURTESY OF ABSOLICS INC

Now Absolics is moving toward commercialization; it plans to start manufacturing small quantities of glass substrates for customers this year. The company has led the way in commercializing glass substrates, says Yongwon Lee, a research engineer at Georgia Tech who is not directly involved in the commercial partnership with Absolics. Absolics says its facility can currently produce a maximum of 12,000 square meters of glass panels a year. That’s enough, Lee estimates, to provide glass substrates for between 2 million and 3 million chip packages the size of Nvidia’s H100 GPU. But the company isn’t alone. Lee says that multiple large manufacturers, including Samsung Electronics, Samsung Electro-Mechanics, and LG Innotek, have “significantly accelerated” their research and pilot production efforts in glass packaging over the past year. “This trend suggests that the glass substrate ecosystem is evolving from a single early mover to a broader industrial race,” he says. Other companies are pivoting to play more specialized roles in the glass substrate supply chain.
In 2025, JNTC, a company that makes electrical connectors and tempered glass for electronics, established a facility in South Korea that’s capable of producing 10,000 semi-finished glass panels per month. Such panels include drilled holes for vertical electrical connections and thin metal layers coating the glass, but they require additional manufacturing work for installation in chip packages.  Last year, that South Korean facility began taking orders to supply semi-finished glass to both specialized substrate companies and semiconductor manufacturers. The company plans to expand the facility’s production in 2026 and open an additional manufacturing line in Vietnam in 2027.  Such industry actions show how quickly glass substrate technology is moving from prototype to commercialization—and how many tech players are betting that glass could be a surprisingly strong foundation for the future of computing and AI.

Read More »

Defense official reveals how AI chatbots could be used for targeting decisions

The US military might use generative AI systems to rank lists of targets and make recommendations—which would be vetted by humans—about which to strike first, according to a Defense Department official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating.   A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background with MIT Technology Review to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations. OpenAI’s ChatGPT and xAI’s Grok could, in theory, be the models used for this type of scenario in the future, as both companies recently reached agreements for their models to be used by the Pentagon in classified settings. The official described this as an example of how things might work but would not confirm or deny whether it represents how AI systems are currently being used. Other outlets have reported that Anthropic’s Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela, but the official’s comments add insight into the specific role chatbots may play, particularly in accelerating the search for targets. They also shed light on the way the military is deploying two different AI technologies, each with distinct limitations.
Since at least 2017, the US military has been working on a “big data” initiative called Maven. It uses older types of AI, particularly computer vision, to analyze the oceans of data and imagery collected by the Pentagon. Maven might take thousands of hours of aerial drone footage, for example, and algorithmically identify targets. A 2024 report from Georgetown University showed soldiers using the system to select and vet targets, which sped up the process of getting those targets approved. Soldiers interacted with Maven through an interface with a battlefield map and dashboard, which might highlight potential targets in one color and friendly forces in another. The official’s comments suggest that generative AI is now being added as a conversational chatbot layer—one the military may use to find and analyze data more quickly as it makes decisions like which targets to prioritize.
Generative AI systems, like those that underpin ChatGPT, Claude, and Grok, are a fundamentally different technology from the AI that has primarily powered Maven. Built on large language models, they are much less battle-tested. And while Maven’s interface forced users to directly inspect and interpret data on the map, the outputs produced by generative AI models are easier to access but harder to verify.  The use of generative AI for such decisions is reducing the time required in the targeting process, added the official, who did not provide details when asked how much additional speed is possible if humans are required to spend time double-checking a model’s outputs. The use of military AI systems is under increased public scrutiny following the recent strike on a girls’ school in Iran in which more than 100 children died. Multiple news outlets have reported that the strike was from a US missile, though the Pentagon has said it is still under investigation. And while the Washington Post has reported that Claude and Maven have been involved in targeting decisions in Iran, there is no evidence yet to explain what role generative AI systems played, if any. The New York Times reported on Wednesday that a preliminary investigation found outdated targeting data to be partly responsible for the strike.  The Pentagon has been ramping up its use of AI across operations in recent months. It started offering nonclassified use of generative AI models, for tasks like analyzing contracts or writing presentations, to millions of service members back in December through an effort called GenAI.mil. But only a few generative AI models have been approved by the Pentagon for classified use.  The first was Anthropic’s Claude, which in addition to its use in Iran was reportedly used in the operations to capture Venezuelan leader Nicolas Maduro in January. 
But following recent disagreements between the Pentagon and Anthropic over whether Anthropic could restrict the military’s use of its AI, the Defense Department designated the company a supply chain risk and President Trump demanded on social media that the government stop using its AI products within six months. Anthropic is fighting the designation in court.  OpenAI announced an agreement on February 28 for the military to use its technologies in classified settings. Elon Musk’s company xAI has also reached a deal for the Pentagon to use its model Grok in such settings. OpenAI has said its agreement with the Pentagon came with limitations, though the practical effectiveness of those limitations is not clear.  If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).

Read More »

Nvidia targets inference as AI’s next battleground with Groq 3 LPX

It’s a big cost play, he pointed out, and it “has to happen everywhere, all the time, for all users.”

The next phase of inferencing

The new Groq 3 language processing units (LPUs) are based on intellectual property (IP) from Groq, which signed a $20 billion licensing agreement with Nvidia late last year. According to the chip company, a fleet of LPUs can function as a “giant single processor.” While Rubin GPUs will continue to handle prefill (prompt processing), Groq’s LPX will now handle latency-sensitive portions of decode (response). Together, they can deliver a “new class of inference performance,” Nvidia says. Each LPX rack features 256 LPUs with 128 GB of on-chip static random-access memory (SRAM), 150 terabytes per second (TB/s) of bandwidth, chip-to-chip links, and high-speed connections to NVL72, Nvidia’s liquid-cooled AI supercomputer. Combined, these can reduce latency to “near zero,” Nvidia claims. The LPX integration with Vera Rubin AI factories will be available in the second half of this year.

Training versus inferencing

Training and inference stress infrastructure in very different ways, noted Sanchit Vir Gogia, chief analyst at Greyhound Research. While training rewards “massive parallelism and brute-force scale,” inferencing (especially for long context and interactive reasoning) is far more sensitive to latency, memory movement, cache behavior, concurrency, and cost per delivered token.

Read More »

The Pentagon is planning for AI companies to train on classified data, defense official says

The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic’s Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean that sensitive intelligence like surveillance reports or battlefield assessments becomes embedded in the models themselves, and it would bring AI firms into closer contact with classified data than before. Training versions of AI models on classified data is expected to make them more accurate and effective in certain tasks, according to a US defense official who spoke on background with MIT Technology Review. The news comes as demand for more powerful models is high: the Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become “an ‘AI-first’ warfighting force” as the conflict with Iran escalates. (The Pentagon did not comment on its AI training plans as of publication time.) Training would be done in a secure data center that’s accredited to host classified government projects, and where a copy of an AI model is paired with classified data, according to two people familiar with how such operations work. Though the Department of Defense would remain the owner of the data, personnel from AI companies might in rare cases access the data if they have appropriate security clearance, the official said.
Before allowing this new training, though, the official said, the Pentagon intends to evaluate how accurate and effective models are when trained on nonclassified data, like commercially available satellite imagery.  The military has long used computer vision models, an older form of AI, to identify objects in images and footage it collects from drones and airplanes, and federal agencies have awarded contracts to companies to train AI models on such content. And AI companies building large language models (LLMs) and chatbots have created versions of their models fine-tuned for government work, like Anthropic’s Claude Gov, which are designed to operate across more languages and in secure environments. But the official’s comments are the first indication that AI companies building LLMs, like OpenAI and xAI, could train government-specific versions of their models directly on classified data.
Aalok Mehta, who directs the Wadhwani AI Center at the Center for Strategic and International Studies and previously led AI policy efforts at Google and OpenAI, says training on classified data, as opposed to just answering questions about it, would present new risks.  The biggest of these, he says, is that classified information these models train on could be resurfaced to anyone using the model. That would be a problem if lots of different military departments, all with different classification levels and needs for information, were to share the same AI.  “You can imagine, for example, a model that has access to some sort of sensitive human intelligence—like the name of an operative—leaking that information to a part of the Defense Department that isn’t supposed to have access to that information,” Mehta says. That could create a security risk for the operative, one that’s difficult to perfectly mitigate if a particular model is used by more than one group within the military. However, Mehta says, it’s not as hard to keep information contained from the broader world: “If you set this up right, you will have very little risk of that data being surfaced on the general internet or back to OpenAI.” The government has some of the infrastructure for this already; the security giant Palantir has won sizable contracts for building a secure environment through which officials can ask AI models about classified topics without sending the information back to AI companies. But using these systems for training is still a new challenge.  The Pentagon, spurred by a memo from Defense Secretary Pete Hegseth in January, has been racing to incorporate more AI. It has been used in combat, where generative AI has ranked lists of targets and recommended which to strike first, and in more administrative roles, like drafting contracts and reports. 
There are lots of tasks currently handled by human analysts that the military might want to train leading AI models to perform and would require access to classified data, Mehta says. That could include learning to identify subtle clues in an image the way an analyst does, or connecting new information with historical context. The classified data could be pulled from the unfathomable amounts of text, audio, images, and video, in many languages, that intelligence services collect.  It’s really hard to say which specific military tasks would require AI models to train on such data, Mehta cautions, “because obviously the Defense Department has lots of incentives to keep that information confidential, and they don’t want other countries to know what kind of capabilities we have exactly in that space.”

Read More »

Energy Department Announces $500 Million to Strengthen Domestic Critical Materials Processing and Manufacturing

Funding will expand domestic manufacturing of battery supply chains for defense, grid resilience, transportation, manufacturing, and other industries

WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Critical Minerals and Energy Innovation (CMEI) today announced a Notice of Funding Opportunity (NOFO) for up to $500 million to expand U.S. critical mineral and materials processing and derivative battery manufacturing and recycling. Assistant Secretary of Energy (EERE) Audrey Robertson is currently in Japan meeting with regional allies at the Indo-Pacific Energy Security Ministerial and Business Forum (IPEM) to advance shared efforts on supply chain resilience and energy security issues. Her engagements at IPEM underscore the importance of close cooperation with partners as the United States strengthens its supply chain through this NOFO.

“For too long, the United States has relied on hostile foreign actors to supply and process the critical materials that are essential in battery manufacturing and materials processing,” said U.S. Energy Secretary Chris Wright. “Thanks to President Trump’s leadership, the Department of Energy is playing a leading role in strengthening these domestic industries that will position the U.S. to win the AI race, meet rising energy demand, and achieve energy dominance.”

“I am delighted to be in Japan meeting with our allies, underscoring the important connection between critical materials and energy security,” said Assistant Secretary of Energy (EERE) Audrey Robertson. “Critical minerals processing is a vital component of our nation’s critical minerals supply base.
Boosting domestic production, including through recycling, will bolster national security and ensure the United States and our partners are prepared to meet the energy challenges of the 21st century.” Funding awarded through this NOFO will support demonstration and/or commercial facilities that process, recycle, or use critical materials in manufacturing, which may include traditional battery minerals such as lithium, graphite, nickel, copper, and aluminum, as well as other

Read More »

HPE, Nvidia expand AI partnership

In addition, the company announced the HPE Cray Supercomputing GX240 liquid-cooled compute blade for its GX5000 platform. The GX240 starts with 16 Nvidia Vera CPUs per blade and scales to 40 blades per rack, supporting up to 640 Nvidia Vera CPUs and 56,320 ARM cores per rack. HPE also said new network connectivity, Nvidia Quantum-X800 InfiniBand, optimized for large-scale system connectivity, is now available with HPE Cray Supercomputing GX5000. The Quantum-X800 InfiniBand switches provide 144 ports of 800 Gb/s connectivity with power-efficiency features, the vendor stated. The vendor also rolled out the HPE Compute XD700, an AI server built on Nvidia HGX Rubin NVL8. The system is designed to deliver higher GPU density per rack and reduce space, power, and cooling costs while increasing AI training and inference throughput. Each rack of XD700 servers supports up to 128 Rubin GPUs, double the GPU density of the previous generation, according to HPE. During his GTC opening keynote, Nvidia CEO Jensen Huang said: “Vera is arriving at a turning point for AI. As intelligence becomes agentic—capable of reasoning and acting—the importance of the systems orchestrating that work is elevated. The CPU is no longer simply supporting the model; it’s driving it. With breakthrough performance and energy efficiency, Vera unlocks AI systems that think faster and scale further.”
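The GX5000 rack figures are internally consistent, and a quick sketch shows how they compose. The 88-cores-per-CPU value below is not stated in the article; it is inferred by dividing the quoted 56,320 cores by the quoted 640 CPUs:

```python
# Rack-level arithmetic implied by HPE's GX5000/GX240 figures.
cpus_per_blade = 16    # quoted: 16 Nvidia Vera CPUs per GX240 blade
blades_per_rack = 40   # quoted: up to 40 blades per GX5000 rack
cores_per_cpu = 88     # inferred: 56,320 cores / 640 CPUs (not quoted directly)

cpus_per_rack = cpus_per_blade * blades_per_rack
cores_per_rack = cpus_per_rack * cores_per_cpu

print(cpus_per_rack)   # 640 Vera CPUs per rack, matching the article
print(cores_per_rack)  # 56,320 ARM cores per rack, matching the article
```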

Read More »

Energy Department Announces $293 Million in Funding to Support Genesis Mission National Science and Technology Challenges

WASHINGTON—The U.S. Department of Energy (DOE) today announced funding to advance the Genesis Mission’s efforts to tackle the nation’s most complex science and technology challenges. This includes a $293 million Request for Application (RFA), “The Genesis Mission: Transforming Science and Energy with AI.” Through this RFA, DOE invites interdisciplinary teams to leverage novel AI models and frameworks to address over 20 national challenges spanning advanced manufacturing, biotechnology, critical materials, nuclear energy, and quantum information science. “The Genesis Mission has caught the imagination of our scientific and engineering communities to tackle national challenges in the age of AI,” said Darío Gil, Under Secretary for Science and Genesis Mission Director. “With these investments we seek breakthrough ideas and novel collaborations leveraging the scientific prowess of our National Laboratories, the private sector, universities, and science philanthropies.” The RFA is open to interdisciplinary teams from DOE National Laboratories, U.S. industry, and academia. Phase I awards will range from $500,000 to $750,000 and will support a nine-month project period. Phase II awards will range from $6 million to $15 million over a three-year project period. Teams may apply directly to either phase in FY 2026, and successful Phase I teams will be eligible to compete for larger Phase II awards in future cycles. Phase I applications and Phase II letters of intent are due April 28, 2026. Phase II applications are due May 19, 2026. DOE plans to hold an informational webinar about this RFA on March 26, 2026. For full eligibility, application instructions, and challenge details, see the official NOFO: DE-FOA-0003612.

Read More »

Chip wafer shortage will run through 2030 as AI demand overwhelms supply: SK Hynix chief

“This is no longer a cyclical imbalance. It is a structural reallocation of the memory market driven by AI infrastructure economics,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “The biggest mistake right now is to view this as a wafer or DRAM shortage. The constraint is systemic.” Shrish Pant, director analyst at Gartner, offered a more nuanced read. A 2030 horizon, he said, assumes AI demand grows without interruption — a scenario that is not guaranteed. “HBM wafer reallocation is very real and is definitely impacting the market till the end of 2027,” Pant said. “I see a sustained demand for HBM to continue to grow, with more complex, high-performance HBM keeping prices higher.” He added that some rationalisation in AI infrastructure spending cannot be ruled out, and that traditional DRAM prices could improve by 2028 as new fabs — including Samsung’s P5, SK Hynix’s Yongin facility, and Micron’s Boise expansion — come online, though prices would remain above 2025 levels. What makes this shortage different from previous memory cycles is supplier behaviour. Gogia pointed out that memory vendors are locking in multi-year agreements, committing future HBM output well in advance — a pattern inconsistent with cyclical markets. “This is how a strategic resource market behaves when demand visibility is high, and margins are concentrated in a specific segment,” he said. IDC, in a February analysis, projected that 2026 DRAM and NAND supply growth would come in at 16% and 17% year-on-year, respectively, well below historical norms, a consequence of Samsung, SK Hynix, and Micron reallocating cleanroom capacity toward higher-margin AI products. Enterprise buyers caught in the crossfire That capacity reallocation is now working its way through enterprise procurement, creating what Gogia described as a two-tier market: hyperscalers and sovereign-scale buyers who secure capacity early, and

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE