Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Bitcoin

Datacenter

Energy


Featured Articles

Securing digital assets against future threats

In partnership with Ledger

Read More »

The Gigawatt Bottleneck: Power Constraints Define AI Data Center Growth

Power is rapidly becoming the defining constraint on the next phase of data center growth. Across the industry, developers and hyperscalers are discovering that the biggest obstacle to deploying AI infrastructure is no longer capital, land, or connectivity. It’s electricity. In major markets from Northern Virginia to Texas, grid interconnection timelines are stretching out for years as utilities struggle to keep pace with a surge in large-load requests from AI-driven infrastructure. A new industry analysis from Bloom Energy reinforces that emerging reality. The company’s 2026 Data Center Power Report finds that electricity availability has moved from a planning consideration to a defining boundary on data center expansion, transforming site selection, power strategies, and the design of next-generation AI campuses. Based on surveys of hyperscalers, colocation providers, utilities, and equipment suppliers conducted through 2025, the report concludes that the determinants of data center growth are changing in the AI era. Across the industry, the result is a structural shift in how data centers are planned, financed, and powered. Industry executives interviewed for the report say the shift is already visible in real-world development decisions. “We’re seeing a geographic shift as certain regions become more power-friendly and therefore more attractive for data center construction,” said a hyperscaler energy executive quoted in the report, noting that developers are increasingly prioritizing markets where large blocks of electricity can be secured quickly and predictably.

AI Load Is Accelerating Faster Than the Grid

Bloom’s analysis suggests that U.S. data center IT load could grow from roughly 80 gigawatts in 2025 to about 150 gigawatts by 2028, effectively doubling within three years as AI training clusters and inference infrastructure expand. That surge is already showing up in grid planning models.
The Electric Reliability Council of Texas (ERCOT), which oversees the Texas power market, now forecasts that statewide
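Bloom's 80 GW to 150 GW projection implies a specific compound growth rate, which can be sanity-checked in a few lines. This is illustrative arithmetic only, using the figures quoted in the article:

```python
# Implied compound annual growth rate (CAGR) for Bloom's projection:
# ~80 GW of U.S. data center IT load in 2025 -> ~150 GW by 2028.
start_gw, end_gw, years = 80.0, 150.0, 3

cagr = (end_gw / start_gw) ** (1 / years) - 1   # annualized growth rate
total_growth = end_gw / start_gw                # multiple over the full period

print(f"Implied CAGR: {cagr:.1%}")                               # ≈ 23.3% per year
print(f"Total growth over {years} years: {total_growth:.2f}x")   # ≈ 1.88x, i.e. "effectively doubling"
```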

Read More »

PJM Moves to Redefine Behind-the-Meter Power for AI Data Centers

PJM Interconnection is moving to rewrite how behind-the-meter power is treated across its grid, signaling a major shift as AI-scale data centers push electricity demand into territory the current regulatory framework was never designed to handle. For years, PJM’s retail behind-the-meter generation rules allowed customers with onsite generation to “net” their load, reducing the amount of demand counted for transmission and other grid-related charges. The framework dates back to 2004, when behind-the-meter generation was typically associated with smaller industrial facilities or campus-style energy systems. PJM now argues that those assumptions no longer hold. The arrival of very large co-located loads, particularly hyperscale and AI data centers seeking hundreds of megawatts of power on accelerated timelines, has exposed gaps in how the system accounts for and plans around those facilities. In February 2026, PJM asked the Federal Energy Regulatory Commission to approve a tariff rewrite that would sharply limit how new large loads can rely on legacy netting rules. The move reflects a broader challenge facing grid operators as the rapid expansion of AI infrastructure begins to collide with planning frameworks built for a far slower era of demand growth. The proposal follows directly from a December 18, 2025 order from FERC finding that PJM’s existing tariff was “unjust and unreasonable” because it lacked clear rates, terms, and conditions governing co-location arrangements between large loads and generating facilities. Rather than prohibiting co-location, the commission directed PJM to create transparent rules allowing data centers and other large consumers to pair with generation while still protecting system reliability and other ratepayers. In essence, FERC told PJM not to shut the door on these arrangements, but to stop improvising and build a formal framework capable of supporting them. 
Why Behind-the-Meter Power Matters

Behind-the-meter arrangements have become one of the most attractive strategies for hyperscale

Read More »

Meta’s Expanded MTIA Roadmap Signals a New Phase in AI Data Center Architecture

Silicon as a Data Center Design Tool

Custom silicon also allows hyperscale operators to shape the physical characteristics of the infrastructure around it. Traditional GPU platforms often arrive with fixed power envelopes and thermal constraints. But internally designed accelerators allow companies like Meta to tailor chips to the rack-level power and cooling budgets of their own data center architecture. That flexibility becomes increasingly important as AI infrastructure pushes power densities far beyond traditional enterprise deployments. Custom accelerators like MTIA can be engineered to fit within the liquid-to-chip cooling frameworks now emerging in hyperscale AI racks. These systems circulate coolant directly across cold plates attached to processors, removing heat far more efficiently than air cooling and enabling higher compute densities. For operators running thousands of racks across multiple campuses, small improvements in performance-per-watt can translate into enormous reductions in total power demand.

Software-Defined Power

One of the subtler advantages of custom silicon lies in how it interacts with data center power systems. By controlling chip-level power management features such as power capping and workload throttling, operators can fine-tune how servers consume electricity inside each rack. This creates opportunities to safely run racks closer to their electrical limits without triggering breaker trips or thermal overloads. In practice, that means data center operators can extract more useful compute from the same electrical infrastructure. At hyperscale, where campuses may draw hundreds of megawatts, these efficiencies have a direct impact on capital planning and grid interconnection requirements.

The Interconnect Layer

AI accelerators do not operate in isolation. Their effectiveness depends heavily on how they connect to memory, storage, and other compute nodes across the cluster.
Industry analysts expect next-generation inference platforms to rely increasingly on high-speed interconnect technologies such as CXL (Compute Express Link) and advanced networking fabrics to support disaggregated memory architectures and low-latency
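The "software-defined power" idea described above, capping chip power so racks can run closer to their breaker limits, can be sketched as a simple allocation rule. This is a minimal illustrative sketch: the function name, rack sizes, and proportional-scaling policy are hypothetical, not Meta's actual implementation:

```python
def allocate_power_caps(rack_limit_w: float, requested_w: list[float]) -> list[float]:
    """Scale per-server power caps down proportionally when total
    requested draw would exceed the rack's electrical limit."""
    total = sum(requested_w)
    if total <= rack_limit_w:
        return requested_w  # headroom available: no throttling needed
    scale = rack_limit_w / total
    return [w * scale for w in requested_w]

# Example: a hypothetical 30 kW rack with ten servers each requesting 3.3 kW.
caps = allocate_power_caps(30_000, [3_300] * 10)
print(caps[0], sum(caps))  # each cap ≈ 3,000 W, keeping the rack within 30 kW
```

Real implementations would layer in thermal telemetry and per-workload priorities, but the core idea is the same: enforce a rack-level budget in software rather than oversizing the electrical infrastructure.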

Read More »

From Real Estate to AI Factories: 7×24 Exchange’s Michael Siteman on Power, Politics, and the New Logic of Data Center Development

The data center industry’s explosive growth in the AI era is transforming how projects are conceived, financed, and built. What was once a real estate-driven business has become something far more complex: an engineering and infrastructure challenge defined by power availability, network topology, and local politics. That was one of the key themes in this recent episode of the Data Center Frontier Show podcast, where Editor-in-Chief Matt Vincent spoke with Michael Siteman, President of Prodigious Proclivities and a longtime leader and board member within 7×24 Exchange International. Drawing on decades of experience spanning brokerage, development, connectivity strategy, and infrastructure advisory, Siteman offered a field-level view of how the industry is adapting to the demands of AI-driven infrastructure. “The business used to be a pure real estate play,” Siteman said. “Now it’s a systems engineering problem. It’s power, network topology, the real estate itself, and political risk—all of these factors that have to work together.”

Site Selection Becomes Systems Engineering

For much of the early data center era, location decisions revolved around traditional real estate considerations: available buildings, proximity to customers, and nearby fiber connectivity. That logic has fundamentally changed. “Years ago, the question was: Is there a building? Are there carriers nearby?” Siteman recalled. “Now it’s completely different. Power availability, network topology, community acceptance—these are the variables that define whether a site works.” Utilities themselves have become gatekeepers in the process. “You go to a utility and ask if there’s power,” he explained. “They might say, ‘We might have power, but you have to pay us to study whether we actually have power.’” In many regions experiencing rapid digital infrastructure expansion, the answer increasingly comes back the same: there simply isn’t enough grid capacity available.
Power Becomes the Project

In the gigawatt-scale era of AI infrastructure, power strategy has moved

Read More »

Community Opposition Emerges as New Gatekeeper for AI Data Center Expansion

The rapid global buildout of AI infrastructure is colliding with a new constraint that hyperscalers cannot solve with capital or GPUs: local opposition. In the first months of 2026, community resistance has already begun reshaping the development pipeline. A February analysis by Sightline Climate estimates that 30–50 percent of the data center capacity expected to come online in 2026 may not be delivered on schedule, reflecting a growing set of constraints that now include power availability, permitting challenges, and increasingly organized local opposition. The financial stakes are already substantial. Recent reporting indicates that tens of billions of dollars in planned data center development have been delayed or halted amid community pushback, including an estimated $98 billion worth of projects delayed or blocked in a single quarter of 2025, according to research cited by Data Center Watch. What had been framed throughout 2024 and 2025 as an inevitable expansion of hyperscale campuses, gigawatt-scale power agreements, and AI “factory” clusters is now encountering a different kind of gatekeeper: the communities expected to host the infrastructure. The shift is already visible in project outcomes. Across the United States, multiple projects were canceled, blocked, or fundamentally reshaped in the opening months of 2026 due to organized local opposition. Reporting from The Guardian found that 26 data center projects were canceled in December and January, compared with just one cancellation in October, suggesting that community resistance campaigns are increasingly capable of stopping projects before construction begins. At the same time, local governments are responding to community pressure with moratoriums, zoning restrictions, and permitting delays that can stall projects long enough to jeopardize financing or push developers to seek more favorable jurisdictions. 
While opposition to data center development is not new, the scale, coordination, and success rate of these efforts suggest a structural shift in how

Read More »


Brent retreats from highs after Trump signals Iran war nearing end

Oil futures eased from recent highs Tuesday as markets reacted to comments from US President Donald Trump suggesting the war with Iran may be nearing its conclusion, easing concerns about prolonged disruptions to Middle East crude supplies. Brent crude had climbed above $100/bbl amid escalating tensions in the region and fears that the war could prolong disruptions to shipments through the Strait of Hormuz—one of the world’s most critical energy chokepoints and a transit route for roughly one-fifth of global oil supply. Prices pulled back after Pres. Trump said the war was “almost done,” prompting traders to reassess the risk premium that had built into crude markets during the latest escalation.
The earlier gains were driven by war-related disruptions to tanker traffic in the Strait of Hormuz, which raised concerns about wider supply losses from major Gulf oil producers. While the latest remarks helped calm markets, analysts note that geopolitical risks remain elevated and price volatility is likely to persist as traders monitor developments in the region. Any renewed escalation could quickly send crude prices higher again.

Read More »

Southwest Arkansas lithium project moves toward FID with 10-year offtake deal

Smackover Lithium, a joint venture between subsidiaries of Standard Lithium Ltd. and Equinor ASA, signed the first commercial offtake agreement for the South West Arkansas Project (SWA Project) with commodities group Trafigura Trading LLC. Under the terms of a binding take-or-pay offtake agreement, the JV will supply Trafigura with 8,000 metric tonnes/year (tpy) of battery-quality lithium carbonate (Li2CO3) over a 10-year period, beginning at the start of commercial production. Smackover Lithium is expected to achieve final investment decision (FID) for the project, which aims to use direct lithium extraction (DLE) technology to produce lithium from brine resources in the Smackover formation in southern Arkansas, in 2026, with first production anticipated in 2028. The project encompasses about 30,000 acres of brine leases in the region, with the initial phase of project development focused on production from the 20,854-acre Reynolds Brine Unit. Front-end engineering design was completed in support of a definitive feasibility study with a principal recommendation that the project is ready to progress to FID. While pricing terms of the Trafigura deal were kept confidential, Standard Lithium said they are “structured to support the anticipated financing for the project.” The JV is seeking to finalize customer offtake agreements for roughly 80% of the 22,500-tpy nameplate lithium carbonate capacity for the initial phase of the project. This agreement represents over 40% of the targeted offtake commitments. Formed in 2024, Smackover Lithium is developing multiple DLE projects in Southwest Arkansas and East Texas. Standard Lithium is operator of the projects with 55% interest. Equinor holds the remaining 45% interest.
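The offtake percentages in the announcement can be cross-checked in a few lines. The figures come from the article; the arithmetic is purely illustrative:

```python
nameplate_tpy = 22_500    # initial-phase nameplate lithium carbonate capacity (tpy)
target_fraction = 0.80    # JV aims to contract ~80% of nameplate capacity
trafigura_tpy = 8_000     # volume under the Trafigura take-or-pay agreement (tpy)

targeted_tpy = nameplate_tpy * target_fraction   # 18,000 tpy of targeted commitments
share_of_target = trafigura_tpy / targeted_tpy   # ≈ 0.444

print(f"{share_of_target:.0%} of targeted offtake commitments")  # ≈ 44%, i.e. "over 40%"
```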

Read More »

Equinor makes oil and gas discoveries in the North Sea

Equinor Energy AS discovered oil in the Troll area and gas and condensate in the Sleipner area of the North Sea. The Byrding C discovery was made by well 35/11-32 S in production license (PL) 090 HS, 5 km northwest of Fram field in the Troll area. The well was drilled by the COSL Innovator rig in 373 m of water to 3,517 m TVD subsea. It was terminated in the Heather formation from the Middle Jurassic. The primary exploration target was to prove petroleum in reservoir rocks from the Late Jurassic deep marine equivalent to the Sognefjord formation. The secondary target was to prove petroleum and investigate the presence of potential reservoir rocks in two prospective intervals from the Middle Jurassic in deep marine equivalents to the Fensfjord formation. The well encountered a 22-m oil column in sandstone layers in the Sognefjord formation with a total thickness of 82 m, of which 70 m was sandstone with moderate to good reservoir properties. The oil-water contact was encountered. The secondary exploration target in the Fensfjord formation did not prove reservoir rocks or hydrocarbons. The well was not formation-tested, but data and samples were collected. The well has been permanently plugged. Preliminary estimates indicate the size of the discovery is 4.4–8.2 MMboe. Oil discovered in Byrding C will be produced using existing or future infrastructure in the area. The Frida Kahlo discovery was drilled from the Sleipner B platform in production license PL 046 northwest of Sleipner Vest and is estimated to contain 5–9 MMboe of gas and condensate. The well will be brought on stream as early as April. The four most recent exploration wells in the Sleipner area, drilled over a 3-month period, include Lofn, Langemann, Sissel, and Frida Kahlo. All have proven gas and condensate in the Hugin formation, with combined estimated

Read More »

IEA launches record strategic oil release as Middle East war disrupts supply

The International Energy Agency (IEA) on Mar. 11 approved the largest emergency oil stock release in its history, making 400 million bbl available from member-country reserves in response to market disruptions tied to the war in the Middle East. The coordinated action, agreed unanimously by the IEA’s 32 member countries, is intended to ease supply pressure and temper price volatility as crude markets react to disrupted flows through the Strait of Hormuz. “The conflict in the Middle East is having significant impacts on global oil and gas markets, with major implications for energy security, energy affordability and the global economy,” IEA executive director Fatih Birol said. The release more than doubles the previous IEA record set in 2022, when member countries collectively made 182.7 million bbl available following Russia’s invasion of Ukraine. Under the IEA system, member countries are required to maintain emergency oil stocks equal to at least 90 days of net imports, giving the agency a mechanism to respond when severe disruptions threaten global supply. The move comes after crude prices surged amid concerns that the US-Iran war could lead to prolonged disruption of exports from the Gulf. Despite the planned stock release, traders remain uncertain about whether reserve barrels alone will be enough to offset losses if the disruption persists. IEA said the emergency barrels will be supplied to the market from government-controlled and obligated industry stocks held across member countries. The action marks the sixth coordinated stock release in the agency’s history and underscores the seriousness of the current supply shock. Earlier in the day, Japanese Prime Minister Sanae Takaichi said that Japan might start using its strategic oil reserves as early as next week, citing Japan’s unusually high dependence on Middle Eastern crude oil.

Read More »

Infographic: Strait of Hormuz energy trade 2025

Coordinated attacks Feb. 28 by the US and Israel on Iran and the since-escalated conflict have nearly halted shipping traffic through the Strait of Hormuz, which typically carries about 20% of the world’s crude oil and natural gas. OGJ Statistics Editor Laura Bell-Hammer compiled data to showcase 2025 energy trade through the critical transit chokepoint.

Read More »

BOEM: US OCS holds 65.8 billion bbl of technically recoverable resources

The US Outer Continental Shelf (OCS) holds mean undiscovered technically recoverable resources (UTRR) of 65.8 billion bbl of oil and 218.43 tcf of natural gas, the US Bureau of Ocean Energy Management (BOEM) said Mar. 9. Based on current production trends, these undiscovered resources represent the potential for 100 or more years of energy production from the OCS, BOEM said. A large portion of undiscovered OCS resources is located offshore in the Gulf of Mexico and Alaska, according to the report. The offshore Gulf holds 26.9 billion bbl of oil and 45.59 tcf of gas, while offshore Alaska holds an estimated mean 24.1 billion bbl of oil and 122.29 tcf of gas. Offshore Pacific holds a mean UTRR of 10.3 billion bbl of oil and 16.2 tcf of gas, the report said. Offshore Atlantic holds a mean UTRR of 10.3 billion barrels of oil and 16.2 trillion cubic feet of gas. The assessment also evaluates the impact of prices on hydrocarbon recovery. Alaska is particularly price-sensitive, with mean undiscovered economically recoverable resources (UERR) negligible until prices average $100/bbl and $17.79/Mcf. At those levels, the mean UERR stands at 6.25 billion bbl and 13.25 tcf. At $160/bbl and $28.47/Mcf, recoverable resources jump to 14.67 billion bbl and 58.78 tcf. In the Gulf of Mexico, the mean UERR is 17.51 billion bbl of oil and 13.71 tcf at average prices of $60/bbl and $3.20/Mcf, increasing to 20.51 billion bbl and 17.49 tcf at average prices of $100/bbl and $5.34/Mcf, respectively. BOEM conducts a national resource assessment every 4 years to understand the “distribution of undiscovered oil and gas resources on the OCS” and identify opportunities for additional oil and gas exploration and development. “The Outer Continental Shelf holds tremendous resource potential,” said BOEM Acting Director Matt Giacona. “This
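The price-sensitivity figures quoted above can be collected into a small lookup table; this sketch simply transcribes the scenarios reported in the article and is not the full BOEM dataset:

```python
# Mean undiscovered economically recoverable resources (UERR) by price
# scenario, as quoted from the BOEM assessment above.
# Key: (region, oil price $/bbl) -> (oil in billion bbl, gas in tcf)
UERR = {
    ("Alaska", 100): (6.25, 13.25),
    ("Alaska", 160): (14.67, 58.78),
    ("Gulf of Mexico", 60): (17.51, 13.71),
    ("Gulf of Mexico", 100): (20.51, 17.49),
}

def uerr(region: str, oil_price: int) -> tuple[float, float]:
    """Return (oil in billion bbl, gas in tcf) for a reported scenario."""
    return UERR[(region, oil_price)]

oil, gas = uerr("Gulf of Mexico", 100)
print(f"{oil} billion bbl, {gas} tcf")  # 20.51 billion bbl, 17.49 tcf
```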

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
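As a back-of-the-envelope check on those Bloomberg Intelligence figures, the implied annual growth rate works out as follows (the two-year compounding frame is my assumption, not a figure from the report):

```python
# CAGR of estimated big-tech AI capex, using the Bloomberg Intelligence
# figures quoted above ($110B in 2023 -> $200B in 2025).
capex_2023, capex_2025 = 110, 200  # $ billions
years = 2
cagr = (capex_2025 / capex_2023) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 35% per year
```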

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has become a regular at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will arrive this fall and beyond. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell the buildings as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. © Supplied by CBRE. The Aberdeen headquarters of Taqa. Image: CBRE The North Sea headquarters of Middle-East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. © Shutterstock. Aberdeen city centre. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

Read More »

2025 ransomware predictions, trends, and how to prepare

Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025: ● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound increasingly realistic as they adopt local accents and dialects to enhance credibility and success rates. ● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny. ● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected. ● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days. ● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration among groups that have adopted a sophisticated profit-sharing model using Ransomware-as-a-Service. To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies. 
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops
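The four-business-day SEC reporting window mentioned above can be sketched as a small date calculation; this is a minimal illustration that skips weekends only (US federal holidays would also need to be excluded in practice):

```python
from datetime import date, timedelta

def sec_reporting_deadline(determination: date, business_days: int = 4) -> date:
    """Deadline under the SEC rule requiring public companies to report a
    material cyber incident within four business days of determining
    materiality. Weekends are skipped; holiday handling is omitted."""
    d = determination
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday only
            remaining -= 1
    return d

# Materiality determined on Thursday, Jan. 2, 2025 ->
# deadline the following Wednesday.
print(sec_reporting_deadline(date(2025, 1, 2)))  # 2025-01-08
```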

Read More »

Why physical AI is becoming manufacturing’s next advantage

In partnership with Microsoft and NVIDIA

For decades, manufacturers have pursued automation to drive efficiency, reduce costs, and stabilize operations. That approach delivered meaningful gains, but it is no longer enough. Today’s manufacturing leaders face a different challenge: how to grow amid labor constraints, rising complexity, and increasing pressure to innovate faster without sacrificing safety, quality, or trust. The next phase of transformation will not be defined by isolated AI tools or individual robots, but by intelligence that can operate reliably in the physical world. This is where physical AI—intelligence that can sense, reason, and act in the real world—marks a decisive shift. And it is why Microsoft and NVIDIA are working together to help manufacturers move from experimentation to production at industrial scale.

The industrial frontier: Intelligence and trust, not just automation

Most early AI adoption focused on narrow optimization: automating tasks, improving utilization, and cutting costs. While valuable, that phase often created new friction, including skills gaps, governance concerns, and uncertainty about long‑term impact. Furthermore, the use cases were plentiful but not especially strategic.
The industrial frontier represents a different approach. Rather than asking how much work machines can replace, frontier manufacturers ask how AI can expand human capability, accelerate innovation, and unlock new forms of value while remaining trustworthy and controllable. Across industries, companies that successfully move into this frontier phase share two non‑negotiables:
Intelligence: AI systems must understand how the business actually handles its data, workflows, and institutional knowledge.

Trust: As AI begins to act in high‑stakes environments, organizations must retain security, governance, and observability at every layer.

Without intelligence, AI becomes generic. Without trust, adoption stalls.

Why manufacturing is the proving ground for physical AI

Manufacturing is uniquely positioned at the center of this shift. AI is no longer confined to planning or analytics. It is moving into physical execution: coordinating machines, adapting to real‑world variability, and working alongside people on the factory floor. Robotics, autonomous systems, and AI agents must now perceive, reason, and act in dynamic environments. This transition exposes a critical gap. Traditional automation excels at repetition but struggles with adaptability. Human workers bring judgment and context but are constrained by scale. Physical AI closes that gap by enabling human‑led, AI‑operated systems, where people set intent and intelligent systems execute, learn, and improve over time. Humans are essential for scaled success.

Microsoft and NVIDIA: Accelerating physical AI at scale

Physical AI cannot be delivered through point solutions. It requires agentic-driven, enterprise-grade development, deployment, and operations toolchains and workflows that connect simulation, data, AI models, robotics, and governance into a coherent system. NVIDIA is building the AI infrastructure that makes physical AI possible, including accelerated computing, open models, simulation libraries, and robotics frameworks and blueprints that enable the ecosystem to build autonomous robotics systems that can perceive, reason, plan, and take action in the physical world. Microsoft complements this with a cloud and data platform designed to operate physical AI securely, at scale, and across the enterprise. 
Together, Microsoft and NVIDIA are enabling manufacturers to move beyond pilots toward production‑ready physical AI systems that can be developed, tested, deployed, and continuously improved across heterogeneous environments spanning the product lifecycle, factory operations, and supply chain.

From intelligence to action: Human-agent teams in the factory

At the industrial frontier, AI is not a standalone system, but a digital teammate.

When AI agents are grounded in the proper operational data, embedded in human workflows, and governed end to end, they can assist with tasks such as:

Optimizing production lines in real time
Coordinating maintenance and quality decisions
Adapting operations to supply or demand disruptions
Accelerating engineering and product lifecycle decisions

For example, manufacturers are beginning to use simulation‑grounded AI agents to evaluate production changes virtually before deploying them on the factory floor, reducing risk while accelerating decision‑making. Crucially, frontier manufacturers design these systems so humans remain in control. AI executes, monitors, and recommends, while people provide intent, oversight, and judgment. This balance allows organizations to move faster without losing confidence or control.

The role of trust in scaling physical AI

As physical AI systems scale, trust becomes the limiting factor. Manufacturers must ensure that AI systems are secure, observable, and operating within policy, especially when they influence safety‑critical or mission‑critical processes. Governance cannot be an afterthought; it must be engineered into the platform itself. This is why frontier manufacturers treat trust as a first‑class requirement, pairing innovation with visibility, compliance, and accountability. Only then can physical AI move from promising demonstrations to enterprise‑wide deployment.

Why this moment matters—and what’s next

The convergence of AI agents, robotics, simulation, and real‑time data marks an inflection point for manufacturing. What was once experimental is becoming operational. What was once siloed is becoming connected. At NVIDIA GTC 2026, Microsoft and NVIDIA will demonstrate how this collaboration supports physical AI systems that manufacturers can deploy today and scale responsibly tomorrow. From simulation‑driven development to real‑world execution, the focus is on helping manufacturers cross the industrial frontier with confidence.
For manufacturing leaders, the question is no longer whether physical AI will reshape operations, but how quickly they can adopt it responsibly, at scale, and with trust built in from the start. Discover more with Microsoft at NVIDIA GTC 2026. This content was produced by Microsoft. It was not written by MIT Technology Review’s editorial staff.

Read More »

The Download: how AI is used for military targeting, and the Pentagon’s war on Claude

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Defense official reveals how AI chatbots could be used for targeting decisions

The US military might use generative AI systems to rank targets and recommend which to strike first, according to a Defense Department official. A list of possible targets could first be fed into a generative AI system that the Pentagon is fielding for classified settings. Humans might then ask the system to analyze the information and prioritize the targets. They would then be responsible for checking and evaluating the results and recommendations. OpenAI’s ChatGPT and xAI’s Grok could soon be at the center of exactly these sorts of high-stakes military decisions. Read the full story. 
—James O’Donnell

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Pentagon’s CTO claims Claude would “pollute” the defense supply chain
He blamed a “policy preference” that’s baked into the model. (CNBC)
+ Anthropic is reeling from OpenAI’s “compromise” with the DoD. (MIT Technology Review)

2 An ex-DOGE staffer has been accused of stealing social security data
Then taking the information to his new job in the IT division of a government contractor. (Wired)
+ He allegedly used a thumb drive to steal the data. (Washington Post)

3 Ukraine is offering its battlefield data for AI training
Allies can access the data to train drones and other UAVs. (Reuters)
+ Europe has a drone-filled vision for the future of war. (MIT Technology Review)

4 Meta has postponed its latest AI launch over performance issues
It fell short of rival models from Google, OpenAI, and Anthropic. (NYT $)
+ The company’s former AI chief is betting against LLMs. (MIT Technology Review)

5 X could be breaching sanctions on Iran
An account for Iran’s new supreme leader may break US rules. (Engadget)
+ Hacker group Handala has become the face of Iranian cyberwarfare. (Wired)
+ AI is turning the conflict into theater. (MIT Technology Review)

6 A landmark social media addiction trial is wrapping up
It’ll decide whether the platforms are liable for harms caused to children. (The Guardian)
+ AI companions are the next stage of digital addiction. (MIT Technology Review)

7 Western AI models have “failed spectacularly” on agriculture in the Global South
The biggest problem? They’re not trained on local data. (Rest of World)

8 Internet outages in Moscow are sparking surging sales of pagers
The disruptions have been blamed on new tests of web controls. (Bloomberg $)

9 Why is China obsessed with OpenClaw?
Lobster-mania is spreading to the general public. (SCMP)
+ Tech-savvy “tinkerers” are cashing in on the craze. (MIT Technology Review)

10 Hollywood has soured on Silicon Valley
Movies and TV shows have swapped eccentric founders for megalomaniac moguls. (NYT $)

Quote of the day

“We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.”

—OpenAI CEO Sam Altman makes a new pitch to investors at a BlackRock event, Gizmodo reports.

One More Thing

How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe

Latvia’s annual national defense exercises took place in September and October, as the Ukraine-Russia war nears its third anniversary. GATIS INDRĒVICS/LATVIAN MINISTRY OF DEFENSE

When Latvian startup Global Wolf Motors first pitched the idea of a military scooter, it was met with skepticism—and a wall of bureaucracy. Then Russia launched its full-scale invasion of Ukraine in February 2022, and everything changed. Suddenly, Ukrainian combat units wanted any equipment they could get their hands on, and they were willing to try out ideas that might not have made the cut in peacetime. 
Within weeks, the scooters were on the front line—and even behind it, being used on daring reconnaissance missions. It signaled that a new product category for companies along Ukraine’s borders had opened: civilian technologies repurposed for military needs. Read the full story.  —Peter Guest 
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ A new mini magnet could slash the costs of MRIs and nuclear fusion.
+ This interactive map of Earth offers new routes to facts about our planet.
+ Escape the news cycle with this deep dive into the power of fantasy and nature. (Big thanks to reader and MIT alum Vicki for the find!)
+ Reports of reading’s death are greatly exaggerated. 

Read More »

Future AI chips could be built on glass

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers. This year, a South Korean company called Absolics is planning to start commercial production of special glass panels designed to make next-generation computing hardware more powerful and energy efficient. Other companies, including Intel, are also pushing forward in this area. If all goes well, such glass technology could reduce the energy demands of the sorts of high-performance computing chips used in AI data centers—and it could eventually do the same for consumer laptops and mobile devices if production costs fall. The idea is to use glass as the substrate, or layer, on which multiple silicon chips are connected. This form of “packaging” is an increasingly popular way to build computing hardware, because it lets engineers combine specialized chips designed for specific functions into a single system. But it presents challenges, including the fact that hardworking chips can run so hot they physically warp the substrate they’re built on. This can lead to misaligned components and may reduce how efficiently the chips can be cooled, leading to damage or premature failure.  “As AI workloads surge and package sizes expand, the industry is confronting very real mechanical constraints that impact the trajectory of high-performance computing,” says Deepak Kulkarni, a senior fellow at the chip design company Advanced Micro Devices (AMD). “One of the most fundamental is warpage.” That’s where glass comes in. It can handle the added heat better than existing substrates, and it will let engineers keep shrinking chip packages—which will make them faster and more energy efficient. It “unlocks the ability to keep scaling package footprints without hitting a mechanical wall,” says Kulkarni. 
Momentum is building behind the shift. Absolics has finished building a factory in the US that is dedicated to producing glass substrates for advanced chips and expects to begin commercial manufacturing this year. The US semiconductor manufacturer Intel is working toward incorporating glass in its next-generation chip packages, and its research has spurred other companies in the chip packaging supply chain to invest in it as well. South Korean and Chinese companies are among the early adopters. “Historically, this is not the first attempt to adopt glass in semiconductor packaging,” says Bilal Hachemi, senior technology and market analyst at the market research firm Yole Group. “But this time, the ecosystem is more solid and wider; the need for glass-based [technology] is sharper.”

Fragile but mighty
Chip packaging has relied on organic substrates such as fiberglass-reinforced epoxy since the 1990s, says Rahul Manepalli, vice president of advanced packaging at Intel. But electrochemical complications limit how closely designers can place drilled holes to create copper-coated signal and power connections between the chips and the rest of the system. Chip designers must also account for the unpredictable shrinkage and distortion that organic substrates undergo as chips heat up and cool down. “We realized about a decade ago that we are going to have some limitations with organic substrates,” says Manepalli.

These glass substrate test units were photographed at an Intel facility in Chandler, Arizona, in 2023. INTEL CORPORATION

Glass may help overcome a lot of these limitations. Its thermal stability could allow engineers to create 10 times more connections per millimeter than organic substrates, says Manepalli. With denser connections, Intel’s designers can then stuff 50% more silicon chips into the same package area, improving computational capability. The denser connections also enable more efficient routing for the copper wires that deliver power to the chip. And the fact that glass dissipates heat more efficiently allows for chip designs that reduce overall power consumption.  “The benefits of glass core substrates are undeniable,” says Manepalli. “It’s clear that the benefits will drive the industry to make this happen sooner rather than later, and we want to be one of the first ones who do it.”  However, working with glass creates its own challenges. For one thing, it’s fragile. Glass substrates for data center chip packages are made from panels that are only about 700 micrometers to 1.4 millimeters thick, which leaves them susceptible to cracking or even shattering, says Manepalli. 
Researchers at Intel and other organizations have spent years figuring out how to use other materials and special tools to integrate the glass panels safely into semiconductor manufacturing processes.  Now, Manepalli says, Intel’s research and development teams are reliably fabricating glass panels and churning out test chip packages that incorporate glass—and in early 2025 they demonstrated that a functional device with a glass core substrate could boot up the Windows operating system. It’s a significant improvement from the early testing days, when hundreds of glass panels got cracked every couple of days, he says. Semiconductor manufacturers already use glass for more limited purposes, such as temporary support structures for silicon wafers. But the independent market research firm IDTechEx estimates there’s a big market for glass substrates, one that could boost the semiconductor market for glass from $1 billion in 2025 to as much as $4.4 billion by 2036.  The material could have additional benefits if it takes off. Glass can be made astoundingly smooth—5,000 times smoother than organic substrates. This would eliminate defects that can arise as metal gets layered onto semiconductors, says Xiaoxi He, a research analyst at IDTechEx. Defects in these layers can worsen chips’ performance or even render them unusable.   Glass could also help speed the movement of data. The material can guide light, which means chip designers could use it to build high-speed signal pathways directly into the substrate. Glass “holds enormous potential for the future of energy-efficient AI compute,” says Kulkarni at AMD, because a light-based system could move signals around with far less energy than the “power-hungry” copper pathways that are currently used to carry signals between chips in a package.

A panel pivot

Early research on glass packaging started at the 3D Systems Packaging Research Center at the Georgia Institute of Technology in 2009. The university eventually partnered with Absolics, a subsidiary of SKC, a South Korean company that produces chemicals and advanced materials. SKC constructed a semiconductor facility for manufacturing glass substrates in Covington, Georgia, in 2024, and the glass substrate partnership between Absolics and Georgia Tech was eventually awarded two grants in the same year—worth a combined $175 million—through the US government’s CHIPS for America program, established under the administration of President Joe Biden.

An Absolics employee monitors production of an early version of the company’s glass substrate. COURTESY OF ABSOLICS INC

Now Absolics is moving toward commercialization; it plans to start manufacturing small quantities of glass substrates for customers this year. The company has led the way in commercializing glass substrates, says Yongwon Lee, a research engineer at Georgia Tech who is not directly involved in the commercial partnership with Absolics. Absolics says its facility can currently produce a maximum of 12,000 square meters of glass panels a year. That’s enough, Lee estimates, to provide glass substrates for between 2 million and 3 million chip packages the size of Nvidia’s H100 GPU. But the company isn’t alone. Lee says that multiple large manufacturers, including Samsung Electronics, Samsung Electro-Mechanics, and LG Innotek, have “significantly accelerated” their research and pilot production efforts in glass packaging over the past year. “This trend suggests that the glass substrate ecosystem is evolving from a single early mover to a broader industrial race,” he says. Other companies are pivoting to play more specialized roles in the glass substrate supply chain. 
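Lee's 2-to-3-million-package estimate can be reproduced with back-of-the-envelope arithmetic. The package dimensions below are my assumption for illustration, not a figure from the article:

```python
# Rough capacity check: annual panel area divided by per-package substrate
# area. The ~70 mm x 70 mm H100-class package size is an assumed figure.
annual_panel_area_m2 = 12_000           # Absolics' stated maximum output
package_side_mm = 70                    # assumed package edge length
package_area_m2 = (package_side_mm / 1000) ** 2

packages_per_year = annual_panel_area_m2 / package_area_m2
print(f"{packages_per_year / 1e6:.1f} million packages")  # ~2.4 million
```

The result lands squarely inside the 2-to-3-million range quoted above, which suggests the assumed package size is in the right ballpark (panel cutting losses are ignored here).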
In 2025, JNTC, a company that makes electrical connectors and tempered glass for electronics, established a facility in South Korea that’s capable of producing 10,000 semi-finished glass panels per month. Such panels include drilled holes for vertical electrical connections and thin metal layers coating the glass, but they require additional manufacturing work for installation in chip packages.  Last year, that South Korean facility began taking orders to supply semi-finished glass to both specialized substrate companies and semiconductor manufacturers. The company plans to expand the facility’s production in 2026 and open an additional manufacturing line in Vietnam in 2027.  Such industry actions show how quickly glass substrate technology is moving from prototype to commercialization—and how many tech players are betting that glass could be a surprisingly strong foundation for the future of computing and AI.
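The capacity figures above can be roughly sanity-checked. The per-package substrate size and panel utilization below are illustrative assumptions, not figures from Absolics or Nvidia; only the 12,000-square-meter annual capacity comes from the article.

```python
# Rough sanity check: how many chip packages could 12,000 square meters
# of glass panel yield per year? Substrate edge length and utilization
# are assumed values for illustration.

PANEL_AREA_M2_PER_YEAR = 12_000      # stated annual panel capacity
SUBSTRATE_CM_PER_SIDE = 6.5          # assumed substrate edge per package (cm)
PANEL_UTILIZATION = 0.90             # assumed usable fraction after cutting losses

area_per_package_cm2 = SUBSTRATE_CM_PER_SIDE ** 2
usable_area_cm2 = PANEL_AREA_M2_PER_YEAR * 10_000 * PANEL_UTILIZATION
packages_per_year = usable_area_cm2 / area_per_package_cm2
print(f"~{packages_per_year / 1e6:.1f} million packages per year")
```

Under these assumptions the result lands at roughly 2.6 million packages a year, consistent with Lee's 2-to-3-million estimate.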

Read More »

Defense official reveals how AI chatbots could be used for targeting decisions

The US military might use generative AI systems to rank lists of targets and make recommendations—which would be vetted by humans—about which to strike first, according to a Defense Department official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating.   A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background with MIT Technology Review to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations. OpenAI’s ChatGPT and xAI’s Grok could, in theory, be the models used for this type of scenario in the future, as both companies recently reached agreements for their models to be used by the Pentagon in classified settings. The official described this as an example of how things might work but would not confirm or deny whether it represents how AI systems are currently being used. Other outlets have reported that Anthropic’s Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela, but the official’s comments add insight into the specific role chatbots may play, particularly in accelerating the search for targets. They also shed light on the way the military is deploying two different AI technologies, each with distinct limitations.
Since at least 2017, the US military has been working on a “big data” initiative called Maven. It uses older types of AI, particularly computer vision, to analyze the oceans of data and imagery collected by the Pentagon. Maven might take thousands of hours of aerial drone footage, for example, and algorithmically identify targets. A 2024 report from Georgetown University showed soldiers using the system to select targets and vet them, which sped up the process to get approval for these targets. Soldiers interacted with Maven through an interface with a battlefield map and dashboard, which might highlight potential targets in one color and friendly forces in another. The official’s comments suggest that generative AI is now being added as a conversational chatbot layer—one the military may use to find and analyze data more quickly as it makes decisions like which targets to prioritize. 
Generative AI systems, like those that underpin ChatGPT, Claude, and Grok, are a fundamentally different technology from the AI that has primarily powered Maven. Built on large language models, they are much less battle-tested. And while Maven’s interface forced users to directly inspect and interpret data on the map, the outputs produced by generative AI models are easier to access but harder to verify.  The use of generative AI for such decisions is reducing the time required in the targeting process, added the official, who did not provide details when asked how much additional speed is possible if humans are required to spend time double-checking a model’s outputs. The use of military AI systems is under increased public scrutiny following the recent strike on a girls’ school in Iran in which more than 100 children died. Multiple news outlets have reported that the strike was from a US missile, though the Pentagon has said it is still under investigation. And while the Washington Post has reported that Claude and Maven have been involved in targeting decisions in Iran, there is no evidence yet to explain what role generative AI systems played, if any. The New York Times reported on Wednesday that a preliminary investigation found outdated targeting data to be partly responsible for the strike.  The Pentagon has been ramping up its use of AI across operations in recent months. It started offering nonclassified use of generative AI models, for tasks like analyzing contracts or writing presentations, to millions of service members back in December through an effort called GenAI.mil. But only a few generative AI models have been approved by the Pentagon for classified use.  The first was Anthropic’s Claude, which in addition to its use in Iran was reportedly used in the operations to capture Venezuelan leader Nicolas Maduro in January. 
But following recent disagreements between the Pentagon and Anthropic over whether Anthropic could restrict the military’s use of its AI, the Defense Department designated the company a supply chain risk and President Trump demanded on social media that the government stop using its AI products within six months. Anthropic is fighting the designation in court.  OpenAI announced an agreement on February 28 for the military to use its technologies in classified settings. Elon Musk’s company xAI has also reached a deal for the Pentagon to use its model Grok in such settings. OpenAI has said its agreement with the Pentagon came with limitations, though the practical effectiveness of those limitations is not clear.  If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).

Read More »

Building a strong data infrastructure for AI agent success

In partnership with SAP In the race to adopt and show value from AI, enterprises are moving faster than ever to deploy agentic AI as copilots, assistants, and autonomous task-runners. In late 2025, nearly two-thirds of companies were experimenting with AI agents, while 88% were using AI in at least one business function, up from 78% in 2024, according to McKinsey’s annual AI report. Yet, while early pilots often succeed, only one in 10 companies had actually scaled their AI agents. One major issue: AI agents are only as effective as the data foundation supporting them. Experts argue that most companies are seeing delays in implementing AI not because of shortcomings in the models, but because they lack data architectures that deliver the business context humans and agents need to use data reliably. Companies need to be ready with the right data architecture, and the next few months — years, at most — will be critical, says Irfan Khan, president and chief product officer of SAP Data & Analytics. “The only prediction anybody can reliably make is that we don’t know what’s going to happen in the years, months — or even weeks — ahead with AI,” he says. “To be able to get quick wins right now, you need to adopt an AI mindset and … ground your AI models with reliable data.”
While data has always been important for business, it will be even more so in the age of AI. The capabilities of agentic AI will be set more by the soundness of enterprise data architecture and governance, and less by the evolution of the models. To scale the technology, businesses need to adopt a modern data infrastructure that delivers context along with the data.

More business context, not necessarily more data

Traditional views often conflate structured data with high value, and unstructured data with less value. However, AI complicates that distinction. High-value data for agents is defined less by format and more by business context. Data for critical business functions — such as supply-chain operations and financial planning — is context dependent. Fine-grained, high-volume data, such as IoT, logs, and telemetry, can also yield value, but only when delivered with business context.
For that reason, the real risk for agentic AI is not lack of data, but lack of grounding, says Khan. “Anything that is business contextual will, by definition, give you greater value and greater levels of reliability of the business outcome,” he says. “It’s not as simple as saying high-value data is structured data and low-value data is where you have lots of repetition — both can have huge value in the right hands, and that’s what’s different about AI.” Context can be derived through integration with software, through on-site analysis and enrichment, or through the governance pipeline. Data lacking those qualities will likely go untrusted — one reason why two-thirds of business leaders do not fully trust their data, according to the Institute for Data and Enterprise AI (IDEA). The resulting “trust debt” has held back businesses in their quest for AI readiness. Overcoming that lack of trust requires shared definitions, semantic consistency, and reliable operational context to align data with business meaning.

Data sprawl demands a semantic, business-aware layer

Over the past decade, the most important shift in enterprise data architecture has been the separation of compute and storage, which brought cloud-scale flexibility, says Khan. Yet that separation and move to the cloud also created sprawl, with data housed in multiple clouds, data lakes, warehouses, and a multitude of SaaS applications. As companies move to AI, that sprawl does not go away. In fact, the problem is growing: more than two-thirds of companies cite data silos as a top challenge in adopting AI, and more than half of enterprises struggle with 1,000 or more data sources. While the last era was about laying the foundation on which to build software-as-a-service — separating compute and storage and building lakes — the next era is about delivering the right data to autonomous AI agents tasked with various business functions.
“Probably the biggest innovation that occurred in data management was the separation of compute and store,” Khan says. “But what’s really making a distinction now is the way that we harmonize the data and harvest the value of the data across multiple sources of content.” Doing that requires a semantic or knowledge layer that supports multiple platforms, encodes business rules and relationships, provides a business-contextual and governed view of data, and allows humans and agents to access the data in the appropriate ways. But legacy data architectures cannot power the autonomous AI systems of the future, consultancy Deloitte stated in its State of AI in the Enterprise report. Only four in 10 companies believe their data management process is ready for AI — down from 43% the previous year, suggesting that as companies explore AI deployment, they are realizing their infrastructure’s shortcomings.

Agentic AI does not replace SaaS

Some investors and technologists speculate that AI agents will make SaaS applications obsolete. Khan strongly disagrees. Over the past 15 years, value has steadily moved up the stack, from on-premises infrastructure to infrastructure as a service (IaaS) to platform as a service (PaaS) to SaaS. Agentic AI is simply the next layer, with its own means to access the data and interact with the business logic. The value rises up the stack, but nothing below disappears, he says.

“SaaS doesn’t go away,” he says. “It just means SaaS and these agents will cooperate with one another. Companies are not going to throw away their entire general ledger and replace it with an agent. What’s the agent going to do? It doesn’t know anything without business context and business processing.” In this emerging model, the software stack is being reshaped so that applications and data provide governed context within which AI can act effectively. SaaS applications remain the systems of record, while the semantic layer becomes the business-context source of truth. AI agents become a new engagement layer, orchestrating across systems, and both humans and agents become “first-class citizens” in how they access business logic, he says. Critically, agents cannot directly connect to every operational system. “If we’re saying agents are going to take over the world … you can’t have an agent talking to every operational backend system,” Khan warns. “It just doesn’t work that way.” This further elevates the importance of a semantic or business-fabric layer.

Where to start

Most enterprises need to begin where their data already lives — in platforms like Snowflake, Databricks, Google BigQuery, or an existing SAP environment. Khan says that’s normal, but warns against rebuilding old patterns of vendor lock-in. He suggests that companies prioritize the data that matters most by focusing on preserving and providing business context to operational and application data. Companies should also invest early in governance and semantics by defining shared policies, access rules, and semantic models before scaling pilots. Finally, businesses should prioritize openness and fabric-style interoperability rather than forcing all data into one stack. Khan cautions against aiming for full automation too early. 
“There is a new brave opportunity to really engage in the agentic and AI world,” Khan says. “Fully automating [critical business processes] is maybe a stretch, because there’s going to be a lot of extra oversight necessary.” Early wins will likely come from less-critical processes and from agents that work off fresh, stateful data rather than stale dashboards, he adds. As AI begins to deliver value and adoption increases, leaders must decide how to reinvest those gains to drive top-line efficiency or enter new markets. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
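The semantic-layer pattern Khan describes — business terms mapped to governed data sources, with humans and agents going through the layer rather than hitting backends directly — can be sketched in a few lines. All table names, fields, and roles below are hypothetical illustrations, not SAP APIs.

```python
# Minimal sketch of a semantic layer: business terms resolve to governed
# queries, and access rules are enforced for both humans and agents.
# Every name here is hypothetical.

SEMANTIC_MODEL = {
    "open_purchase_orders": {
        "source": "erp.purchase_orders",      # physical table
        "filter": "status = 'OPEN'",          # encoded business rule
        "allowed_roles": {"buyer", "agent"},  # governance metadata
    },
    "quarterly_revenue": {
        "source": "warehouse.finance_facts",
        "filter": "metric = 'revenue' AND grain = 'quarter'",
        "allowed_roles": {"analyst"},
    },
}

def resolve(term: str, role: str) -> str:
    """Translate a business term into a governed query, enforcing access rules."""
    entry = SEMANTIC_MODEL.get(term)
    if entry is None:
        raise KeyError(f"unknown business term: {term}")
    if role not in entry["allowed_roles"]:
        raise PermissionError(f"role '{role}' may not access '{term}'")
    return f"SELECT * FROM {entry['source']} WHERE {entry['filter']}"

print(resolve("open_purchase_orders", "agent"))
```

The point of the sketch is the indirection: an agent never sees `erp.purchase_orders` directly; it asks for a business concept and receives a governed, context-carrying answer.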

Read More »

Pragmatic by design: Engineering AI for the real world

In partnership with L&T Technology Services The impact of artificial intelligence extends far beyond the digital world and into our everyday lives, across the cars we drive, the appliances in our homes, and medical devices that keep people alive. More and more, product engineers are turning to AI to enhance, validate, and streamline the design of the items that furnish our worlds. The use of AI in product engineering follows a disciplined and pragmatic trajectory. A significant majority of engineering organizations are increasing their AI investment, according to our survey, but they are doing so in a measured way. This approach reflects the priorities typical of product engineers. Errors have concrete consequences beyond abstract fears, ranging from structural failures to safety recalls and even potentially putting lives at risk. The central challenge is realizing AI’s value without compromising product integrity. Drawing on data from a survey of 300 respondents and in-depth interviews with senior technology executives and other experts, this report examines how product engineering teams are scaling AI, what is limiting broader adoption, and which specific capabilities are shaping adoption today and in the future, along with their actual or potential measurable outcomes. Key findings from the research include:
Verification, governance, and explicit human accountability are mandatory in an environment where the outputs are physical—and the risk high. Where product engineers are using AI to directly inform physical designs, embedded systems, and manufacturing decisions that are fixed at release, product failures can lead to real-world risks that cannot be rolled back. Product engineers are therefore adopting layered AI systems with distinct trust thresholds instead of general-purpose deployments. Predictive analytics and AI-powered simulation and validation are the top near-term investment priorities for product engineering leaders. These capabilities—selected by a majority of survey respondents—offer clear feedback loops, allowing companies to audit performance, attain regulatory approval, and prove return on investment (ROI). Building gradual trust in AI tools is imperative.
Nine in ten product engineering leaders plan to increase investment in AI in the next one to two years, but the growth is modest. The highest proportion of respondents (45%) plan to increase investment by up to 25%, while nearly a third favor a 26% to 50% boost. And just 15% plan a bigger step change—between 51% and 100%. The focus for product engineers is on optimization over innovation, with scalable proof points and near-term ROI the dominant approach to AI adoption, as opposed to multi-year transformation. Sustainability and product quality are top measurable outcomes for AI in product engineering. These outcomes, visible to customers, regulators, and investors, are prioritized over competitive metrics like time-to-market and innovation—rated of medium importance—and internal operational gains like cost reduction and workforce satisfaction, at the bottom. What matters most are real-world signals like defect rates and emissions profiles rather than internal engineering dashboards. Download the report. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

Securing digital assets against future threats

In partnership with Ledger

Read More »

The Gigawatt Bottleneck: Power Constraints Define AI Data Center Growth

Power is rapidly becoming the defining constraint on the next phase of data center growth. Across the industry, developers and hyperscalers are discovering that the biggest obstacle to deploying AI infrastructure is no longer capital, land, or connectivity. It’s electricity. In major markets from Northern Virginia to Texas, grid interconnection timelines are stretching out for years as utilities struggle to keep pace with a surge in large-load requests from AI-driven infrastructure. A new industry analysis from Bloom Energy reinforces that emerging reality. The company’s 2026 Data Center Power Report finds that electricity availability has moved from a planning consideration to a defining boundary on data center expansion, transforming site selection, power strategies, and the design of next-generation AI campuses. Based on surveys of hyperscalers, colocation providers, utilities, and equipment suppliers conducted through 2025, the report concludes that the determinants of data center growth are changing in the AI era. Across the industry, the result is a structural shift in how data centers are planned, financed, and powered. Industry executives interviewed for the report say the shift is already visible in real-world development decisions. “We’re seeing a geographic shift as certain regions become more power-friendly and therefore more attractive for data center construction,” said a hyperscaler energy executive quoted in the report, noting that developers are increasingly prioritizing markets where large blocks of electricity can be secured quickly and predictably.

AI Load Is Accelerating Faster Than the Grid

Bloom’s analysis suggests that U.S. data center IT load could grow from roughly 80 gigawatts in 2025 to about 150 gigawatts by 2028, effectively doubling within three years as AI training clusters and inference infrastructure expand. That surge is already showing up in grid planning models. 
The Electric Reliability Council of Texas (ERCOT), which oversees the Texas power market, now forecasts that statewide
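The projection above — roughly 80 GW in 2025 to about 150 GW by 2028 — implies a steep compound annual growth rate, which a one-line calculation makes explicit:

```python
# Implied compound annual growth rate (CAGR) of US data center IT load,
# using the report's figures: ~80 GW in 2025 to ~150 GW by 2028.

start_gw, end_gw, years = 80, 150, 3
cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 23% per year
```

Growth above 20% per year, sustained for three years, is exactly the kind of load curve that outruns utility interconnection queues measured in years.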

Read More »

PJM Moves to Redefine Behind-the-Meter Power for AI Data Centers

PJM Interconnection is moving to rewrite how behind-the-meter power is treated across its grid, signaling a major shift as AI-scale data centers push electricity demand into territory the current regulatory framework was never designed to handle. For years, PJM’s retail behind-the-meter generation rules allowed customers with onsite generation to “net” their load, reducing the amount of demand counted for transmission and other grid-related charges. The framework dates back to 2004, when behind-the-meter generation was typically associated with smaller industrial facilities or campus-style energy systems. PJM now argues that those assumptions no longer hold. The arrival of very large co-located loads, particularly hyperscale and AI data centers seeking hundreds of megawatts of power on accelerated timelines, has exposed gaps in how the system accounts for and plans around those facilities. In February 2026, PJM asked the Federal Energy Regulatory Commission to approve a tariff rewrite that would sharply limit how new large loads can rely on legacy netting rules. The move reflects a broader challenge facing grid operators as the rapid expansion of AI infrastructure begins to collide with planning frameworks built for a far slower era of demand growth. The proposal follows directly from a December 18, 2025 order from FERC finding that PJM’s existing tariff was “unjust and unreasonable” because it lacked clear rates, terms, and conditions governing co-location arrangements between large loads and generating facilities. Rather than prohibiting co-location, the commission directed PJM to create transparent rules allowing data centers and other large consumers to pair with generation while still protecting system reliability and other ratepayers. In essence, FERC told PJM not to shut the door on these arrangements, but to stop improvising and build a formal framework capable of supporting them. 
Why Behind-the-Meter Power Matters

Behind-the-meter arrangements have become one of the most attractive strategies for hyperscale
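The stakes of PJM's netting rules can be illustrated with a simple calculation: under legacy netting, a co-located facility pays transmission-related charges only on its netted demand. All megawatt figures and the charge rate below are hypothetical illustrations, not values from PJM's tariff.

```python
# Illustrative only: why netting matters at AI-data-center scale.
# A facility with large onsite generation nets its load down before
# grid-related charges are assessed. Figures are hypothetical.

gross_load_mw = 500            # total facility demand
onsite_generation_mw = 450     # behind-the-meter generation
charge_per_mw = 100_000        # assumed annual grid charge ($/MW-year)

netted_demand_mw = max(gross_load_mw - onsite_generation_mw, 0)
with_netting = netted_demand_mw * charge_per_mw
without_netting = gross_load_mw * charge_per_mw
print(f"Charges with netting:    ${with_netting:,}")
print(f"Charges without netting: ${without_netting:,}")
```

In this hypothetical, netting reduces the facility's charge basis tenfold — which is precisely why PJM argues that rules written for small campus generators break down when applied to loads of hundreds of megawatts.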

Read More »

Meta’s Expanded MTIA Roadmap Signals a New Phase in AI Data Center Architecture

Silicon as a Data Center Design Tool

Custom silicon also allows hyperscale operators to shape the physical characteristics of the infrastructure around it. Traditional GPU platforms often arrive with fixed power envelopes and thermal constraints. But internally designed accelerators allow companies like Meta to tailor chips to the rack-level power and cooling budgets of their own data center architecture. That flexibility becomes increasingly important as AI infrastructure pushes power densities far beyond traditional enterprise deployments. Custom accelerators like MTIA can be engineered to fit within the liquid-to-chip cooling frameworks now emerging in hyperscale AI racks. These systems circulate coolant directly across cold plates attached to processors, removing heat far more efficiently than air cooling and enabling higher compute densities. For operators running thousands of racks across multiple campuses, small improvements in performance-per-watt can translate into enormous reductions in total power demand.

Software-Defined Power

One of the subtler advantages of custom silicon lies in how it interacts with data center power systems. By controlling chip-level power management features such as power capping and workload throttling, operators can fine-tune how servers consume electricity inside each rack. This creates opportunities to safely run racks closer to their electrical limits without triggering breaker trips or thermal overloads. In practice, that means data center operators can extract more useful compute from the same electrical infrastructure. At hyperscale, where campuses may draw hundreds of megawatts, these efficiencies have a direct impact on capital planning and grid interconnection requirements.

The Interconnect Layer

AI accelerators do not operate in isolation. Their effectiveness depends heavily on how they connect to memory, storage, and other compute nodes across the cluster. 
Industry analysts expect next-generation inference platforms to rely increasingly on high-speed interconnect technologies such as CXL (Compute Express Link) and advanced networking fabrics to support disaggregated memory architectures and low-latency
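The power-capping idea described above — throttling servers so a rack can run close to its electrical limit without tripping a breaker — can be sketched as a small control function. The rack limit, safety margin, and server draws below are illustrative values, not Meta's actual parameters.

```python
# Simplified sketch of software-defined power capping: if the rack's
# total draw would exceed its budget (limit minus a safety margin),
# scale every server's draw down proportionally. Figures are illustrative.

RACK_LIMIT_KW = 100.0   # rack electrical limit
HEADROOM = 0.05         # keep a 5% safety margin below the limit

def apply_power_caps(server_draw_kw: list[float]) -> list[float]:
    """Proportionally throttle server power draws to fit the rack budget."""
    budget = RACK_LIMIT_KW * (1 - HEADROOM)
    total = sum(server_draw_kw)
    if total <= budget:
        return server_draw_kw        # under budget: no capping needed
    scale = budget / total           # proportional throttling factor
    return [draw * scale for draw in server_draw_kw]

capped = apply_power_caps([12.0, 11.5, 13.0, 12.5] * 2)  # 98 kW requested
print(f"Total after capping: {sum(capped):.1f} kW")
```

Real implementations are far more sophisticated (per-workload priorities, thermal feedback, hardware power limits), but the principle is the same: treat rack power as a budget that software enforces, so the electrical infrastructure can be sized closer to actual draw.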

Read More »

From Real Estate to AI Factories: 7×24 Exchange’s Michael Siteman on Power, Politics, and the New Logic of Data Center Development

The data center industry’s explosive growth in the AI era is transforming how projects are conceived, financed, and built. What was once a real estate-driven business has become something far more complex: an engineering and infrastructure challenge defined by power availability, network topology, and local politics. That was one of the key themes in this recent episode of the Data Center Frontier Show podcast, where Editor-in-Chief Matt Vincent spoke with Michael Siteman, President of Prodigious Proclivities and a longtime leader and board member within 7×24 Exchange International. Drawing on decades of experience spanning brokerage, development, connectivity strategy, and infrastructure advisory, Siteman offered a field-level view of how the industry is adapting to the demands of AI-driven infrastructure. “The business used to be a pure real estate play,” Siteman said. “Now it’s a systems engineering problem. It’s power, network topology, the real estate itself, and political risk—all of these factors that have to work together.”

Site Selection Becomes Systems Engineering

For much of the early data center era, location decisions revolved around traditional real estate considerations: available buildings, proximity to customers, and nearby fiber connectivity. That logic has fundamentally changed. “Years ago, the question was: Is there a building? Are there carriers nearby?” Siteman recalled. “Now it’s completely different. Power availability, network topology, community acceptance—these are the variables that define whether a site works.” Utilities themselves have become gatekeepers in the process. “You go to a utility and ask if there’s power,” he explained. “They might say, ‘We might have power, but you have to pay us to study whether we actually have power.’” In many regions experiencing rapid digital infrastructure expansion, the answer increasingly comes back the same: there simply isn’t enough grid capacity available. 
Power Becomes the Project

In the gigawatt-scale era of AI infrastructure, power strategy has moved

Read More »

Community Opposition Emerges as New Gatekeeper for AI Data Center Expansion

The rapid global buildout of AI infrastructure is colliding with a new constraint that hyperscalers cannot solve with capital or GPUs: local opposition. In the first months of 2026, community resistance has already begun reshaping the development pipeline. A February analysis by Sightline Climate estimates that 30–50 percent of the data center capacity expected to come online in 2026 may not be delivered on schedule, reflecting a growing set of constraints that now include power availability, permitting challenges, and increasingly organized local opposition. The financial stakes are already substantial. Recent reporting indicates that tens of billions of dollars in planned data center development have been delayed or halted amid community pushback, including an estimated $98 billion worth of projects delayed or blocked in a single quarter of 2025, according to research cited by Data Center Watch. What had been framed throughout 2024 and 2025 as an inevitable expansion of hyperscale campuses, gigawatt-scale power agreements, and AI “factory” clusters is now encountering a different kind of gatekeeper: the communities expected to host the infrastructure. The shift is already visible in project outcomes. Across the United States, multiple projects were canceled, blocked, or fundamentally reshaped in the opening months of 2026 due to organized local opposition. Reporting from The Guardian found that 26 data center projects were canceled in December and January, compared with just one cancellation in October, suggesting that community resistance campaigns are increasingly capable of stopping projects before construction begins. At the same time, local governments are responding to community pressure with moratoriums, zoning restrictions, and permitting delays that can stall projects long enough to jeopardize financing or push developers to seek more favorable jurisdictions. 
While opposition to data center development is not new, the scale, coordination, and success rate of these efforts suggest a structural shift in how

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE