Stay Ahead, Stay ONMINE

The Data Center Power Squeeze: Mapping the Real Limits of AI-Scale Growth

As we all know, the data center industry is at a crossroads. Artificial intelligence is reshaping an already insatiable digital landscape, and the demand for computing power is surging at a pace that outstrips the growth of the US electric grid. To serve as the engines of the AI economy, an estimated 1,000 new data centers [1] will be needed to process, store, and analyze the vast datasets that run everything from generative models to autonomous systems.

But this transformation comes with a steep price and a new defining criterion for real estate: power. Our appetite for electricity is now the single greatest constraint on our expansion, threatening to stall the very innovation we enable. In 2024, US data centers consumed roughly 4% of the nation’s total electricity, a figure projected to triple by 2030, reaching 12% or more [2]. For AI-driven hyperscale facilities, the numbers are even more staggering. With the largest planned data centers requiring gigawatts of power, enough to supply entire cities, the cumulative demand from all data centers is expected to reach 134 gigawatts by 2030, nearly three times the current load [3].
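As a rough sanity check on those projections, the arithmetic below derives the implied annual growth rate. It is a sketch, not a forecast: the ~45 GW 2024 baseline is simply inferred from the article’s “nearly three times the current load” framing against the 134 GW figure.

```python
# Back-of-envelope check on the projected load growth. The ~45 GW baseline
# is an assumption inferred from "134 GW by 2030, nearly three times the
# current load" [3]; it is not an independently sourced figure.
current_gw = 134 / 3          # implied 2024 load, ~44.7 GW
target_gw = 134.0             # projected 2030 load
years = 6                     # 2024 -> 2030

cagr = (target_gw / current_gw) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # ~20% per year
```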

This presents a systemic challenge. The U.S. power grid, built for a different era, is struggling to keep pace.

Utilities are reporting record interconnection requests, with some regions seeing demand projections that exceed their total system capacity fivefold [4]. In Virginia and Texas, the epicenters of data center expansion, grid operators are warning of tight supply-demand balances and the risk of blackouts during peak periods [5]. The problem is not just the sheer volume of power needed, but the speed at which it must be delivered. Data center operators are racing to secure power for projects that could be online in as little as 18 months, but grid upgrades and new generation can take years, if not decades. The result is a bottleneck that is forcing our industry to rethink its approach to energy sourcing, grid integration, and infrastructure planning.

The stakes could not be higher. If the power constraint is not resolved, the AI revolution could stall, with ripple effects across the economy. Companies may be forced to delay or scale back projects, and regions that fail to attract data centers could fall behind in the race for digital leadership. The solution will require a combination of policy innovation, technological advancement, and collaboration between the public and private sectors. Our industry is at a turning point, where the question is not just how much power is needed, but where it will come from.

The Scale of Demand: AI’s Insatiable Appetite

There is no denying it: the rise of artificial intelligence has fundamentally altered the energy landscape for data centers. In the past, data centers were primarily tasked with storing and serving digital content; this role required significant, but manageable, amounts of electricity. The advent of AI has changed that equation. AI workloads, particularly those involving large language models and deep learning, are orders of magnitude more energy-intensive than traditional computing tasks. We hear it; we know it.

Recent studies estimate that AI-specific servers in US data centers consumed 53 terawatt-hours of electricity in 2024, enough to power over 7 million homes for a year. Recall the projections above: that figure could triple by 2030. Our facilities rank among the largest power consumers in the world.

The impact of this demand is already being felt. In Virginia, the nation’s data center capital, utility power demand from data centers is expected to reach 12.1 gigawatts in 2025, up from 9.3 gigawatts in 2024 [5]. In Texas, the figure is projected to hit 9.7 gigawatts, driven by both hyperscale and crypto-mining projects [4]. These statistics reveal a fundamental shift in how electricity is consumed. With data centers competing against residential and industrial users for limited grid capacity, prices are rising and infrastructure is straining. In a market where power availability is the critical gatekeeper of growth, many operators are willing to pay a premium for access to reliable, scalable energy.

Because all operators face similar challenges, a growing set of comprehensive guides, maps, and strategies is available. With these tools, we can conduct our own advance power study and, ideally, shorten the due diligence timeline.

National and Regional Maps of Power Capacity

Mapping the US power infrastructure has become fundamental to site selection, risk assessment, and portfolio planning. This enables our industry to visualize broad market capacity for new load, sketching supply pockets at the regional and state level. Key siting decisions often evaluate the proximity of new projects to high-capacity, reliable generation and substations, or, where redundancy is paramount, to a mix of sources (nuclear, hydro, renewables, gas). Taken together, this also allows early teams to scan for legacy sites (retired or retiring plants) that may have available transmission interconnects and cooling resources, creating brownfield opportunities to accelerate deployment timelines.

For a macro view, EIA’s U.S. Power Plant Map provides a searchable inventory of thousands of power plants nationwide, including each facility’s nameplate capacity, fuel mix, status (operational, retired, planned), and ownership overlays. The Synapse Energy Interactive Map further supplements these records with owner and emissions data, drawing from EPA datasets to create a lens into the carbon profile of every major generator from 2018 to 2023.
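As an illustration of how such an inventory might be screened programmatically, here is a minimal sketch in Python, assuming the EIA data has been exported to a CSV. The file name and column names (lat, lon, capacity_mw, status, fuel) are hypothetical placeholders, not the EIA schema.

```python
import math
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical export of the EIA plant inventory; column names are
# illustrative, not the actual EIA field names.
plants = pd.read_csv("eia_power_plants.csv")  # lat, lon, capacity_mw, status, fuel

site_lat, site_lon = 40.0, -79.9  # candidate site (example coordinates)

plants["dist_km"] = plants.apply(
    lambda p: haversine_km(site_lat, site_lon, p["lat"], p["lon"]), axis=1
)

# Operational plants of meaningful scale within ~80 km of the site.
nearby = plants[
    (plants["status"] == "operational")
    & (plants["capacity_mw"] >= 100)
    & (plants["dist_km"] <= 80)
].sort_values("dist_km")

print(nearby[["fuel", "capacity_mw", "dist_km"]].head(10))
```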

Beyond their direct use in site selection, these maps are essential for long-term risk management. The trending overlays for “planned” and “standby” status provide a leading indicator for regions where capacity could swing abruptly as plants retire or convert, or where market signals are prompting investment in new generation. Emission overlays and ownership data also help anticipate political and community acceptance for new large loads—critical as our industry becomes more visible to local stakeholders.

Yet these resources have clear limits: they do not show live system operating constraints, price volatility, or grid flexibility under real-world conditions. As more data centers migrate to AI and HPC deployments, national generation maps serve as a starting point, but not the finish line, for finding available power.

Distribution-Level Hosting Capacity Maps

A decade ago, most projects could assume easy grid access at the distribution level. Today, as hyperscale nodes seek loads orders of magnitude above historic norms, local bottlenecks often dictate project feasibility more than market fundamentals. Hosting capacity maps are also increasingly being integrated into RFP processes, ensuring that new sites are competitively vetted on grid readiness, not just cost or fiber.

For granular siting, the most powerful tool now available is the distribution-level hosting capacity map. Utilities increasingly publish these as GIS portals, color-coding feeders and substations by how much new load they can connect without requiring major upgrades. Green typically means available headroom; red means grid constraints, costly reinforcements, or multi-year approvals.

The DOE U.S. Atlas of Hosting Capacity Map offers an aggregated index of these tools, linking to hundreds of live utility maps across the country, updated as of July 2025. These resources are designed for use by developers, municipalities, and site selectors, rapidly surfacing neighborhoods, substations, or service territories where distribution circuits are either open for business or already tapped out.
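A first-pass screen against such an export might look like the sketch below. The file and field names are hypothetical, since every utility publishes hosting-capacity data in its own format, and the green/yellow/red thresholds are illustrative.

```python
import pandas as pd

# Hypothetical export from a utility hosting-capacity GIS portal;
# field names vary by utility, so treat these as placeholders.
feeders = pd.read_csv("hosting_capacity_export.csv")  # feeder_id, substation, headroom_mw

requested_mw = 60.0  # new load under study

def classify(headroom_mw: float) -> str:
    """Mimic the green/yellow/red convention used on utility maps."""
    if headroom_mw >= requested_mw:
        return "green"   # likely connectable without major upgrades
    if headroom_mw >= 0.5 * requested_mw:
        return "yellow"  # partial headroom; expect studies or reinforcement
    return "red"         # constrained; multi-year upgrades likely

feeders["screen"] = feeders["headroom_mw"].apply(classify)
print(feeders.groupby("screen")["feeder_id"].count())
print(
    feeders[feeders["screen"] == "green"]
    .sort_values("headroom_mw", ascending=False)
    .head()
)
```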

However, caveats remain. These maps offer a “first-pass” filter, not a guarantee of approval. Local markets cannot always anticipate and accommodate the collective impact when multiple projects land in the same geography, as we saw when Atlanta became a sudden hotbed of data center activity. The underlying reality of upstream transmission congestion and interconnection queue backlogs is also much harder, and slower, to solve. Despite these limitations, hosting capacity maps are fast becoming table stakes for early site planning and for engagement with utilities on real-time system flexibility.

Real-Time and Congestion Insights

Granular, real-time awareness of grid conditions is no longer optional for our industry’s planners, especially when financial commitments run to six- and seven-figure monthly energy bills. Traditional models relied on historical curtailments, seasonal forecasts, or average locational marginal prices (LMPs); today, actionable intelligence depends on live dashboards and congestion sensors.
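To see why those bills reach six and seven figures, a back-of-envelope sketch, with all inputs purely illustrative:

```python
# Rough monthly energy bill for a large campus (all inputs illustrative).
capacity_mw = 100        # contracted IT + cooling load
load_factor = 0.80       # fraction of capacity drawn on average
price_per_mwh = 60.0     # blended all-in energy price, $/MWh
hours_per_month = 730

monthly_mwh = capacity_mw * load_factor * hours_per_month
monthly_bill = monthly_mwh * price_per_mwh
print(f"{monthly_mwh:,.0f} MWh -> ${monthly_bill:,.0f} per month")  # ~$3.5M
```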

The EIA Real-Time Electricity Dashboard provides public, near-real-time data on system frequency, demand curves, regional interchange, and market pricing. This tool, combined with richer commercial and ISO dashboards, enables teams to track supply-demand imbalances, outage risks, and peak load events as they happen. The Ember US Electricity Data Explorer further breaks down generation, fuel mix, and emissions by state and ISO, with monthly, albeit lagged, detail to monitor market shifts and decarbonization trends.
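For teams that want the underlying numbers rather than the dashboard, EIA also exposes this data through its Open Data API. The sketch below pulls recent hourly demand for one balancing authority; the route and parameter names follow EIA’s published v2 documentation at the time of writing, but verify them against the current docs before relying on this.

```python
import requests

# Pull recent hourly demand from EIA's Open Data API (v2). Route and
# parameters follow EIA's docs as of this writing; confirm at
# https://www.eia.gov/opendata/ before building on them.
API_KEY = "YOUR_EIA_API_KEY"  # free registration at eia.gov/opendata

resp = requests.get(
    "https://api.eia.gov/v2/electricity/rto/region-data/data/",
    params={
        "api_key": API_KEY,
        "frequency": "hourly",
        "data[0]": "value",
        "facets[respondent][]": "PJM",  # balancing authority of interest
        "facets[type][]": "D",          # D = demand
        "sort[0][column]": "period",
        "sort[0][direction]": "desc",
        "length": 24,                   # last 24 hourly observations
    },
    timeout=30,
)
resp.raise_for_status()

for row in resp.json()["response"]["data"]:
    print(row["period"], row["value"], "MWh")
```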

These resources are not just for energy procurement teams: siting professionals, risk officers, and even marketing teams monitor real-time congestion to anticipate permitting narratives and local political risk. During acute grid scarcity or after major transmission outages, demand spikes and curtailments can rapidly upend long-term cost models. Many operators have begun overlaying congestion maps from regional ISOs (such as ISO-NE’s system maps) to triangulate lowest-risk interconnection points.
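A simplified version of that triangulation might rank candidate interconnection points by the congestion component of their historical LMPs, as in the sketch below. The input file and column names are hypothetical, and the risk weighting is illustrative; most ISOs do publish LMPs decomposed into energy, congestion, and loss components.

```python
import pandas as pd

# Hypothetical nodal price history exported from an ISO data portal.
lmps = pd.read_csv("nodal_lmp_history.csv")  # node, timestamp, congestion_usd_mwh

stats = (
    lmps.groupby("node")["congestion_usd_mwh"]
    .agg(mean_congestion="mean", volatility="std")
    .reset_index()
)

# Favor nodes where congestion is both low on average and stable over time;
# the 2x weight on volatility is an arbitrary illustrative choice.
stats["risk_score"] = stats["mean_congestion"].abs() + 2 * stats["volatility"]
print(stats.sort_values("risk_score").head(10))
```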

It should be noted that these dashboards do not capture “latent” or untapped potential at the substation or feeder level; their principal value is as warning systems for stress, not as market growth blueprints. Still, as grid risks become more dynamic and are often weather-driven, near-real-time mapping is now integral to keeping projects both bankable and reliable across increasingly volatile markets.

Untapped Generation Potential

Alongside known sources, the US grid harbors substantial underutilized power that could be unlocked for new demand, including non-powered dams, retired or retiring industrial infrastructure, and grid corridors with falling load. For example, only 3% of the nation’s 80,000 dams generate electricity; the National Hydropower Association and DOE estimate that retrofitting non-powered sites could add 10–12 GW of low-carbon capacity to the national grid [6]. Projects like the Red Rock Hydroelectric Project and the Ohio River dam retrofits serve as prime case studies in tapping this overlooked resource, with thousands of megawatts of capacity potentially available using existing water infrastructure.
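A first-pass screen of such sites can lean on the standard hydropower relation P = η·ρ·g·Q·H, where η is overall efficiency, ρ is water density, g is gravity, Q is flow, and H is head. The sketch below applies it with an assumed efficiency; the example dam figures are illustrative, not drawn from any specific inventory.

```python
# First-pass screen of non-powered dams using P = eta * rho * g * Q * H.
# The efficiency and example figures are assumptions for illustration.
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2
ETA = 0.85    # assumed combined turbine/generator efficiency

def retrofit_mw(flow_m3s: float, head_m: float) -> float:
    """Potential capacity in MW for a given flow (m^3/s) and head (m)."""
    return ETA * RHO * G * flow_m3s * head_m / 1e6

# Example: a dam passing 250 m^3/s with 12 m of head.
print(f"{retrofit_mw(250, 12):.1f} MW")  # ~25 MW
```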

Hydropower presents one potential generation source, but we can easily think beyond it. Regions with legacy industrial assets, from decommissioned steel mills to former chemical facilities, sometimes possess transmission and land suitable for advanced data center development, as shown in brownfield overlays provided by DOE’s Clean Energy Resources toolkit. Many of these locations enjoy proximity to grid backbones and are already zoned for heavy electricity use, though timelines for securing new generation or upgrading connections remain a challenge.

For forward-thinking operators, untapped grid assets offer a hedge against oversubscribed regions, aligning economic growth with sustainability goals while diversifying risk across regions and fuel types.

Where Will the Power Come From?

No single tool or map can answer, “Where will the power come from?” at the scale and granularity our industry now requires. But by integrating national capacity maps, distribution-level hosting data, real-time grid congestion dashboards, and overlays of untapped generation, industry leaders can build a multi-layered picture of risk and opportunity. Closing the power gap for digital infrastructure will demand complex insights and tools to map not only power potential, but local acceptance as we grow into new markets.
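One way to make that multi-layered picture concrete is a simple composite score across the four layers discussed above. The sketch below is a toy model: the markets, inputs, and weights are all illustrative, not a calibrated methodology.

```python
import pandas as pd

# Toy composite scoring across the four map layers; every number here is
# illustrative placeholder data, not sourced market figures.
markets = pd.DataFrame({
    "market": ["PA interior", "DFW", "Atlanta", "Columbus", "Phoenix"],
    "generation_gw": [18, 35, 22, 14, 16],          # nearby nameplate capacity
    "hosting_headroom_mw": [400, 900, 600, 700, 800],
    "congestion_risk": [0.4, 0.6, 0.3, 0.35, 0.5],  # 0 = low, 1 = high
    "untapped_mw": [300, 200, 250, 500, 350],       # brownfield/retrofit potential
})

def normalize(col: pd.Series) -> pd.Series:
    """Scale a column to the 0-1 range."""
    return (col - col.min()) / (col.max() - col.min())

markets["score"] = (
    0.30 * normalize(markets["generation_gw"])
    + 0.30 * normalize(markets["hosting_headroom_mw"])
    + 0.20 * (1 - normalize(markets["congestion_risk"]))  # lower risk is better
    + 0.20 * normalize(markets["untapped_mw"])
)
print(markets.sort_values("score", ascending=False)[["market", "score"]])
```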

When you layer the national generation maps, utility hosting‑capacity tools, DOE clean‑energy siting work, and current development pipelines, a consistent picture emerges: a handful of markets and corridors look structurally advantaged for power‑hungry AI and HPC builds. Overlaying JLL’s latest data center market report reinforces the signal, with the same power‑advantaged regions resurfacing as the most likely hosts of the next wave of AI and HPC growth because they combine power potential, industrial land, and infrastructure readiness. Five markets in particular show up repeatedly as “next‑wave” destinations for capacity, backup, and infrastructure readiness, even if each comes with its own strengths and caveats:

1. Pennsylvania / Mid‑Atlantic interior

Pennsylvania increasingly shows up as the pressure valve between Northern Virginia and the Midwest, with both power and industrial land positioning it as a natural corridor market. JLL’s U.S. Industrial Market Dynamics, Q3 2025 points to an active national pipeline and a long list of Pennsylvania and Mid‑Atlantic industrial markets—Eastern and Central Pennsylvania, Pittsburgh, and Richmond among them—where large‑scale sites and logistics infrastructure remain available, even as vacancy stabilizes. Noteworthy highlights include the state’s ample power, strong transmission position between NY/NJ and NOVA, and growing interest from both owner‑users and developers seeking 200 MW‑plus sites with 18–36 month power timelines. Regional maps of PJM’s grid show robust backbone transmission and legacy industrial corridors that can be repurposed, while state‑level land and power costs still undercut coastal metros. From our industry’s perspective, Pennsylvania scores well across all four map types: solid underlying generation, promising hosting and brownfield potential, and strategic proximity to the country’s largest existing hub.

In parallel, JLL’s mid‑year North America data center reporting highlights the continued dominance of Northern Virginia and the emergence of Pennsylvania‑adjacent sites marketed specifically for AI and hyperscale growth. When you overlay these signals with grid and clean‑energy maps that show strong transmission backbones and legacy industrial corridors, Pennsylvania and the broader interior Mid‑Atlantic begin to look like a logical next‑wave power corridor—close to existing demand, but not yet fully saturated.

2. Dallas–Fort Worth and the Texas triangle

Texas already ranks as one of the two largest state‑level demand centers, with utility power to data centers projected at roughly 9.7 GW in 2025, up from under 8 GW a year earlier. Growth projections for Dallas–Fort Worth alone call for more than 4,300 MW of future data center power needs, making it one of the fastest‑expanding hubs in the country. What the maps show is a state with abundant generation (including gas, wind, and solar), extensive transmission corridors, and multiple utilities experimenting with tariffs and on‑site “bridge” solutions such as fuel cells to cover near‑term gaps. Hosting‑capacity style intelligence is not as consistently public as in some coastal states, and ERCOT volatility is a real risk, but from a pure power‑access and scale perspective, the Texas triangle remains high on every shortlist.

JLL’s North America Data Center Report identifies Dallas as one of the two dominant absorption engines on the continent, recording 575 MW of demand in the first half of 2025 and more than 1,000 MW of cumulative capacity growth in recent years. The same analysis notes a substantial pipeline under construction and planned, with most of it already pre‑leased—clear evidence that power‑served land in the Dallas–Fort Worth region is being locked up for AI and hyperscale users. JLL’s broader industrial work confirms that the Texas triangle (Dallas–Fort Worth, Houston, Austin/San Antonio) has an expansive industrial inventory and an active development pipeline, supporting power‑intensive uses that can plug into existing logistics, workforce, and grid infrastructure. Against the backdrop of state‑level maps showing abundant generation and ongoing grid initiatives to accelerate large‑load interconnections, Dallas and its neighboring metros stand out as a core “power‑first” growth cluster, even as utilities flag that timelines for new capacity are tightening.

3. Atlanta and the broader Southeast

Georgia shows up repeatedly in both load‑growth forecasts and development trackers, with analysts pointing to “ample land, reasonable power costs, dense fiber, and demand from hyperscalers” as the mix driving rapid expansion around Atlanta. Regional utility maps and DOE clean‑energy siting work highlight strong transmission corridors, growing solar capacity, and a regulatory environment that has been relatively receptive to large loads. Neighboring Carolinas and Tennessee Valley territories add nuclear‑heavy baseload, which many in our industry view as attractive for AI‑class uptime and carbon narratives. While formal hosting‑capacity maps are patchier than on the West Coast or in the Northeast, the directional signal across sources is clear: the Southeast is consolidating its role as a power‑ready growth belt.

Atlanta’s data center footprint has grown from a secondary hub to a genuine powerhouse, and JLL’s mid‑year report frames it as one of the top five markets for absorption, with the market size having doubled since 2023 and on pace to double again by 2026. Fundamentals in JLL’s snapshot—low vacancy, hundreds of megawatts under construction, and over 200 MW planned—underscore how much of the city’s future grid headroom is being dedicated to our industry. At the same time, JLL’s industrial reports highlight Atlanta and a series of Southeastern markets (Charlotte, Nashville, Savannah, Jacksonville, and others) as active logistics and industrial corridors, where large tracts of industrial‑zoned land, transportation nodes, and utility‑served sites are already in play. Layer this on top of DOE clean‑energy siting work that points to growing solar, nuclear baseload nearby, and strengthening transmission in the region, and the picture that emerges is a Southeast arc anchored by Atlanta: a belt where both the grid and industrial real estate ecosystems are being tuned for very large, very power‑dense deployments.

4. Ohio / Midwest data center corridor

Columbus and the surrounding Ohio corridor are now firmly on the industry’s radar as an emerging “power plus land” play. The market around Columbus in particular has quietly become one of the most interesting on the map, with American Electric Power reporting interconnection requests for 36 sites totaling 13 GW of load in its Ohio service territory alone, down from over 30 GW after queue pruning, and roughly 18 GW of new demand from data centers across its multi‑state footprint. Analysts now flag “significant growth in Ohio” as operators cluster near existing fiber, interstate transmission, and legacy industrial infrastructure. From a mapping perspective, DOE’s clean‑energy resources work and Midwestern advocacy groups point to competitively priced power, access to renewables and storage, and brownfield opportunities at former heavy‑industry sites such as steel, auto, and chemical plants. This combination of grid backbone, stranded or underused capacity, and supportive state‑level engagement makes Ohio and adjacent Midwest states a recurring “up‑and‑to‑the‑right” region in long‑range plans.

Wisconsin is now joining that corridor in a visible way. Microsoft has announced multi‑billion‑dollar plans for large data center campuses in Mount Pleasant and other southeastern Wisconsin locations, leveraging the high‑capacity infrastructure originally developed for the Foxconn project, including substantial transmission build‑out and industrial‑zoned land near Lake Michigan. Regional grid maps and ISO data show that this corner of Wisconsin sits on strong high‑voltage corridors connecting into both Midcontinent Independent System Operator (MISO) and neighboring PJM interfaces, while state and local leaders are positioning these investments as anchors for broader clean‑energy and advanced‑manufacturing strategies. In practical terms, the same attributes that defined Ohio’s rise—available power, legacy industrial sites, and proximity to major load centers like Chicago—are now being replicated just across the state line, turning southeastern Wisconsin into a new node on the Midwest data‑center spine.

While JLL’s North America Data Center Report focuses on Chicago as the Midwest’s incumbent core, with substantial inventory and ongoing expansion, it also notes significant capacity growth and hyperscale interest in other Central U.S. markets tied into major transmission and fiber routes. JLL’s industrial local reports for Columbus, Cleveland, Milwaukee, and other Midwest metros show robust pipelines and healthy absorption, signaling that large, infrastructure‑ready parcels remain available even as demand from manufacturing, logistics, and data centers accelerates. When you combine that with grid analyses showing heavy transmission corridors, legacy industrial substations, and DOE‑identified clean‑energy and brownfield opportunities across the region, the Midwest looks less like a peripheral option and more like a central growth spine for AI‑class infrastructure over the next decade.

5. Emerging “stranded‑power” markets: Phoenix, Las Vegas/Reno, and the interior/landlocked West

Finally, several western metros and sub‑regions repeatedly appear in power and data center mapping as candidates for large‑scale AI growth: Phoenix, parts of Nevada like Las Vegas/Reno, and pockets in states like Idaho, Oklahoma, and Louisiana. Visualizations of future capacity requirements suggest that Phoenix could ultimately support over 5,000 MW of data center demand, with Las Vegas/Reno and other desert or interior hubs not far behind. The through‑line here is a search for “stranded” or under‑utilized power: regions with strong high‑voltage infrastructure, growing renewables, relatively low land costs, and, in some cases, nearby non‑powered dams or other upgradeable assets. Utilities and developers are testing more integrated models, such as combining new generation, storage, and large on‑site backup, to turn these maps from theoretical potential into power‑ready campuses.

JLL’s view of the “Landlocked” West points to a set of interior markets that pair strong high‑voltage networks and industrial land with rising data center interest. Phoenix, for example, is highlighted in JLL’s data center report with nearly 900 MW of inventory, low vacancy, more than 1.3 GW under construction, and over 4 GW planned, an extraordinary signal of how much power‑enabled capacity developers expect to bring online there in the next few years. JLL’s industrial market coverage for Phoenix, Las Vegas, Salt Lake City, and Denver adds another layer, showing active development and logistics ecosystems that make it easier to stand up and support very large campuses.

When these markets are cross‑referenced with DOE clean‑energy and resource maps, which highlight nearby renewables, storage projects, and in some cases non‑powered or underutilized infrastructure, they form a loose “stranded‑power” arc: places where our industry is betting that today’s relative headroom can be converted into tomorrow’s AI‑scale footprints before they, too, become crowded.

Industrial real estate and data center reporting validates what the power and hosting‑capacity maps already imply: our industry’s future is coalescing around a series of corridors—Pennsylvania and the Mid‑Atlantic interior, the Texas triangle centered on Dallas–Fort Worth, an Atlanta‑anchored Southeast, the Ohio/Midwest belt, and select interior‑West hubs—where power availability, industrial land, and infrastructure readiness are aligning fastest.

References:

1. https://itif.org/publications/2025/10/27/data-center-capacity-will-need-increase-130-percent-by-2030-meet-demand-for-ai/
2. https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/
3. https://www.spglobal.com/commodity-insights/en/news-research/latest-news/electric-power/101425-data-center-grid-power-demand-to-rise-22-in-2025-nearly-triple-by-2030-1068451
4. https://gridstrategiesllc.com/wp-content/uploads/National-Load-Growth-Report-2024.pdf
5. https://www.brownadvisory.com/intl/insights/data-center-balancing-act-powering-sustainable-ai-growth
6. https://www.hydro.org/waterpower/converting-non-powered-dams/
