
Stargate is OpenAI’s massive AI infrastructure initiative, developed as a joint venture with Oracle and SoftBank. Formally announced in January 2025, the program is accelerating rapidly with the disclosure of five new U.S. data center sites. These additions, along with the flagship development in Abilene, Texas, and other ongoing projects, bring Stargate’s total planned capacity to nearly 7 gigawatts (GW). The cumulative investment estimate has now topped $400 billion as the program heads toward its ultimate goal: a 10 GW, $500 billion buildout. While the initiative focuses on building capacity with non-Microsoft partners, Microsoft remains a key technology partner and OpenAI’s primary cloud provider via Azure.
Where Are the Five New Sites?
The next wave of Stargate capacity is landing in regions already familiar with large-scale data center development. Based on public reporting and company statements, the five identified sites are:
- Shackelford County, Texas (greater Abilene expansion): An extension of the area already hosting Vantage Data Centers’ Frontier project, a $25 billion development on 1,200 acres.
- Milam County, Texas (Central Texas growth corridor): Previously announced as the home of a SoftBank-led Stargate data center campus.
- Doña Ana County, New Mexico (Las Cruces area): Linked to Project Jupiter, a proposed $165 billion build spearheaded by BorderPlex Digital Assets, with Stack Infrastructure reported as a potential participant.
- Lordstown, Ohio (Eastern PJM/FirstEnergy territory): Redevelopment of a former GM/Foxconn complex, being repositioned as a large-scale AI campus through a collaboration between OpenAI, Oracle, and SoftBank.
- An additional Midwest site (TBD): Location yet to be disclosed.
These builds are being advanced under partnership models, with Oracle expected to lead three of the sites and SoftBank/SB Energy the other two. Together, they keep Stargate on track toward its 10 GW national roadmap.
Scale and Performance Goals
With the addition of the five new campuses, plus Abilene and other previously announced projects, Stargate now approaches 7 gigawatts of planned capacity, roughly two-thirds of the 10 GW scale outlined for the initiative.
Although detailed rack configurations have not been disclosed, industry observers expect the sites to be built for extreme AI density, in the range of 100–150 kW per rack. At that level, the facilities would host tens of thousands of racks dedicated to training and inference workloads.
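The implied rack counts follow from simple arithmetic. As a rough illustration (the capacity figure is from the article; the PUE and the split between facility power and IT load are assumptions, since no configurations have been disclosed):

```python
# Back-of-envelope rack count for a multi-gigawatt AI buildout.
# Assumed inputs (illustrative, not from any disclosure): facility
# capacity in GW, per-rack density in kW, and a PUE of ~1.2 to
# account for cooling and electrical overhead.

def estimated_racks(capacity_gw: float, rack_kw: float, pue: float = 1.2) -> int:
    """Racks supportable if capacity_gw of facility power serves racks
    drawing rack_kw each, after the overhead implied by the PUE."""
    it_load_kw = capacity_gw * 1e6 / pue  # GW -> kW of usable IT load
    return int(it_load_kw / rack_kw)

# At 100-150 kW/rack, ~7 GW of planned capacity implies roughly
# 39,000-58,000 racks across the portfolio.
for density_kw in (100, 150):
    print(density_kw, estimated_racks(7.0, density_kw))
```

Under these assumptions the result lands squarely in the "tens of thousands of racks" range the industry observers describe; a different PUE or density shifts the number but not the order of magnitude.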
NVIDIA is reported to be supplying up to $100 billion in AI accelerators across the program, with Oracle constructing multiple large-scale halls at the Abilene hub. Earlier briefings referenced deployments exceeding 400,000 GPUs at Abilene alone, though no specific GPU generation has been confirmed. Final hardware choices are expected to hinge on project timelines and NVIDIA’s next-generation Vera Rubin parts, as well as broader supply commitments.
The partners have also projected as many as 25,000 onsite jobs during the construction phase across the new campuses. The current expansion is being described as ahead of schedule toward an end-2025 funding and commitment milestone. However, individual site energization dates and module-by-module commissioning schedules have not yet been published.
How Are the Various Roles Broken Down?
The Stargate program is being advanced through a division of responsibilities among its principal partners:
- OpenAI: Oversees program design, capacity planning, and consumption of compute for frontier AI models. Reuters has reported that OpenAI may tap debt facilities to finance chip leases in parallel with capital spending on the physical data center infrastructure.
- Oracle: Acts as the primary cloud infrastructure partner and is leading three of the five new sites. Oracle is also executing a previously announced 4.5 GW Stargate build program in the U.S. (July 2025). Its existing development in Abilene provides the foundation for early Stargate capacity.
- SoftBank / SB Energy: Serves as an equity and energy development partner, expected to lead the other two new sites. The group contributes expertise in power procurement and onsite generation strategies.
- NVIDIA: The strategic silicon supplier for the program. Multiple reports reference large multi-year GPU supply commitments that align with Stargate’s long-term roadmap.
Where Is the Power for All This Development Coming From?
Sourcing 7 to 10 GW of new capacity is a non-trivial challenge. In the near term, much of the initial demand will fall to local utilities at the selected sites. But across the portfolio, expect a layered approach combining grid interconnections, PPAs, onsite generation, and advanced storage.
In Texas, both Shackelford/Abilene and Milam County fall within ERCOT, where interconnection queues and transmission upgrades are notoriously complex, yet where private-wire gas, renewables plus storage, and fast-track generation can accelerate time-to-power. The Abilene hub is already a marquee Oracle development, and while the exact PPA mix isn’t public, a blend of grid power, renewables, storage, and dispatchable thermal is the likely recipe. These builds will also be governed by Texas Senate Bill 6, which sets new reliability rules for large-scale generation.
The Lordstown, OH site places OpenAI squarely in PJM territory, where developers navigate queue reform, capacity market dynamics, and local siting politics. Expect advanced conductor upgrades, substation expansions, and staged energizations as key ingredients. Ohio regulators are already reshaping policy to accommodate surging AI-era demand, while utilities pursue alternate sources to meet the load.
Doña Ana County, NM will tap Western renewables and long-haul transmission, with the state’s incentive environment and available land/water positioning southern New Mexico as an emerging AI-compute hub. Site-specific sourcing isn’t yet public, but the region is well-placed for a renewables-heavy strategy.
Given the timelines and sheer magnitude of power required, onsite generation (ranging from gas turbines capable of burning high hydrogen blends to large-scale batteries and, over the decade, potentially SMRs) will likely supplement grid feeds to hit commissioning dates. Longer term, PPA structures and behind-the-meter solutions will be essential to mitigate congestion and curtailment risks.
Cooling, Density, and Campus Design Implications
Although vendors weren’t named in the disclosures, the scale and implied GPU counts point to extremely high-density white space that will require advanced liquid cooling. Expect a mix of direct-to-chip, rear-door heat exchangers, and potentially immersion cooling for select training clusters.
Backbone water systems are likely to be sized for multi-gigawatt campuses, with economizers and waste-heat loops deployed where climate and offtakers make them viable. By contrast, solutions that significantly strain local water resources are the least likely to be chosen.
On the electrical side, new utility transmission will shape substation-centric campus planning. Dedicated feeders for AI halls and separate utility corridors for rapid phasing are the logical next step. Greenfield sites without legacy industrial load may prove the fastest adopters of the most efficient, large-scale solutions.
This wave of development also elevates supply-chain modularity from experiment to necessity. Factory-built power modules, skid-mounted cooling plants, and standardized hall “blocks” of 50–100 MW each will enable deployments to ramp rapidly while reducing site-specific engineering friction. At this scale, modular isn’t just proof-of-concept; it becomes the backbone of how these campuses are built.
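The block-based approach above lends itself to simple capacity planning. A minimal sketch, assuming the 50–100 MW block sizes mentioned in the text (the campus sizes below are hypothetical examples, not disclosed figures):

```python
import math

# Sketch: how many standardized hall "blocks" a campus or portfolio
# needs, given a target capacity and a per-block size in MW.

def blocks_needed(target_mw: float, block_mw: float) -> int:
    """Number of factory-built blocks to reach target_mw, rounded up
    since a partial block still requires a full module."""
    return math.ceil(target_mw / block_mw)

# Illustrative only: a ~7 GW portfolio built from standardized blocks.
print(blocks_needed(7000, 100))  # 100 MW blocks
print(blocks_needed(7000, 50))   # 50 MW blocks
```

At portfolio scale the counts run from dozens to well over a hundred repeated blocks, which is exactly why standardized, factory-built modules beat bespoke engineering on schedule.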
Will There Be Significant Local Pushback? And How Can It Be Addressed?
It’s important to note that Stargate is not operating in a political vacuum. The initiative carries significant federal support—both for the data center sector broadly and Stargate specifically. That backing is tied to national priorities around AI capacity and strategic competition with China. Such political capital can prove decisive in areas like permitting coordination and transmission approvals, where federal agencies often have a gatekeeping role.
OpenAI CEO Sam Altman underscored the scale of ambition at the launch event, telling Forbes that what was unveiled was “just a small fraction of what this site will eventually be—and this site itself is just a small fraction of the overall build. And all of that still won’t be enough to serve even the demand of ChatGPT.”
Senator Ted Cruz, speaking at the same event, wasn’t subtle in his endorsement, saying: “Message number one: America will beat China in the race for AI.”