
What 6 GW of GPUs Really Means
The 6 GW of accelerator load envisioned under the OpenAI–AMD partnership will be distributed across multiple hyperscale AI factory campuses. If OpenAI begins with 1 GW of deployment in 2026, subsequent phases will likely be spread across regions to balance supply-chain exposure, latency zones, and power-procurement risk.
Importantly, this represents entirely new investment in both power infrastructure and GPU capacity. OpenAI and its partners have already outlined multi-GW ambitions under the broader Stargate program; this new initiative adds another major tranche to that roadmap.
Designing for the AI Factory Era
These upcoming facilities are being purpose-built for next-generation AI factories, where MI450-class clusters could drive rack densities exceeding 100 kW. That level of compute concentration makes advanced power and cooling architectures mandatory, not optional; a coolant-flow sketch after the list below shows why. Expected solutions include:
- Warm-water liquid cooling (manifold, rear-door, and CDU variants) as standard practice.
- Facility-scale water loops and heat-reuse systems—including potential district-heating partnerships where feasible.
- Medium-voltage distribution within buildings, emphasizing busway-first designs and expanded fault-current engineering.
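To put those densities in perspective, here is a minimal back-of-envelope sketch of the coolant flow a single direct-liquid-cooled rack demands, using the basic heat-transfer relation Q = ṁ·cp·ΔT. The 120 kW rack load and 10 °C temperature rise are illustrative assumptions, not vendor specifications:

```python
# Coolant flow for a direct-liquid-cooled rack, from Q = m_dot * cp * dT.
# Rack load and temperature rise are illustrative assumptions.

RACK_HEAT_KW = 120.0     # assumed rack thermal load (kW)
DELTA_T_C = 10.0         # assumed supply-to-return temperature rise (degC)
CP_WATER = 4.186         # specific heat of water (kJ/kg·K)
RHO_WATER = 0.997        # density of water near 25 degC (kg/L)

mass_flow_kg_s = RACK_HEAT_KW / (CP_WATER * DELTA_T_C)
volume_flow_lpm = mass_flow_kg_s / RHO_WATER * 60.0

print(f"Mass flow:   {mass_flow_kg_s:.2f} kg/s")
print(f"Volume flow: {volume_flow_lpm:.0f} L/min per rack")
# ~2.87 kg/s, roughly 170 L/min per rack: manifold and CDU capacity,
# not floor space, become the binding constraints.
```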
While AMD has not yet disclosed thermal design power (TDP) specifications for the MI450, a 1 GW accelerator-load target implies a fleet plausibly in the hundreds of thousands of devices. That scale assumes liquid cooling, ultra-dense racks, and compact physical layouts that keep network latency low, pushing architectures decisively toward an “AI-first” orientation.
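A rough sketch of that arithmetic, bracketing a range of hypothetical per-accelerator TDPs since AMD has published none for the MI450:

```python
# Accelerator count implied by a 1 GW accelerator-load phase.
# Per-device TDP is a bracketed assumption; AMD has not published
# MI450 power specifications.

PHASE_LOAD_W = 1e9                   # 1 GW of accelerator load

for tdp_w in (1000, 1400, 1800):     # hypothetical TDPs (W)
    print(f"@ {tdp_w} W/device -> ~{PHASE_LOAD_W / tdp_w:,.0f} accelerators")

# @ 1000 W/device -> ~1,000,000 accelerators
# @ 1400 W/device -> ~714,286 accelerators
# @ 1800 W/device -> ~555,556 accelerators
# Host CPUs, fabric, and cooling push the facility draw well above
# the accelerator load alone.
```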
Design considerations for these AI factories will likely include:
- Liquid-to-liquid cooling plants engineered for step-function capacity adders (200–400 MW blocks).
- Optics-friendly white space layouts with short-reach topologies, fiber raceways, and aisles optimized for module swaps.
- Substation adjacency and on-site generation envelopes negotiated during early land-banking phases.
Networking, Memory, and Power Integration
As compute density scales, networking and memory bottlenecks will define infrastructure design. Expect fat-tree and dragonfly network topologies, 800 Gb/s–1.6 Tb/s interconnects, and aggressive optical-module roadmaps to minimize collective-operation latency, aligning with recent disclosures from major networking vendors.
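As a rough illustration of why switch radix drives these topology choices, the classic three-tier k-ary fat-tree result (k-port switches support k³/4 end hosts) can be sketched directly; the radix values below are illustrative switch configurations, not tied to any announced product:

```python
# Host scalability of a classic three-tier k-ary fat-tree:
# k-port switches support k**3 / 4 end hosts (Al-Fares et al., 2008).
# The radix values below are illustrative only.

def fat_tree_hosts(k: int) -> int:
    """Maximum end hosts for a non-blocking k-ary fat-tree."""
    return k ** 3 // 4

for radix in (32, 64, 128):
    print(f"radix {radix:>3}: up to {fat_tree_hosts(radix):,} hosts")

# radix  32: up to 8,192 hosts
# radix  64: up to 65,536 hosts
# radix 128: up to 524,288 hosts
# Doubling switch radix multiplies attachable hosts by 8, which is
# why optical-module roadmaps matter so much at this scale.
```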
HBM (high-bandwidth memory) supply will become existential: every top-tier GPU depends on it. AMD’s deal rests on assured HBM volume, and Samsung’s recent capacity expansions point to an industry racing to meet that demand.
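A quick sketch of why that volume matters, assuming eight HBM stacks per accelerator (in line with current flagship parts; the MI450 memory configuration is undisclosed) and illustrative fleet sizes:

```python
# HBM stacks implied per build phase. Stacks-per-accelerator is an
# assumption: current flagship parts ship with 8 stacks, and the
# MI450 memory configuration has not been disclosed.

STACKS_PER_ACCELERATOR = 8
for fleet in (100_000, 500_000):     # illustrative fleet sizes
    print(f"{fleet:,} accelerators -> {fleet * STACKS_PER_ACCELERATOR:,} HBM stacks")

# 100,000 accelerators -> 800,000 HBM stacks
# 500,000 accelerators -> 4,000,000 HBM stacks (before yield and spares)
```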
Finally, powering these campuses will require hybridized energy strategies, combining grid interconnects with on-site fast-start turbines, H₂-ready fuel cells, and battery energy storage systems (BESS) for ride-through and peak-shaving. Developers will need to address local interconnection and permitting challenges upfront to meet these enormous load requirements.
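For a sense of the storage scale involved, here is a first-order ride-through sizing sketch; the load block, bridge duration, and depth of discharge are illustrative assumptions, not figures from any announced site:

```python
# First-order BESS sizing for ride-through until on-site generation
# picks up. All figures are illustrative assumptions, not a design
# basis for any announced site.

CRITICAL_LOAD_MW = 400.0     # assumed critical load block
RIDE_THROUGH_MIN = 15.0      # assumed bridge to fast-start turbines
DEPTH_OF_DISCHARGE = 0.9     # usable fraction of nameplate capacity

energy_mwh = CRITICAL_LOAD_MW * RIDE_THROUGH_MIN / 60.0
nameplate_mwh = energy_mwh / DEPTH_OF_DISCHARGE

print(f"Energy to bridge: {energy_mwh:.0f} MWh")
print(f"Nameplate needed: {nameplate_mwh:.0f} MWh")
# 400 MW for 15 minutes is ~100 MWh delivered (~111 MWh nameplate),
# i.e., utility-scale storage just to cover the start-up gap.
```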
Are There Implications for the Data Center Industry?
OpenAI’s addition of AMD as a true gigawatt-scale GPU supplier will ripple through the entire data center ecosystem, from hyperscale cloud platforms to colocation providers and boutique AI builders. Every segment will now face pressure to validate ROCm-native software stacks and consider dual-path accelerator SKUs.
While NVIDIA’s CUDA platform maintains a substantial lead in developer tooling and ecosystem maturity, OpenAI’s endorsement of AMD accelerators at this scale changes the incentive structure. A successful migration (or even partial diversification) away from CUDA dependence would strengthen buyer leverage on both pricing and lead times across the industry.
Software Maturity and Ecosystem Acceleration
To fully capitalize on MI450-class clusters, the supporting software ecosystem must mature rapidly. This means optimized ROCm environments, tuned PyTorch/XLA kernels, and mixed-precision graph compilers capable of overlapping compute and communication at scale. The OpenAI collaboration will accelerate this process, driving kernel-library refinement, collective communication tuning, and scheduler integration that enterprise adopters can inherit downstream.
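The shape of that work is visible even in a minimal example. The sketch below shows a standard mixed-precision training step in PyTorch; because ROCm builds of PyTorch expose AMD GPUs through the torch.cuda API, the same code runs on either vendor’s hardware. The model and tensors are placeholders, not a production configuration:

```python
# A minimal mixed-precision training step; model and data are
# placeholders. On ROCm builds of PyTorch, AMD GPUs are exposed via
# the torch.cuda API, so this code is vendor-portable as written.

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # HIP-backed on ROCm
model = nn.Sequential(nn.Linear(1024, 1024), nn.GELU(), nn.Linear(1024, 1024)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.amp.GradScaler(device)  # loss scaling for fp16 stability

x = torch.randn(32, 1024, device=device)
target = torch.randn(32, 1024, device=device)

optimizer.zero_grad(set_to_none=True)
with torch.autocast(device_type=device, dtype=torch.float16):
    loss = nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()  # backward on the scaled loss
scaler.step(optimizer)         # unscales grads, skips step on inf/nan
scaler.update()
```

Kernel-library refinement, collective tuning, and scheduler integration all sit underneath loops like this one, which is why they transfer so directly to downstream adopters.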
For smaller or regional data center operators, that acceleration provides an important signal: AMD-based GPU infrastructure is becoming a practical, supportable alternative to NVIDIA, backed by ecosystem momentum rather than experimental interest.
Memory, Packaging, and the Bottleneck Ahead
Even with a broader field of GPU suppliers, HBM and advanced packaging remain the long pole in the tent. Production capacity for HBM3E and HBM4 will ultimately gate how quickly large-scale GPU deployments can materialize. OpenAI’s Samsung and SK Group partnerships, announced October 1, aim to mitigate that risk by expanding high-bandwidth memory output during the critical 2026–2028 build window: exactly when AMD’s first 1 GW phase is scheduled to come online.
Bottom Line for Builders and Operators
- Design now for MI450-class liquid cooling and 100–150 kW+ racks.
- Engineer around HBM constraints—plan staggered cluster ramps, test/soak environments, and flexible network fabrics.
- Diversify power supply through grid and on-site generation, medium-voltage (MV) distribution, and BESS for peak-shaving and micro-event coverage.
- Treat software as infrastructure: invest in ROCm performance engineering pipelines and view communication libraries as critical-path assets, not optional tools.
OpenAI × Samsung (and Korea) — The “Stargate Korea” Pillar
As part of its broader Stargate initiative, OpenAI has announced strategic partnerships in South Korea with Samsung and SK Group to expand global AI infrastructure and boost production of advanced memory components. The collaboration combines new AI data center capacity in Korea with scaled-up HBM and DRAM manufacturing, aligning with the nation’s ambition to become a top-three global AI power supported by national-level industrial policy.
The partnerships target the most acute bottleneck in AI compute scaling: high-bandwidth memory supply. Reports suggest aggressive wafer-start schedules designed to meet Stargate-scale buildout timelines, enabling a steadier flow of HBM components for training-class GPUs.
Industry sources also point to AI data center developments within Korea, including projects involving SK-affiliated operators, forming a Northeast Asia hub that co-locates compute, optics, and memory manufacturing. This regional concentration should shorten component-logistics lead times and create a vertically aligned supply corridor from chip fabrication to deployment.
Through these partnerships, Korean vendors in memory, packaging, and optical interconnects gain tighter integration with OpenAI’s hardware roadmaps, a development that directly benefits AMD’s deployment strategy by stabilizing HBM supply through the critical window from 2026 onward.
Ultimately, the Stargate Korea partnerships reinforce AMD’s emergence as a credible, scalable alternative to NVIDIA in large-scale AI compute and solidify OpenAI’s long-term global development roadmap.