Inside Microsoft’s Global AI Infrastructure: The Fairwater Blueprint for Distributed Supercomputing

Microsoft’s newest AI data center in Wisconsin, known as “Fairwater,” is being framed as far more than a massive, energy-intensive compute hub. The company describes it as a community-scale investment — one that pairs frontier-model training capacity with regional development. Microsoft has prepaid local grid upgrades, partnered with the Root-Pike Watershed Initiative Network to restore nearby wetlands and prairie sites, and launched Wisconsin’s first Datacenter Academy in collaboration with Gateway Technical College, aiming to train more than 1,000 students over the next five years.

The company is also highlighting its broader statewide impact: 114,000 residents trained in AI-related skills through Microsoft partners, alongside the opening of a new AI Co-Innovation Lab at the University of Wisconsin–Milwaukee, focused on applying AI in advanced manufacturing.

It’s Just One Big, Happy AI Supercomputer…

The Fairwater facility is not a conventional, multi-tenant cloud region. It’s engineered to operate as a single, unified AI supercomputer, built around a flat networking fabric that interconnects hundreds of thousands of accelerators. Microsoft says the campus, purpose-built for frontier-model training, will deliver up to 10× the performance of today’s fastest supercomputers.

Physically, the site encompasses three buildings across 315 acres, totaling 1.2 million square feet of floor area, all supported by 120 miles of medium-voltage underground cable, 72.6 miles of mechanical piping, and 46.6 miles of deep foundation piles.

At the rack level, each NVL72 system integrates 72 NVIDIA Blackwell GPUs (GB200), fused together via NVLink/NVSwitch into a single high-bandwidth memory domain capable of 1.8 TB/s GPU-to-GPU throughput and 14 TB of pooled memory per rack. This creates a topology that may appear as independent servers but can be orchestrated as a single, giant accelerator.

Microsoft reports that one NVL72 can process up to 865,000 tokens per second. Future Fairwater-class deployments (including those under construction in the UK and Norway) are expected to adopt the GB300 architecture, extending pooled memory and overall system coherence even further.
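To make those rack-level figures concrete, here is a quick back-of-envelope sketch in Python using only the numbers quoted above; the per-GPU breakdowns are simple divisions for illustration, not vendor specifications.

```python
# Back-of-envelope view of a GB200 NVL72 rack, using the figures cited in this article.
GPUS_PER_RACK = 72
POOLED_MEMORY_TB = 14          # pooled memory across the NVLink domain (per rack)
NVLINK_GPU_TO_GPU_TBPS = 1.8   # GPU-to-GPU NVLink throughput
RACK_TOKENS_PER_SEC = 865_000  # Microsoft's reported per-rack processing rate

per_gpu_memory_gb = POOLED_MEMORY_TB * 1000 / GPUS_PER_RACK    # ~194 GB
per_gpu_tokens_per_sec = RACK_TOKENS_PER_SEC / GPUS_PER_RACK   # ~12,000 tokens/s

print(f"Memory share per GPU: ~{per_gpu_memory_gb:.0f} GB")
print(f"Throughput per GPU:   ~{per_gpu_tokens_per_sec:,.0f} tokens/s")
print(f"Intra-rack GPU-to-GPU link: {NVLINK_GPU_TO_GPU_TBPS} TB/s")
```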

To scale beyond a single rack, Azure links racks into pods using InfiniBand and Ethernet fabrics running at 800 Gb/s in a non-blocking fat-tree topology, ensuring that every GPU can communicate with every other at line rate. Multiple pods are then stitched together across each building using hop-count minimization techniques, and even the physical layout reflects this optimization: a two-story rack configuration shortens the distance between racks, further reducing latency.
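A non-blocking fat-tree means every endpoint can drive its full line rate simultaneously, so pod-scale bandwidth falls out of simple multiplication. The sketch below assumes a hypothetical rack count per pod (Microsoft has not published that figure) purely to show the arithmetic.

```python
# Pod-scale bandwidth under a non-blocking fat-tree, as described above.
LINE_RATE_GBPS = 800   # per-GPU fabric speed (InfiniBand/Ethernet), from the article
GPUS_PER_RACK = 72
RACKS_PER_POD = 4      # hypothetical assumption for illustration only

gpus_per_pod = GPUS_PER_RACK * RACKS_PER_POD
# In a non-blocking topology, bisection bandwidth is half the aggregate injection bandwidth.
aggregate_injection_tbps = gpus_per_pod * LINE_RATE_GBPS / 1000
bisection_tbps = aggregate_injection_tbps / 2

print(f"GPUs per pod: {gpus_per_pod}")
print(f"Aggregate injection bandwidth: {aggregate_injection_tbps:.1f} Tb/s")
print(f"Bisection bandwidth: {bisection_tbps:.1f} Tb/s")
```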

The architecture scales globally as well. Through Microsoft’s AI Wide Area Network (AI WAN), multiple regional campuses operate in concert as a distributed supercomputer, pooling compute, storage, and scheduling resources across geographies for both resiliency and elastic capacity.

What’s the big-picture significance here? Fairwater represents a systems-of-systems approach — from silicon to server, rack, pod, building, campus, and ultimately to WAN — with every layer optimized for frontier-scale AI training throughput, rather than general-purpose cloud elasticity. It signals Microsoft’s intent to build a worldwide, interlinked AI supercomputing fabric, a development certain to shape ongoing debates about the architecture and economics of AI.

Don’t Forget About the Storage Requirements

Behind Fairwater’s GPU-driven power lies a re-architected Azure storage stack, designed to aggregate both capacity and bandwidth across thousands of nodes and hundreds of thousands of drives, eliminating the need for manual sharding. Microsoft reports that a single Azure Blob Storage account can sustain more than two million operations per second, while BlobFuse2 provides low-latency, high-throughput access that effectively brings object storage to GPU node-local speed.
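For a rough sense of what a two-million-ops-per-second account implies for feeding GPUs, the sketch below converts that op rate into read throughput. The average read size is an assumption, and storage accounts also enforce separate bandwidth caps, so this shows only the op-rate side of the equation.

```python
# What 2M storage ops/s could mean for training-data reads (op-rate side only).
OPS_PER_SECOND = 2_000_000
AVG_READ_SIZE_MB = 4   # assumption: a typical large range-read, not a published figure

throughput_gb_s = OPS_PER_SECOND * AVG_READ_SIZE_MB / 1024
print(f"Theoretical read throughput: ~{throughput_gb_s:,.0f} GB/s "
      f"at {AVG_READ_SIZE_MB} MB per operation")
# In practice, per-account bandwidth limits cap delivery well below this op-rate ceiling.
```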

The physical footprint of this storage layer is equally revealing. Microsoft notes that the dedicated storage and compute facility at Fairwater stretches the length of five football fields, underscoring that AI infrastructure at scale is as much about data movement and persistence as it is about raw compute. In other words, the supercomputer’s backbone isn’t just made of GPUs and memory. It’s built on the ability to feed them efficiently.

Keep Chill and Energized

As you might’ve heard, the days of relying solely on air cooling are effectively over, at least for AI-focused data centers. Traditional air systems simply can’t manage GB200-era rack densities. To meet the massive thermal loads of frontier AI hardware, Microsoft’s Fairwater campus employs facility-scale, closed-loop liquid cooling. The system is filled once during construction and recirculates continuously, using 172 twenty-foot fans to chill large heat-exchanger fins along the building’s exterior.

Microsoft reports that over 90% of the site’s capacity operates on this zero-operational-water loop, while the remaining ~10%—housing legacy or general-purpose servers—uses outside air cooling and switches to water-based heat exchange only during extreme heat events. Because the loop is sealed and non-evaporative, Fairwater’s total water use is both limited and precisely measurable, reinforcing Microsoft’s goal of predictable, sustainable operations.

Power strategy, meanwhile, remains a parallel priority. Although the company’s announcement did not detail electrical sourcing, Microsoft has committed to pre-paying grid upgrades to avoid costs being passed on to local ratepayers. The company will also match any fossil-based consumption 1:1 with new carbon-free generation, including a 250-MW solar power purchase agreement in Portage County.

Still, near-term reliability realities persist. Leadership has acknowledged that new natural-gas generation is likely to be located near the campus to meet startup demand and grid stability. As one executive told Reuters, “This is LNG territory,” underscoring the pragmatic challenge of powering multi-gigawatt campuses in constrained markets, where bringing capacity online often requires temporary fossil generation alongside long-term renewable investments.

One of Many

Fairwater is just the beginning. Microsoft is already replicating identical, AI-optimized data centers across the United States and abroad, with two major European campuses now advancing under the same architectural model:

  • United Kingdom (Loughton): Part of a $30 billion UK investment program extending through 2028, which includes $15 billion in capital expenditures to expand cloud and AI infrastructure. The effort—developed in partnership with nScale—will deliver the country’s largest supercomputer, incorporating more than 23,000 NVIDIA GPUs.

  • Norway (Narvik): In partnership with nScale and Aker, Microsoft is investing $6.2 billion to build a hydropower-backed AI campus designed around abundant, low-cost renewable energy and the cooling advantages of Norway’s northern climate. Initial services are expected to come online in 2026.

In Wisconsin, Microsoft has expanded its commitment to more than $7 billion, adding a second data center of similar scale. The first site will employ about 500 full-time workers, growing to roughly 800 after the second is operational, with as many as 3,000 construction jobs created during the build-out phase.

Globally, Microsoft has stated plans to invest approximately $80 billion in 2025 alone to expand its portfolio of AI-enabled data centers. To decarbonize the electricity supporting these facilities, the company has signed a framework agreement with Brookfield to deliver more than 10.5 gigawatts of new renewable generation over the next five years—at the time, the largest corporate clean-energy deal in history.

Microsoft has also entered into a first-of-its-kind 50-MW fusion power purchase agreement with Helion, targeting 2028 operations, alongside regional contracts such as the 250-MW solar PPA in Portage County, Wisconsin. Together, these initiatives illustrate how Microsoft is coupling massive infrastructure expansion with an aggressive clean-energy transition—an approach increasingly mirrored across the hyperscale landscape.

Is This a Picture of the Future or Just an Interim Step While AI Finds Its Place?

Microsoft isn’t merely building larger GPU rooms. It’s industrializing a new architectural pattern for AI factories. The model centers on NVL72 racks as compute building blocks, non-blocking 800 Gb/s fabrics, exabyte-scale storage, facility-scale liquid cooling, and a global AI Wide Area Network that links these campuses into a distributed supercomputer. Wisconsin serves as the flagship; the UK and Norway are the next stamps in what is rapidly becoming a worldwide deployment strategy. The approach fuses ruthless systems engineering with a pragmatic energy posture, and the financial magnitude is staggering.

To put that investment into context: over the next five years, Microsoft, AWS, Google, Meta, and Oracle together are projected to spend roughly $2 trillion on data center expansion. By comparison, the entire U.S. Interstate Highway System, built over 35 years (from 1956 to its “completion” in 1991), cost just over $300 billion in 2025 dollars. That means today’s hyperscale AI buildout represents nearly seven times the inflation-adjusted cost of the national highway network.

While not an apples-to-apples comparison, the analogy underscores the scope of the transformation underway. The AI infrastructure boom is poised to reshape economies, workforces, and regional development on a scale comparable to the infrastructure projects that defined the industrial age; only this time, the network being built is digital.
