A new Microsoft chip could lead to more stable quantum computers

Microsoft announced today that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up. 

Researchers and companies have been working for years to build quantum computers, which could unlock dramatic new abilities to simulate complex materials and discover new ones, among many other possible applications. 

To achieve that potential, though, researchers must build systems that are both large enough and stable enough to perform computations. Many of the technologies being explored today, such as the superconducting qubits pursued by Google and IBM, are so delicate that the resulting systems need many extra qubits to correct errors.
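For a sense of what that overhead looks like, here is a rough, back-of-the-envelope sketch. It assumes a rotated surface code (a common error-correction scheme for superconducting qubits, not one specific to Google's or IBM's machines), and the machine size and code distance are hypothetical round numbers:

```python
# Illustrative error-correction overhead, assuming a rotated surface
# code: one logical qubit uses d^2 data qubits plus d^2 - 1 ancilla
# qubits at code distance d. Numbers are hypothetical, for scale only.
def physical_per_logical(d: int) -> int:
    """Physical qubits consumed by one error-corrected logical qubit."""
    return 2 * d**2 - 1

logical_qubits = 1_000   # hypothetical machine size
distance = 17            # hypothetical code distance
per_logical = physical_per_logical(distance)
print(f"distance {distance}: {per_logical} physical qubits per logical qubit")
print(f"{logical_qubits:,} logical -> ~{logical_qubits * per_logical:,} physical")
# distance 17: 577 physical qubits per logical qubit
# 1,000 logical -> ~577,000 physical
```

Topological qubits aim to shrink that multiplier by making each physical qubit far less error-prone to begin with.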

Microsoft has long been working on an alternative that could cut down on the overhead by using components that are far more stable. These components, called Majorana quasiparticles, are not real particles. Instead, they are special patterns of behavior that may arise inside certain physical systems and under certain conditions.

The pursuit has not been without setbacks, including a high-profile paper retraction by researchers associated with the company in 2018. But the Microsoft team, which has since pulled this research effort in house, claims it is now on track to build a fault-tolerant quantum computer containing a few thousand qubits in a matter of years and that it has a blueprint for building out chips that each contain a million qubits or so, a rough target that could be the point at which these computers really begin to show their power.

This week the company announced a few early successes on that path: piggybacking on a Nature paper published today that describes a fundamental validation of the system, the company says it has been testing a topological qubit and has wired up a chip containing eight of them. 

“You don’t get to a million qubits without a lot of blood, sweat, and tears and solving a lot of really difficult technical challenges along the way. And I do not want to understate any of that,” says Chetan Nayak, a Microsoft technical fellow and leader of the team pioneering this approach. That said, he says, “I think that we have a path that we very much believe in, and we see a line of sight.” 

Researchers outside the company are cautiously optimistic. “I’m very glad that [this research] seems to have hit a very important milestone,” says computer scientist Scott Aaronson, who heads the Quantum Information Center at the University of Texas at Austin. “I hope that this stands, and I hope that it’s built up.”

Even and odd

The first step in building a quantum computer is constructing qubits that can exist in fragile quantum states—not 0s and 1s like the bits in classical computers, but rather a mixture of the two. Maintaining qubits in these states and linking them up with one another is delicate work, and over the years a significant amount of research has gone into refining error correction schemes to make up for noisy hardware. 
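As a minimal illustration of that “mixture” (a toy state-vector sketch, not a model of any particular hardware):

```python
import numpy as np

# A qubit is a normalized 2-vector: alpha|0> + beta|1>.
# Measurement yields 0 with probability |alpha|^2, 1 with |beta|^2.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)  # an equal mix of 0 and 1
state = np.array([alpha, beta])
assert np.isclose(np.linalg.norm(state), 1.0)  # a valid quantum state

p0, p1 = np.abs(state) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")     # P(0) = 0.50, P(1) = 0.50
```

Noise perturbs alpha and beta continuously, which is why keeping many qubits in such states, and linking them together, is so delicate.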

For many years, theorists and experimentalists alike have been intrigued by the idea of creating topological qubits, which are constructed through mathematical twists and turns and have protection from errors essentially baked into their physics. “It’s been such an appealing idea to people since the early 2000s,” says Aaronson. “The only problem with it is that it requires, in a sense, creating a new state of matter that’s never been seen in nature.”

Microsoft has been on a quest to synthesize this state, called a Majorana fermion, in the form of quasiparticles. The Majorana was first proposed nearly 90 years ago as a particle that is its own antiparticle, which means two Majoranas will annihilate when they encounter one another. With the right conditions and physical setup, the company has been hoping to get behavior matching that of the Majorana fermion within materials.

In the last few years, Microsoft’s approach has centered on creating a very thin wire or “nanowire” from indium arsenide, a semiconductor. This material is placed in close proximity to aluminum, which becomes a superconductor close to absolute zero, and can be used to create superconductivity in the nanowire.

Ordinarily you’re not likely to find any unpaired electrons skittering about in a superconductor—electrons like to pair up. But under the right conditions in the nanowire, it’s theoretically possible for an electron to split itself in half, with each half hiding at either end of the wire. If these complex entities, called Majorana zero modes, can be coaxed into existence, they will be difficult to destroy, making them intrinsically stable. 
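The algebra behind that “half an electron” picture fits in a few lines (a textbook toy calculation for a single electron mode, not a model of Microsoft’s device). Writing the electron mode as an annihilation operator c, the two halves are the Majorana combinations γ1 = c + c† and γ2 = i(c† − c); each is its own conjugate, echoing the self-antiparticle property above, and together they reconstitute the electron:

```python
import numpy as np

# One fermionic mode in the basis {|0 electrons>, |1 electron>}.
c = np.array([[0, 1], [0, 0]], dtype=complex)  # annihilation operator
cd = c.conj().T                                 # creation operator

# Split the mode into two Majorana operators, one per wire end.
g1 = c + cd
g2 = 1j * (cd - c)

for g in (g1, g2):
    assert np.allclose(g, g.conj().T)     # self-conjugate: "its own antiparticle"
    assert np.allclose(g @ g, np.eye(2))  # squares to the identity

# The pair jointly stores the occupation: the number operator n = c†c
# is recovered from the two spatially separated halves.
n = cd @ c
assert np.allclose(n, (np.eye(2) + 1j * g1 @ g2) / 2)
print("one electron mode = two Majorana halves")
```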

“Now you can see the advantage,” says Sankar Das Sarma, a theoretical physicist at the University of Maryland, College Park, who did early work on this concept. “You cannot destroy a half electron, right? If you try to destroy a half electron, that means only a half electron is left. That’s not allowed.”

In 2023, the Microsoft team published a paper in the journal Physical Review B claiming that this system had passed a specific protocol designed to assess the presence of Majorana zero modes. This week in Nature, the researchers reported that they can “read out” the information in these nanowires—specifically, whether there are Majorana zero modes hiding at the wires’ ends. If there are, that means the wire has an extra, unpaired electron.

“What we did in the Nature paper is we showed how to measure the even or oddness,” says Nayak. “To be able to tell whether there’s 10 million or 10 million and one electrons in one of these wires.” That’s an important step by itself, because the company aims to use those two states—an even or odd number of electrons in the nanowire—as the 0s and 1s in its qubits. 
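Put plainly, the logical bit is just the parity of the electron count, so a readout only has to distinguish N electrons from N+1 (a trivial sketch of the encoding; the counts come from Nayak’s example):

```python
def qubit_value(n_electrons: int) -> int:
    """Topological encoding: even occupation -> 0, odd -> 1."""
    return n_electrons % 2

print(qubit_value(10_000_000))  # 0: even, logical 0
print(qubit_value(10_000_001))  # 1: odd,  logical 1
```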

If these quasiparticles exist, it should be possible to “braid” the four Majorana zero modes in a pair of nanowires around one another by making specific measurements in a specific order. The result would be a qubit with a mix of these two states, even and odd. Nayak says the team has done just that, creating a two-level quantum system, and that it is currently working on a paper on the results.
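A short calculation shows why exchanging Majorana modes acts on the stored information (a standard textbook sketch using Pauli-matrix representations of four Majorana operators; it is not the measurement-based protocol Nayak describes, which achieves the same effect without physically moving anything):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def braid(ga, gb):
    """Unitary that exchanges two Majorana modes: (1 + ga@gb)/sqrt(2)."""
    return (np.eye(len(ga)) + ga @ gb) / np.sqrt(2)

# Four Majorana modes (the ends of two nanowires), Jordan-Wigner mapped.
g1, g2 = np.kron(X, I2), np.kron(Y, I2)
g3, g4 = np.kron(Z, X), np.kron(Z, Y)

B12 = braid(g1, g2)
assert np.allclose(B12 @ B12.conj().T, np.eye(4))  # a valid quantum gate
assert np.allclose(B12 @ g1 @ B12.conj().T, -g2)   # the exchange swaps the
assert np.allclose(B12 @ g2 @ B12.conj().T, g1)    # two modes, up to a sign

# The qubit is one wire's even/odd parity, Z_L = i*g1*g2. Exchanging
# modes from *different* wires rotates the encoded state:
ZL = 1j * g1 @ g2
B23 = braid(g2, g3)
assert not np.allclose(B23 @ ZL @ B23.conj().T, ZL)
print("braids act as gates on the even/odd qubit")
```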

Researchers outside the company say they cannot comment on the qubit results, since that paper is not yet available. But some have hopeful things to say about the findings published so far. “I find it very encouraging,” says Travis Humble, director of the Quantum Science Center at Oak Ridge National Laboratory in Tennessee. “It is not yet enough to claim that they have created topological qubits. There’s still more work to be done there,” he says. But “this is a good first step toward validating the type of protection that they hope to create.” 

Others are more skeptical. Physicist Henry Legg of the University of St Andrews in Scotland, who previously criticized Physical Review B for publishing the 2023 paper without enough data for the results to be independently reproduced, is not convinced that the team is seeing evidence of Majorana zero modes in its Nature paper. He says that the company’s early tests did not put it on solid footing to make such claims. “The optimism is definitely there, but the science isn’t there,” he says.

One potential complication is impurities in the device, which can create conditions that look like Majorana particles. But Nayak says the evidence has only grown stronger as the research has proceeded. “This gives us confidence: We are manipulating sophisticated devices and seeing results consistent with a Majorana interpretation,” he says.

“They have satisfied many of the necessary conditions for a Majorana qubit, but there are still a few more boxes to check,” Das Sarma said after seeing preliminary results on the qubit. “The progress has been impressive and concrete.”

Scaling up

On the face of it, Microsoft’s topological efforts seem woefully behind in the world of quantum computing—the company is just now working to combine qubits in the single digits while others have tied together more than 1,000. But both Nayak and Das Sarma say other efforts had a strong head start because they involved systems that already had a solid grounding in physics. Work on the topological qubit, on the other hand, has meant starting from scratch. 

“We really were reinventing the wheel,” Nayak says, likening the team’s efforts to the early days of semiconductors, when there was so much to sort out about electron behavior and materials, and transistors and integrated circuits still had to be invented. That’s why this research path has taken almost 20 years, he says: “It’s the longest-running R&D program in Microsoft history.”

Some support from the US Defense Advanced Research Projects Agency could help the company catch up. Early this month, Microsoft was selected as one of two companies to continue work on the design of a scaled-up system, through a program focused on underexplored approaches that could lead to utility-scale quantum computers—those whose benefits exceed their costs. The other company selected is PsiQuantum, a startup that is aiming to build a quantum computer containing up to a million qubits using photons.

Many of the researchers MIT Technology Review spoke with would still like to see how this work plays out in scientific publications, but they were hopeful. “The biggest disadvantage of the topological qubit is that it’s still kind of a physics problem,” says Das Sarma. “If everything Microsoft is claiming today is correct … then maybe right now the physics is coming to an end, and engineering could begin.” 

This story was updated with Henry Legg’s current institutional affiliation.
