A new Microsoft chip could lead to more stable quantum computers

Microsoft announced today that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up. 

Researchers and companies have been working for years to build quantum computers, which could unlock dramatic new abilities to simulate complex materials and discover new ones, among many other possible applications. 

To achieve that potential, though, researchers must build systems that are both big enough and stable enough to perform useful computations. Many of the technologies being explored today, such as the superconducting qubits pursued by Google and IBM, are so delicate that the resulting systems need many extra qubits to correct errors. 

Microsoft has long been working on an alternative that could cut down on the overhead by using components that are far more stable. These components, called Majorana quasiparticles, are not real particles. Instead, they are special patterns of behavior that may arise inside certain physical systems and under certain conditions.

The pursuit has not been without setbacks, including the high-profile retraction of a 2018 paper by researchers associated with the company. But the Microsoft team, which has since pulled this research effort in-house, claims it is now on track to build a fault-tolerant quantum computer containing a few thousand qubits in a matter of years. It also says it has a blueprint for building out chips that each contain a million qubits or so, a rough target that could be the point at which these computers really begin to show their power.

This week the company announced a few early successes on that path: piggybacking on a Nature paper published today that describes a fundamental validation of the system, the company says it has been testing a topological qubit and has wired up a chip containing eight of them. 

“You don’t get to a million qubits without a lot of blood, sweat, and tears and solving a lot of really difficult technical challenges along the way. And I do not want to understate any of that,” says Chetan Nayak, a Microsoft technical fellow and leader of the team pioneering this approach. That said, he says, “I think that we have a path that we very much believe in, and we see a line of sight.” 

Researchers outside the company are cautiously optimistic. “I’m very glad that [this research] seems to have hit a very important milestone,” says computer scientist Scott Aaronson, who heads the Quantum Information Center at the University of Texas at Austin. “I hope that this stands, and I hope that it’s built up.”

Even and odd

The first step in building a quantum computer is constructing qubits that can exist in fragile quantum states—not 0s and 1s like the bits in classical computers, but rather a mixture of the two. Maintaining qubits in these states and linking them up with one another is delicate work, and over the years a significant amount of research has gone into refining error correction schemes to make up for noisy hardware. 
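
For reference, the “mixture of the two” has a standard shorthand. In textbook notation (a generic illustration, not anything specific to Microsoft’s hardware), a qubit’s state is a weighted superposition of the two classical values:

% A qubit state: a superposition of 0 and 1 with complex weights.
% A measurement returns 0 with probability |alpha|^2 and 1 with probability |beta|^2.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\]

Noise that perturbs the amplitudes α and β is precisely what error correction schemes exist to fight.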

For many years, theorists and experimentalists alike have been intrigued by the idea of creating topological qubits, which are constructed through mathematical twists and turns and have protection from errors essentially baked into their physics. “It’s been such an appealing idea to people since the early 2000s,” says Aaronson. “The only problem with it is that it requires, in a sense, creating a new state of matter that’s never been seen in nature.”

Microsoft has been on a quest to synthesize this state, called a Majorana fermion, in the form of quasiparticles. The Majorana was first proposed nearly 90 years ago as a particle that is its own antiparticle, which means two Majoranas will annihilate when they encounter one another. With the right conditions and physical setup, the company has been hoping to get behavior matching that of the Majorana fermion within materials.

In the last few years, Microsoft’s approach has centered on creating a very thin wire, or “nanowire,” from indium arsenide, a semiconductor. This material is placed in close proximity to aluminum, which becomes a superconductor at temperatures close to absolute zero and can induce superconductivity in the nanowire.

Ordinarily you’re not likely to find any unpaired electrons skittering about in a superconductor—electrons like to pair up. But under the right conditions in the nanowire, it’s theoretically possible for an unpaired electron to effectively split in two, with each half hiding at either end of the wire. If these complex entities, called Majorana zero modes, can be coaxed into existence, they will be difficult to destroy, making them intrinsically stable. 
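
For readers who want the math behind “half an electron,” here is the standard condensed-matter bookkeeping (an illustration in textbook notation, not taken from Microsoft’s papers). A single electron mode in the wire, written as an annihilation operator c, can be formally split into two Majorana operators, each of which is its own antiparticle and each of which sits at one end of the wire:

% Decompose one fermionic mode c into two Hermitian Majorana operators.
% Each gamma is its own conjugate (gamma = gamma^dagger), i.e. its own antiparticle.
\[
  \gamma_1 = c + c^{\dagger}, \qquad
  \gamma_2 = i\,(c - c^{\dagger}), \qquad
  \gamma_j^{\dagger} = \gamma_j, \quad \gamma_j^2 = 1
\]
% Recovering the whole electron requires combining both halves:
\[
  c = \tfrac{1}{2}\,(\gamma_1 + i\,\gamma_2)
\]

Because the two halves sit at opposite ends of the wire, noise acting locally on one end cannot easily corrupt the encoded information, which is the intrinsic stability described above.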

“Now you can see the advantage,” says Sankar Das Sarma, a theoretical physicist at the University of Maryland, College Park, who did early work on this concept. “You cannot destroy a half electron, right? If you try to destroy a half electron, that means only a half electron is left. That’s not allowed.”

In 2023, the Microsoft team published a paper in the journal Physical Review B claiming that this system had passed a specific protocol designed to assess the presence of Majorana zero modes. This week in Nature, the researchers reported that they can “read out” the information in these nanowires—specifically, whether there are Majorana zero modes hiding at the wires’ ends. If there are, that means the wire has an extra, unpaired electron.

“What we did in the Nature paper is we showed how to measure the even or oddness,” says Nayak. “To be able to tell whether there’s 10 million or 10 million and one electrons in one of these wires.” That’s an important step by itself, because the company aims to use those two states—an even or odd number of electrons in the nanowire—as the 0s and 1s in its qubits. 
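
In the same textbook notation as above (again an illustration, not the Nature paper’s own formalism), that even-or-odd electron count is the fermion parity of the wire, and the two end Majoranas encode it jointly:

% Fermion parity of the mode shared by the wire's two end Majoranas:
% +1 for an even number of electrons, -1 for odd.
\[
  P = i\,\gamma_1 \gamma_2 = 1 - 2\,c^{\dagger}c,
  \qquad P = \pm 1
\]

The two eigenvalues of P are exactly the 0 and 1 the company intends to compute with.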

If these quasiparticles exist, it should be possible to “braid” the four Majorana zero modes in a pair of nanowires around one another by making specific measurements in a specific order. The result would be a qubit with a mix of these two states, even and odd. Nayak says the team has done just that, creating a two-level quantum system, and that it is currently working on a paper on the results.
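
Concretely, in the same illustrative notation (the qubit paper itself is not yet available): with four Majoranas γ₁, γ₂, γ₃, γ₄ spread across two wires and the total parity fixed, a two-level system remains, and its basis states can be labeled by the parity of one wire:

% Logical qubit basis from four Majoranas (two wires) at fixed overall parity:
\[
  |0\rangle \equiv |\,i\gamma_1\gamma_2 = +1\,\rangle,
  \qquad
  |1\rangle \equiv |\,i\gamma_1\gamma_2 = -1\,\rangle
\]

Measuring joint parities such as iγ₂γ₃ in a prescribed order then stands in for physically braiding the modes, rotating the qubit between these basis states.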

Researchers outside the company say they cannot comment on the qubit results, since that paper is not yet available. But some have hopeful things to say about the findings published so far. “I find it very encouraging,” says Travis Humble, director of the Quantum Science Center at Oak Ridge National Laboratory in Tennessee. “It is not yet enough to claim that they have created topological qubits. There’s still more work to be done there,” he says. But “this is a good first step toward validating the type of protection that they hope to create.” 

Others are more skeptical. Physicist Henry Legg of the University of St Andrews in Scotland, who previously criticized Physical Review B for publishing the 2023 paper without enough data for the results to be independently reproduced, is not convinced that the team is seeing evidence of Majorana zero modes in its Nature paper. He says that the company’s early tests did not put it on solid footing to make such claims. “The optimism is definitely there, but the science isn’t there,” he says.

One potential complication is impurities in the device, which can create conditions that look like Majorana particles. But Nayak says the evidence has only grown stronger as the research has proceeded. “This gives us confidence: We are manipulating sophisticated devices and seeing results consistent with a Majorana interpretation,” he says.

“They have satisfied many of the necessary conditions for a Majorana qubit, but there are still a few more boxes to check,” Das Sarma said after seeing preliminary results on the qubit. “The progress has been impressive and concrete.”

Scaling up

On the face of it, Microsoft’s topological efforts seem woefully behind in the world of quantum computing—the company is just now working to combine qubits in the single digits while others have tied together more than 1,000. But both Nayak and Das Sarma say other efforts had a strong head start because they involved systems that already had a solid grounding in physics. Work on the topological qubit, on the other hand, has meant starting from scratch. 

“We really were reinventing the wheel,” Nayak says, likening the team’s efforts to the early days of semiconductors, when there was so much to sort out about electron behavior and materials, and transistors and integrated circuits still had to be invented. That’s why this research path has taken almost 20 years, he says: “It’s the longest-running R&D program in Microsoft history.”

Some support from the US Defense Advanced Research Projects Agency could help the company catch up. Early this month, Microsoft was selected as one of two companies to continue work on the design of a scaled-up system, through a program focused on underexplored approaches that could lead to utility-scale quantum computers—those whose benefits exceed their costs. The other company selected is PsiQuantum, a startup that is aiming to build a quantum computer containing up to a million qubits using photons.

Many of the researchers MIT Technology Review spoke with would still like to see how this work plays out in scientific publications, but they were hopeful. “The biggest disadvantage of the topological qubit is that it’s still kind of a physics problem,” says Das Sarma. “If everything Microsoft is claiming today is correct … then maybe right now the physics is coming to an end, and engineering could begin.” 

This story was updated with Henry Legg’s current institutional affiliation.
