
What a massive thermal battery means for energy storage


Rondo Energy just turned on what it says is the world’s largest thermal battery, an energy storage system that can take in electricity and provide a consistent source of heat.

The company announced last week that its first full-scale system is operational, with 100 megawatt-hours of capacity. The thermal battery is powered by an off-grid solar array and will provide heat for enhanced oil recovery (more on this in a moment).

Thermal batteries could help clean up difficult-to-decarbonize sectors like manufacturing and heavy industrial processes like cement and steel production. With Rondo’s latest announcement, the industry has reached a major milestone in its effort to prove that thermal energy storage can work in the real world. Let’s dig into this announcement, what it means to have oil and gas involved, and what comes next.

The concept behind a thermal battery is overwhelmingly simple: Use electricity to heat up some cheap, sturdy material (like bricks) and keep it hot until you want to use that heat later, either directly in an industrial process or to produce electricity.

Rondo’s new system has been operating for 10 weeks and achieved all the relevant efficiency and reliability benchmarks, according to the company. The bricks reach temperatures over 1,000 °C (about 1,800 °F), and over 97% of the energy put into the system is returned as heat.
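
For a rough sense of what storing that much heat in brick implies, here is a minimal back-of-the-envelope sketch in Python. The 100 MWh capacity, roughly 1,000 °C peak temperature, and 97% heat-return figure come from Rondo's announcement; the brick specific heat and the discharged temperature are assumed values for illustration, not numbers from the company.

```python
# Back-of-the-envelope sizing for a sensible-heat (brick) thermal battery.
# From the article: 100 MWh capacity, ~1,000 C peak temperature, ~97% heat return.
# Assumed (not from the article): specific heat of refractory brick and the
# temperature the bricks are drawn down to when discharged.

SPECIFIC_HEAT_KJ_PER_KG_K = 1.0    # assumed, typical for refractory brick
T_HOT_C = 1000.0                   # peak brick temperature (from article)
T_COLD_C = 300.0                   # assumed discharged temperature
CAPACITY_MWH = 100.0               # rated capacity (from article)
HEAT_RETURN_FRACTION = 0.97        # share of input energy returned as heat

capacity_kj = CAPACITY_MWH * 1_000 * 3_600        # 1 MWh = 3.6 million kJ
delta_t_k = T_HOT_C - T_COLD_C                    # temperature swing in kelvin
brick_mass_tonnes = capacity_kj / (SPECIFIC_HEAT_KJ_PER_KG_K * delta_t_k) / 1_000
heat_delivered_mwh = CAPACITY_MWH * HEAT_RETURN_FRACTION

print(f"Brick mass needed: ~{brick_mass_tonnes:,.0f} tonnes")
print(f"Heat delivered per full discharge: ~{heat_delivered_mwh:.0f} MWh")
```

Under those assumptions, holding 100 MWh of heat takes on the order of 500 tonnes of brick.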

This is a big step from the 2 MWh pilot system that Rondo started up in 2023, and it’s the first of the mass-produced, full-size heat batteries that the company hopes to put in the hands of customers.

Thermal batteries could be a major tool in cutting emissions: 20% of total energy demand today is used to provide heat for industrial processes, and most of that is generated by burning fossil fuels. So this project’s success is significant for climate action.

There’s one major detail here, though, that dulls some of that promise: This battery is being used for enhanced oil recovery, a process where steam is injected down into wells to get stubborn oil out of the ground.

It's tricky for a climate technology to prove its merit by helping to harvest fossil fuels. Some critics argue that these sorts of techniques keep polluting infrastructure running longer.

When I spoke to Rondo founder and chief innovation officer John O’Donnell about the new system, he defended the choice to work with oil and gas.

“We are decarbonizing the world as it is today,” O’Donnell says. To his mind, it’s better to help an oil and gas company use solar power for its operation than leave it to continue burning natural gas for heat. Between cheap solar, expensive natural gas, and policies in California, he adds, Rondo’s technology made sense for the customer.

Having a willing customer pay for a full-scale system has been crucial to Rondo’s effort to show that it can deliver its technology.

And the next units are on the way: Rondo is currently building three more full-scale units in Europe. The company will be able to bring them online faster and more cheaply because of what it learned from the California project, O’Donnell says.

The company is also equipped to build more batteries, and to do it quickly: its factory in Thailand can currently turn out 2.4 gigawatt-hours’ worth of heat batteries.

I’ve been following progress on thermal batteries for years, and this project obviously represents a big step forward. For all the promises of cheap, robust energy storage, there’s nothing like actually building a large-scale system and testing it in the field.

It’s definitely hard to get excited about enhanced oil recovery—we need to stop burning fossil fuels, and do it quickly, to avoid the worst impacts of climate change. But I see the argument that as long as oil and gas operations exist, there’s value in cleaning them up.

And as O’Donnell puts it, heat batteries can help: “This is a really dumb, practical thing that’s ready now.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
