After Moss Landing, what’s next for battery storage?

The U.S. energy storage industry finds itself at a crossroads in the aftermath of the January blaze at the 300-MW first phase of Vistra’s Moss Landing energy storage facility near Santa Cruz, California. 

Nearby residents reported feeling ill in the days after the blaze, and a legal team that includes celebrity environmental activist Erin Brockovich cited possible soil contamination in a lawsuit filed earlier this month. One local elected official compared the incident to the 1979 accident at the Three Mile Island nuclear power plant. Another sponsored a bill to increase zoning setbacks for new energy storage facilities. Elsewhere in California, elected officials in San Luis Obispo and Orange counties enacted moratoriums on utility-scale energy storage development.

Energy storage experts note that the Moss Landing facility was housed indoors and used a type of battery more prone to thermal runaway, among other potential safety issues. Utility-scale lithium-ion battery installations’ overall safety track record is impressive, with just 20 fire-related incidents over the past decade despite a 25,000% increase in installed capacity since 2018, a spokesperson for the American Clean Power Association told Utility Dive last month.

But the Moss Landing incident has nevertheless focused the attention of utilities, regulators and lawmakers on lithium-ion battery safety. It could also create an opening for non-lithium energy storage technologies to compete, some experts say.

Backlash fears spur renewed focus on lithium-ion battery safety

Despite the political backlash, the Moss Landing incident is unlikely to dent demand for battery systems in the long run, according to experts interviewed by Utility Dive.

“We don’t think Moss Landing will have a very material impact,” said Tim Woodward, managing director at Prelude Ventures. Woodward’s firm is an investor in Element Energy, which offers a proprietary battery management system it says significantly improves battery safety, efficiency and longevity.

The industry has made great strides on battery safety since Vistra commissioned Moss Landing One in 2020, said Ravi Manghani, senior director of strategic sourcing at Anza, a solar and storage analytics firm. He cited the advent of national energy storage safety standards like UL 9540, UL 9540A and NFPA 855, all of which factor into a model energy storage ordinance framework released in June by the American Clean Power Association.

“We expect the industry to use this incident as a learning opportunity and push the envelope on safe operations of the multiple gigawatt-hours of projects that are projected to go online in the coming years,” Manghani said.

Such envelope-pushing could benefit technology providers like Element, which claims to “eliminate fire risk” while reducing total cost of ownership by 20% in first-life battery storage systems and 40% in second-life systems. Whereas legacy BMS technology treats the entire battery as a static system, Element’s BMS enables real-time monitoring, diagnostics and controls at the cell level, it says. 

“Fundamentally, we think this technology could predict the elements that may result in thermal runaway 50 to 80 cycles in advance,” allowing operators to take cells offline and avoid potentially catastrophic outcomes like Moss Landing, Woodward said. 

This capability is particularly important for second-life battery installations, where individual modules “are already in a state of divergence,” Woodward added. Element has nearly 2 GWh of used electric vehicle batteries in inventory and in November deployed about 900 of them to create the world’s largest second-life stationary storage installation, a 53-MWh facility in West Texas.
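
Element has not published the algorithms behind its BMS. As a rough, hypothetical illustration of the cell-level divergence monitoring Woodward describes, the sketch below flags cells whose voltage or temperature drifts away from the pack average; every threshold, field name and data point in it is invented for illustration.

```python
# Hypothetical sketch of cell-level divergence monitoring, in the spirit of the
# cell-level BMS approach described above. Thresholds, field names and the
# rolling-statistics method are illustrative assumptions, not Element's algorithm.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class CellReading:
    cell_id: str
    voltage_v: float        # cell voltage
    temperature_c: float    # cell surface temperature

def flag_divergent_cells(readings: list[CellReading],
                         z_threshold: float = 3.0,
                         temp_limit_c: float = 55.0) -> list[str]:
    """Return IDs of cells drifting away from the pack average.

    A pack-level BMS sees only aggregate voltage and temperature; tracking each
    cell's deviation from its peers is what would allow early intervention
    (taking a module offline) before a fault can progress toward thermal runaway.
    """
    flagged = []
    voltages = [r.voltage_v for r in readings]
    v_mean, v_std = mean(voltages), pstdev(voltages) or 1e-6
    for r in readings:
        z_score = abs(r.voltage_v - v_mean) / v_std
        if z_score > z_threshold or r.temperature_c > temp_limit_c:
            flagged.append(r.cell_id)
    return flagged

# Example: one cell running hot and low relative to its neighbors gets flagged.
pack = [CellReading(f"cell-{i}", 3.30, 30.0) for i in range(95)]
pack.append(CellReading("cell-95", 3.05, 58.0))
print(flag_divergent_cells(pack))   # -> ['cell-95']
```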

Dramatic improvements in lithium-ion battery safety require fundamental changes to battery management and architecture, said Jon Williams, CEO of Viridi.

UL 9540 and UL 9540A are “observation standards based on putting the technology into failure mode … not an acknowledgement of safety [but rather] what you can do to avoid burning everything down,” he said.

Viridi’s mobile and large-scale lithium-ion battery systems have a “defense in depth” approach that uses the company’s proprietary “fail-safe anti-propagation architecture” alongside other physical and software-based safety systems, according to a presentation shared by Williams. A Viridi 50-kWh battery pack has a predicted failure rate of 1 in 158.5 GWh, compared with 1 in 3.3 GWh for traditional BESS, Viridi says. 
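
Taken at face value, those two figures imply roughly a 48-fold reduction in predicted failure rate per unit of energy throughput; the quick check below uses only the numbers quoted above.

```python
# Back-of-envelope comparison of the two predicted failure rates quoted above:
# one failure per 158.5 GWh (Viridi pack) vs. one per 3.3 GWh (traditional BESS).
viridi_gwh_per_failure = 158.5
traditional_gwh_per_failure = 3.3

improvement = viridi_gwh_per_failure / traditional_gwh_per_failure
print(f"Predicted failure rate is ~{improvement:.0f}x lower")   # ~48x
```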

Viridi’s systems cost more than standard BESS, but “over the full lifecycle, it’s more cost-effective not to have to build more ventilation and fire suppression,” he said. “It’s also cheaper to run diesel engines with no emissions controls, but we acknowledge that there is more cost embedded in that emission than just the fuel and hardware.”

In the longer term, efforts to develop solid-state lithium-ion vehicle batteries by automakers like Toyota and technology developers like QuantumScape could benefit the stationary storage industry, said Ric O’Connell, founding executive director of GridLab. That’s because stationary storage is a “technology taker” dwarfed by the electric mobility industry, which is likely to continue driving battery innovation.

“You can’t afford to build a technology just for stationary storage,” he said.

Is 2025 the year for electrochemical alternatives?

Some non-lithium battery technology companies would disagree. 

“[Moss Landing] has been a disruptor for the energy storage industry in general [and offers] an opportunity to highlight the alternatives to lithium-ion batteries,” said Giovanni Damato, president of CMBlu Energy’s U.S. subsidiary.

Safety is a key part of Damato’s pitch for CMBlu’s organic flow battery systems, which contain no high-toxicity materials, run on an active chemistry that is almost 50% water and perform better in extreme weather conditions. The modular systems also scale well, making them economical at durations longer than four hours, Damato told Utility Dive last year. And the supply chain is straightforward to localize thanks to off-the-shelf modules and a polymer-based chemistry that can be sourced wherever plastic feedstocks are available, he said this month.
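
The duration argument rests on flow batteries decoupling power (the cell stack) from energy (the electrolyte tanks), so each additional hour of storage mostly adds lower-cost tank capacity. The sketch below makes that tradeoff concrete with purely hypothetical dollar figures; none of them come from CMBlu or reflect actual vendor pricing.

```python
# Illustrative-only cost model: a flow battery buys power (stack) and energy
# (electrolyte/tanks) separately, while lithium-ion system cost scales mostly
# with energy. All dollar figures are hypothetical placeholders chosen to
# illustrate the crossover, not vendor pricing.
def flow_cost_per_kwh(duration_h: float,
                      stack_usd_per_kw: float = 900.0,
                      tank_usd_per_kwh: float = 150.0) -> float:
    # Cost per kWh for a 1-kW system: the stack cost is spread over more kWh
    # as duration grows, plus a roughly fixed per-kWh tank cost.
    return stack_usd_per_kw / duration_h + tank_usd_per_kwh

def lithium_cost_per_kwh(duration_h: float,
                         usd_per_kwh: float = 300.0) -> float:
    # Lithium-ion cost is dominated by the cells, so per-kWh cost stays
    # comparatively flat as duration increases.
    return usd_per_kwh

for hours in (2, 4, 6, 8, 10):
    print(hours, round(flow_cost_per_kwh(hours)), round(lithium_cost_per_kwh(hours)))
# With these placeholder numbers the two curves cross at about six hours,
# beyond which the flow system is cheaper per kWh.
```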

CMBlu recently secured funding to build a 4-GWh factory in Greece that it aims to commission next year, followed by a “copy-paste” production facility in the United States, Damato said. In the meantime, it’s running at least two utility pilots: a 5-MW/50-MWh deployment for Arizona’s Salt River Project and a “1-2 MWh” installation at a WEC Energy Group cogeneration plant in Milwaukee, Wisconsin.

The Wisconsin deployment sits “just feet away from the boiler unit, so that gives you an indication of what they think the safety profile is,” Damato said.

Utilities and other customers are piloting other non-lithium battery technologies as well. In September, the Viejas Band of Kumeyaay Indians and the U.S. Department of Energy Loan Programs Office closed on a $72.8 million loan to build out a microgrid pairing 15 MW of solar with 10 MWh of vanadium flow batteries and 60 MWh of aqueous zinc batteries. Form Energy, another Prelude Ventures portfolio company, is demonstrating its 100-hour iron-air battery system with utilities in California, New York, Washington and Minnesota.

Outside the U.S., sodium-ion chemistry is making inroads into the Chinese battery market thanks to a growing cohort of homegrown technology developers and manufacturers, said Cam Dales, cofounder of Peak Energy, which aims to produce and deploy sodium-ion batteries in the U.S. at utility scale.

NFPP, a variant of sodium-ion battery similar to the lithium-iron-phosphate chemistry popular with American BESS developers, “is fantastically suited to stationary storage,” Dales said. Peak is targeting its first deployments with U.S. utilities and independent power producers this year and intends to stand up the country’s first gigawatt-scale sodium-ion battery factory in 2027, according to its website.

Sodium-ion chemistry is less energy-dense than lithium-ion, trading higher stability — and lower risk of thermal runaway — for lower space efficiency. In addition, the U.S. happens to have one of the highest-quality, lowest-cost sources of raw sodium in the trona fields of Wyoming, Dales said, potentially mitigating supply chain risk as trade tensions rise between the U.S. and China, which continues to dominate the lithium battery supply chain.

Sodium-ion batteries also have “drop-in compatibility with Li-ion manufacturing infrastructure,” which “suggests rapid scaling timelines,” Stanford University researchers Adrian Yao, Sally Benson and William Chueh said in a study published last month. But the technology might not be cost-competitive with lower-cost lithium-ion variants until sometime in the 2030s, “assuming that substantial progress can be made along technology roadmaps via targeted research and development,” they said.

An August analysis by DOE likewise cast doubt on sodium-ion’s near-term cost-competitiveness, projecting 2030 costs between $0.23/kWh and $0.553/kWh against $0.067/kWh to $0.143/kWh for lithium-ion.
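
Comparing the midpoints of those projected ranges gives a rough sense of the gap; the calculation below is a simple midpoint comparison of the figures quoted above, not part of the DOE analysis.

```python
# Midpoint comparison of the 2030 cost ranges quoted above (simple arithmetic
# on the published ranges; the midpoint framing is an illustration, not DOE's).
sodium_low, sodium_high = 0.23, 0.553      # $/kWh
lithium_low, lithium_high = 0.067, 0.143   # $/kWh

sodium_mid = (sodium_low + sodium_high) / 2     # ~$0.39/kWh
lithium_mid = (lithium_low + lithium_high) / 2  # ~$0.11/kWh
print(f"Sodium-ion midpoint is ~{sodium_mid / lithium_mid:.1f}x the lithium-ion midpoint")  # ~3.7x
```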

Dales is more optimistic, predicting sodium-ion batteries would reach cost parity with LFP at the cell level by 2027. And sodium-ion chemistry promises significantly lower 20-year cost of ownership thanks to a simpler balance-of-system, higher round-trip efficiency and “a long list of improvements that the chemistry enables,” he said.

“There’s a lot of chatter across the industry based on incomplete information,” Dales said. “Even today, [NFPP] wins by a large margin on cost at the project level.”  

But as lithium-ion battery prices continue to fall and system safety improves, the technology could prove difficult to dislodge, at least in the energy storage industry, Woodward said.

“We’ve tried in the past to invest on the thesis that people will say lithium is not safe, and it just hasn’t happened,” he said. “[Lithium] keeps coming down the cost curve and will keep getting deployed as people find ways to minimize the risk.”
