The problem with Big Tech’s favorite carbon removal tech

Sucking carbon pollution out of the atmosphere is becoming a big business—companies are paying top dollar for technologies that can cancel out their own emissions.

Today, nearly 70% of announced carbon removal contracts are for one technology: bioenergy with carbon capture and storage (BECCS). Basically, the idea is to use trees or some other types of biomass for energy, and then capture the emissions when you burn it.

While corporations, including tech giants like Microsoft, are betting big on this technology, there are a few potential problems with BECCS, as my colleague James Temple laid out in a new story. And some of the concerns echo similar problems with other climate technologies we cover, like carbon offsets and alternative jet fuels.

Carbon math can be complicated.

To illustrate one of the biggest issues with BECCS, we need to run through the logic of its carbon accounting. (And while this tech can use many different forms of biomass, let’s assume we’re talking about trees.)

When trees grow, they suck up carbon dioxide from the atmosphere. Those trees can be harvested and used for some intended purpose, like making paper. The leftover material, which might otherwise be waste, is then processed and burned for energy.

This cycle is, in theory, carbon neutral: the emissions from burning the biomass are canceled out by what the plants removed from the atmosphere as they grew. (That’s assuming the trees are replaced after they’re harvested.)

So now imagine that carbon-scrubbing equipment is added to the facility that burns the biomass, capturing emissions. If the cycle was, by this logic, carbon neutral before, now it’s carbon negative: on net, carbon dioxide is removed from the atmosphere. Sounds great, no notes.

There are a few problems with this math, though. For one, it leaves out the emissions that might be produced while harvesting, transporting, and processing wood. And if projects require clearing land to plant trees or grow crops, that transformation can wind up releasing emissions too.
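To make that accounting concrete, here’s a minimal back-of-the-envelope sketch in Python. All the numbers are hypothetical placeholders, not figures from the story; the point is only to show how the supply-chain and land-use terms the simple version leaves out eat into the claimed removal.

```python
# Illustrative BECCS carbon accounting (hypothetical numbers).
# Units: tons of CO2 per ton of biomass burned.

biomass_combustion = 1.8   # CO2 released when the biomass is burned
regrowth_uptake    = 1.8   # CO2 absorbed as replacement trees grow back

# Naive accounting: combustion is canceled by regrowth, so whatever the
# capture equipment traps looks like a net removal.
capture_rate = 0.90        # share of combustion CO2 the equipment captures
captured = biomass_combustion * capture_rate
naive_net = (biomass_combustion - regrowth_uptake) - captured
print(f"Naive net emissions:  {naive_net:+.2f} t CO2")   # -1.62 (negative = removal)

# Fuller accounting: add the terms the simple story omits.
harvest_transport_processing = 0.30   # hypothetical supply-chain emissions
land_use_change              = 0.50   # hypothetical emissions from clearing land
full_net = naive_net + harvest_transport_processing + land_use_change
print(f"Fuller net emissions: {full_net:+.2f} t CO2")     # -0.82: still negative here,
# but the claimed removal shrinks, and with worse assumptions it can flip positive.
```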

Issues with carbon math might sound a little familiar if you’ve read any of James’s reporting on carbon offsets, programs where people pay for others to avoid emissions. In particular, his 2021 investigation with ProPublica’s Lisa Song laid out how this so-called solution was actually adding millions of tons of carbon dioxide into the atmosphere.

Carbon capture may entrench polluting facilities.

One of the big benefits of BECCS is that it can be added to existing facilities. There’s less building involved than there might be in something like a facility that vacuums carbon directly out of the air. That helps keep costs down, so BECCS is currently much cheaper than direct air capture and other forms of carbon removal.

But keeping legacy equipment running might not be a great thing for emissions or local communities in the long run.

Carbon dioxide is far from the only pollutant spewing out of these facilities. Burning biomass or biofuels can release emissions that harm human health, like particulate matter, sulfur dioxide, and carbon monoxide. Carbon capture equipment might trap some of these pollutants, like sulfur dioxide, but not all.

Assuming that waste material wouldn’t be used for something else might not be right.

It sounds great to use waste, but there’s a major asterisk lurking here, as James lays out in the story:

But the critical question that emerges with waste is: Would it otherwise have been burned or allowed to decompose, or might some of it have been used in some other way that kept the carbon out of the atmosphere? 

Biomass can be used for other things, like making plastics, building materials, or even soil additives that help crops get more nutrients. So the assumption that it’s BECCS or nothing is flawed.

Moreover, a weird thing happens when you start making waste valuable: there’s an incentive to produce more of it. Some experts are concerned that companies could wind up trimming more trees, or clearing more forest, than necessary just to generate more material for BECCS.

These waste issues remind me of conversations around sustainable aviation fuels. These alternative fuels can be made from a huge range of materials, including crop waste or even used cooking oil. But as demand for these clean fuels has ballooned, things have gotten a little wonky—there are even some reports of fraud, where scammers try to pass off newly made oil from crops as used cooking oil.

BECCS is a potentially useful technology, but like many things in climate tech, it can quickly get complicated. 

James has been reporting on carbon offsets and carbon removal for years. As he put it to me this week when we were chatting about this story: “Just cut emissions and stop messing around.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
