From code to current: How to keep AI data centers in check for a sustainable grid


Manav Mittal is a senior project manager at Consumers Energy.

As artificial intelligence continues to transform industries, from healthcare and finance to autonomous vehicles and smart cities, the demand for data processing is skyrocketing. AI-driven data centers, which power the algorithms behind these innovations, are the backbone of this revolution. However, with the expansion of AI capabilities comes a growing concern: how will these energy-hungry facilities affect our already strained power grids?

Take Meta’s $10 billion AI-optimized data center in Louisiana, for example. This enormous facility, designed to handle the massive computational load required by AI, will demand a staggering amount of electricity. As AI becomes more integrated into our everyday lives, the strain on the power grid is only set to increase. But here’s the thing — AI doesn’t have to be a burden on the grid. With thoughtful strategies and a proactive approach, we can minimize the environmental and infrastructural costs of these data centers. The question isn’t whether AI will disrupt the grid, but how we can make it work for us without sacrificing sustainability.

Energy efficiency: The first line of defense

It’s easy to think of data centers as mere consumers of energy, but the truth is, they’re not all created equal. There’s plenty of room for improvement when it comes to energy efficiency. The first step in minimizing AI data center impacts on the grid is simply making these centers run more efficiently.

Cooling systems alone account for a huge chunk of energy consumption in data centers. Traditionally, large HVAC systems keep servers at optimal temperatures, but these systems are often inefficient. Thankfully, innovative cooling methods — like liquid cooling and even immersion cooling — are beginning to replace outdated systems. These newer technologies can significantly reduce energy usage, which is crucial when every watt counts.
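
To make the stakes concrete, here is a minimal back-of-the-envelope sketch in Python of how much facility energy a cooling upgrade can save, expressed through power usage effectiveness (PUE). The IT load and the before-and-after PUE figures are illustrative assumptions, not measurements from any specific facility.

```python
# Back-of-the-envelope estimate of annual energy saved by improving PUE
# (Power Usage Effectiveness = total facility energy / IT equipment energy).
# All inputs are illustrative assumptions, not figures from the article.

HOURS_PER_YEAR = 8760

def annual_facility_energy_mwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy for a constant IT load at a given PUE."""
    return it_load_mw * pue * HOURS_PER_YEAR

it_load_mw = 100          # assumed constant IT load for a large AI campus
pue_air_cooled = 1.5      # assumed PUE with traditional HVAC cooling
pue_liquid_cooled = 1.15  # assumed PUE with liquid/immersion cooling

baseline = annual_facility_energy_mwh(it_load_mw, pue_air_cooled)
improved = annual_facility_energy_mwh(it_load_mw, pue_liquid_cooled)

print(f"Air-cooled:    {baseline:,.0f} MWh/yr")
print(f"Liquid-cooled: {improved:,.0f} MWh/yr")
print(f"Saved:         {baseline - improved:,.0f} MWh/yr "
      f"({(baseline - improved) / baseline:.0%} of facility energy)")
```

Even at these assumed numbers, the difference is roughly 300 GWh a year for a single campus, which is why cooling is usually the first lever operators pull.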

And it’s not just cooling that needs to be rethought. Advances in hardware, such as more energy-efficient processors and GPUs, are improving the performance-to-energy ratio of data centers. These small innovations might not make the headlines, but their cumulative impact on energy consumption could be profound. Data centers should be incentivized to adopt these energy-saving technologies, not only to reduce their operating costs but to lessen their impact on the grid.

Renewable energy: A cleaner, greener future

Let’s be clear — data centers don’t have to rely on fossil fuels to power their operations. In fact, many major tech companies, including Meta, have made ambitious commitments to run their data centers on 100% renewable energy. This shift to clean energy is one of the most impactful ways to reduce the strain on the grid. If AI data centers can be powered by wind, solar and other renewable sources, we’re looking at a win-win situation: energy demand is met without contributing to greenhouse gas emissions.

However, making this transition requires more than just goodwill — it requires collaboration with renewable energy developers and utilities. Power purchase agreements are a vital tool here. These long-term contracts allow data centers to secure renewable energy directly from producers, ensuring that their electricity needs are met without disrupting the grid. The beauty of this approach is that it supports the broader goal of transitioning to a clean energy economy, all while minimizing the impact on local power infrastructure.

But let’s not stop there. Data centers should also consider on-site renewable energy generation. Installing solar panels or wind turbines at their facilities can reduce their reliance on the grid during peak demand periods. In fact, on-site energy production, combined with energy storage, could allow data centers to be largely self-sufficient, alleviating much of the pressure on local grids.

Modernizing the grid: Building for the future

While improving the energy efficiency of data centers and shifting to renewable energy are essential steps, we can’t ignore the infrastructure itself. The grid, as it exists today, was not built to handle the enormous, and sometimes unpredictable, energy demands of AI data centers. As data centers become larger and more prevalent, the grid needs to evolve to accommodate them.

Here’s where smart grids come into play. These modernized grids use sensors and real-time data to better manage energy distribution. With a smart grid, utilities can dynamically adjust power flow based on demand, ensuring that energy is directed where it’s needed most. By integrating AI into grid management, utilities can anticipate and respond to shifts in energy demand caused by data centers, ensuring a more stable grid overall.
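
As a loose illustration of that idea, and not any utility's actual system, the sketch below forecasts near-term feeder load from recent telemetry with a simple moving average and flags hours where predicted demand approaches an assumed capacity threshold. Real grid-management platforms use far richer models and data; every number here is made up.

```python
# Toy illustration of "smart grid" demand anticipation: forecast near-term
# load from recent telemetry and flag hours where predicted demand nears a
# feeder's capacity. A sketch only; real utility forecasting is far richer.

from statistics import mean

def forecast_next_hour(recent_load_mw: list[float], window: int = 4) -> float:
    """Naive moving-average forecast of the next hour's load."""
    return mean(recent_load_mw[-window:])

def needs_mitigation(forecast_mw: float, feeder_capacity_mw: float,
                     headroom: float = 0.9) -> bool:
    """True if forecast load exceeds 90% of feeder capacity (assumed threshold)."""
    return forecast_mw > headroom * feeder_capacity_mw

# Assumed hourly load telemetry (MW) on a feeder serving a data center campus.
telemetry = [350.0, 380.0, 410.0, 440.0, 470.0, 500.0]
capacity_mw = 500.0

predicted = forecast_next_hour(telemetry)
if needs_mitigation(predicted, capacity_mw):
    print(f"Forecast {predicted:.0f} MW: trigger demand response / reroute supply")
else:
    print(f"Forecast {predicted:.0f} MW: within normal headroom")
```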

In addition to smart grids, we need to consider energy storage. Renewable energy is intermittent by nature — solar panels don’t generate electricity at night, and wind turbines are silent on calm days. By incorporating energy storage systems, such as large-scale batteries, data centers can store excess energy generated during off-peak hours and use it when demand is high. This will help to smooth out the fluctuations in energy supply and ensure that data centers are less reliant on the grid during peak times.
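
The charge-off-peak, discharge-on-peak logic is simple enough to sketch. The toy dispatch below assumes a flat load, an arbitrary evening peak window, and an arbitrarily sized battery; it only illustrates how storage flattens the draw from the grid, not how a real energy management system would schedule it.

```python
# Minimal peak-shaving sketch: charge the battery during off-peak hours and
# discharge it during peak hours so less power is drawn from the grid.
# Load profile, peak window, and battery size are illustrative assumptions.

def dispatch(load_mw, peak_hours, battery_capacity_mwh, battery_power_mw):
    """Return the grid draw per hour after simple peak-shaving dispatch."""
    soc = 0.0  # battery state of charge, MWh
    grid_draw = []
    for hour, load in enumerate(load_mw):
        if hour in peak_hours and soc > 0:
            discharge = min(battery_power_mw, soc, load)
            soc -= discharge
            grid_draw.append(load - discharge)   # battery covers part of the peak
        else:
            charge = min(battery_power_mw, battery_capacity_mwh - soc)
            soc += charge
            grid_draw.append(load + charge)      # charging adds to off-peak draw
    return grid_draw

# Assumed flat 80 MW campus load with a four-hour evening peak (5 pm to 9 pm).
load = [80.0] * 24
peak = set(range(17, 21))
print(dispatch(load, peak, battery_capacity_mwh=120, battery_power_mw=30))
```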

Demand response: A shared responsibility

But grid-side upgrades are only half of the equation. AI-driven facilities also have a responsibility to participate in demand response programs. These programs incentivize businesses and consumers to reduce their energy usage during periods of peak demand, which helps prevent grid overloads. Data centers are prime candidates for demand response because they can adjust their operations — such as shifting workloads to off-peak hours — without negatively impacting performance. By participating in these programs, AI data centers can significantly ease pressure on the grid, especially during high-demand periods, like hot summer afternoons when air conditioning use is at its peak.
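
A hypothetical sketch of that workload-shifting idea follows: deferrable batch jobs that would land inside a utility-declared peak window are pushed to the first off-peak hour, while latency-sensitive work stays put. The job names, hours, and peak window are invented for illustration; a production scheduler would also weigh deadlines, dependencies, and capacity.

```python
# Sketch of demand-response workload shifting: deferrable batch jobs are moved
# out of an assumed utility peak window, while latency-sensitive jobs stay put.
# Job names, hours, and the peak window are invented for illustration.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    requested_hour: int   # hour of day the job would normally start
    deferrable: bool      # latency-sensitive work (e.g., inference) cannot move

PEAK_WINDOW = range(14, 19)   # assumed utility peak window: 2 pm to 7 pm

def schedule(jobs: list[Job]) -> dict[str, int]:
    """Shift deferrable jobs out of the peak window to the first off-peak hour."""
    assignments = {}
    for job in jobs:
        hour = job.requested_hour
        if job.deferrable and hour in PEAK_WINDOW:
            hour = PEAK_WINDOW.stop   # run right after the peak window closes
        assignments[job.name] = hour
    return assignments

jobs = [
    Job("inference-serving", 15, deferrable=False),
    Job("nightly-training-run", 16, deferrable=True),
    Job("log-compaction", 10, deferrable=True),
]
print(schedule(jobs))
# inference stays at 15, the training run shifts to 19, log-compaction is untouched
```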

The key here is that grid stability is a shared responsibility. While AI data centers are heavy consumers of electricity, they also have the tools to manage their consumption intelligently. Rather than adding to the grid’s burden, these facilities can be part of the solution. Through demand response, they can reduce their energy use when it’s most needed, helping to balance supply and demand and prevent power outages.

Collaboration: A holistic approach to grid sustainability

It’s clear that minimizing the impact of AI data centers on the power grid isn’t a task for data center operators alone. This challenge requires collaboration among technology companies, utilities, policymakers and local communities. Governments must provide the right incentives to encourage the adoption of clean energy and energy-efficient technologies. At the same time, utility companies must modernize the grid to accommodate the growing demands of AI data centers and other large energy consumers.

We also need to prioritize transparency and dialogue with communities. Local governments and residents should be included in conversations about how AI data centers impact energy infrastructure. Through collaboration, we can ensure that these facilities contribute positively to both the local economy and the environment.

Conclusion: A vision for a sustainable future

The rise of AI presents enormous opportunities for innovation, but it also poses significant challenges, particularly when it comes to energy consumption. AI data centers are indispensable to the future of technology, but they must be built in a way that minimizes their impact on the power grid and the environment.

By focusing on energy efficiency, incorporating renewable energy, modernizing grid infrastructure and participating in demand response programs, we can reduce the strain AI data centers place on the grid. Ultimately, it’s about balancing progress with sustainability. As we move toward a cleaner, smarter and more connected future, we must ensure that the rise of AI doesn’t come at the expense of our planet — or our power systems.


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Equinix launches AI platform to simplify control of distributed AI resources

Fabric Intelligence is a software layer that enhances Equinix Fabric, the company’s on-demand global interconnection service, with real-time awareness and automation for AI and multicloud workloads. It is integrated with AI orchestration tools to automate connectivity decisions, taps into live telemetry for deep observability, and dynamically adjusts routing and segmentation

Read More »

AI’s need for speed, optical connectivity in focus at OFC 2026

“While the scale-up domain today is largely serviced by passive copper, data rates and rack densities are necessitating a shift to alternatives,” Naji wrote. “While many of the optical providers like Marvell (following its acquisition of Celestial AI), Broadcom, and Nvidia believe that co-packaged optic is the right solution, others

Read More »

Arm shifts course, moves into silicon business

With a Thermal Design Power (TDP) of 300 watts, the AGI CPU draws significantly less power than x86-based CPUs from Intel and AMD. It supports high-density 1U server chassis that allow air-cooled deployments with up to 8,160 cores per rack, and liquid-cooled systems delivering 45,000+ cores per rack. Meta

Read More »

NextDecade contractor Bechtel awards ABB more Rio Grande LNG automation work

NextDecade Corp. contractor Bechtel Corp. has awarded ABB Ltd. additional integrated automation and electrical solution orders, extending its scope to Trains 4 and 5 of NextDecade’s 30-million tonne/year (tpy) Rio Grande LNG (RGLNG) plant in Brownsville, Tex. The orders were booked in the third and fourth quarters of 2025 and build on ABB’s Phase 1 work with Trains 1-3, totaling 17 million tpy. The scope for RGLNG Trains 4 and 5 includes deployment of an integrated control and safety system consisting of a distributed control system, emergency shutdown, and fire and gas systems. An electrical controls and monitoring system will provide unified visibility of the plant’s electrical infrastructure. These two overarching solutions will provide a common automation platform. ABB will also supply medium-voltage drives, synchronous motors, transformers, motor controllers and switchgear. The orders also include local equipment buildings—two for Train 4 and one for Train 5—housing critical control and electrical systems in prefabricated modules to streamline installation and commissioning on site. The solutions being delivered to Bechtel use ABB adaptive execution, a methodology for capital projects designed to optimize engineering work and reduce delivery timelines. Phase 1 of RGLNG is under construction and expected to begin operations in 2027. Operations at Train 4 are expected in 2030 and Train 5 in 2031. ABB’s senior vice-president for the Americas, Scott McCay, confirmed to Oil & Gas Journal at CERAWeek by S&P Global in Houston that the company is doing similar work through Tecnimont for Argent LNG’s planned 25-million tpy plant in Port Fourchon, La.: 10-million tpy Phase 1 and 15-million tpy Phase 2. Argent is targeting 2030 completion for its plant.

Read More »

Persistent oil flow imbalances drive Enverus to increase crude price forecast

Citing impacts from the Iran war, near-zero flows through the Strait of Hormuz, accelerating global stock draws, and expectations for a muted US production response despite higher prices, Enverus Intelligence Research (EIR) raised its Brent crude oil price forecast. EIR now expects Brent to average $95/bbl for the remainder of 2026 and $100/bbl in 2027, reflecting what it described as a persistent global oil flow imbalance that continues to draw down inventories. “The world has an oil flow problem that is draining stocks,” said Al Salazar, director of research at EIR. “Whenever that oil flow problem is resolved, the world is left with low stocks. That’s what drives our oil price outlook higher for longer.” The outlook assumes the Strait of Hormuz remains largely closed for 3 months. EIR estimates that each month of constrained flows shifts the price outlook by about $10–15/bbl, underscoring the scale of the disruption and uncertainty around its duration. Despite West Texas Intermediate (WTI) prices of $90–100/bbl, EIR does not expect US producers to materially increase output. The firm forecasts US liquids production growth of 370,000 b/d by end-2026 and 580,000 b/d by end-2027, citing drilling-to-production lags, industry consolidation, and continued capital discipline. Global oil demand growth for 2026 has been reduced to about 500,000 b/d from 1.0 million b/d as higher energy prices and anticipated supply disruptions weigh on economic activity. Cumulative global oil stock draws are estimated at roughly 1 billion bbl through 2027, with non-OECD inventories—particularly in Asia—absorbing nearly half of the impact. A 60-day Jones Act waiver may provide limited short-term US shipping flexibility, but EIR said the measure is unlikely to materially affect global oil prices given broader market forces.

Read More »

Equinor begins drilling $9-billion natural gas development project offshore Brazil

Equinor has started drilling the Raia natural gas project in the Campos basin presalt offshore Brazil. The $9-billion project is Equinor’s largest international investment, its largest project under execution, and marks the deepest water depth operation in its portfolio. The drilling campaign, which began Mar. 24 with the Valaris DS‑17 drillship, includes six wells in the Raia area 200 km offshore in water depths of around 2,900 m. The area is expected to hold recoverable natural gas and condensate reserves of over 1 billion boe. Raia’s development concept is based on production through wells connected to a 126,000-b/d floating production, storage and offloading unit (FPSO), which will treat produced oil/condensate and gas. Natural gas will be transported through a 200‑km pipeline from the FPSO to Cabiúnas, in the city of Macaé, Rio de Janeiro state. Once in operation, expected in 2028, the project will have the capacity to export up to 16 million cu m/day of natural gas, which could represent 15% of Brazil’s natural gas demand, the company said in a release Mar. 24. “While drilling takes place, integration and commissioning activities on the FPSO are progressing well putting us on track towards a safe start of operations in 2028,” said Geir Tungesvik, executive vice-president, projects, drilling and procurement, Equinor. The Raia project is operated by Equinor (35%), in partnership with Repsol Sinopec Brasil (35%) and Petrobras (30%).

Read More »

Woodfibre LNG receives additional modules as construction advances

Woodfibre LNG LP has received two major modules within a week for its under‑construction, 2.1‑million tonne/year (tpy) LNG export plant near Squamish, British Columbia, advancing construction to about 65% complete. The deliveries include the liquefaction module—the project’s heaviest and most critical process unit—and the powerhouse module, which will serve as the plant’s central power and control hub. The liquefaction module, delivered aboard the heavy cargo vessel Red Zed 1, is the 15th of 19 modules scheduled for installation at the site, the company said in a Mar. 24 release. Weighing about 10,847 metric tonnes and occupying a footprint roughly equivalent to a football field, it is among the largest modules fabricated for the project. Once installed and commissioned, the liquefaction module will cool natural gas to about –162°C, converting it into LNG for export. Shortly after the liquefaction module’s arrival, Woodfibre LNG received the powerhouse module, the 16th module delivered to site. Weighing more than 4,200 metric tonnes, the powerhouse module will function as a power and control system, receiving electricity from BC Hydro and managing and distributing power to the plant’s electric‑drive compressors. The Woodfibre LNG project is designed as the first LNG export plant to use electric‑drive motors for liquefaction, replacing conventional gas‑turbine‑driven compressors. The Siemens electric‑drive system will be powered by renewable hydroelectricity from BC Hydro, eliminating the largest operational source of greenhouse gas emissions typically associated with liquefaction, the company said. The project is being built near the community of Squamish on the traditional territory of the Sḵwx̱wú7mesh Úxwumixw (Squamish Nation) and is regulated in part by the Indigenous government.  All 19 modules are expected to arrive on site by spring 2026. Construction is scheduled for completion in 2027. Woodfibre LNG is owned by Woodfibre LNG Ltd. Partnership, which is 70% owned by Pacific Energy Corp.

Read More »

ExxonMobil begins Turrum Phase 3 drilling off Australia’s east coast

Esso Australia Pty Ltd., a subsidiary of ExxonMobil Corp. and current operator of the Gippsland basin oil and gas fields in Bass Strait offshore eastern Victoria, has started drilling the Turrum Phase 3 project in Australia. This $350-million investment will see the VALARIS 107 jack-up rig drill five new wells into Turrum and North Turrum gas fields within Production License VIC/L03 to support Australia’s east coast domestic gas market. The new wells will be drilled from Marlin B platform, about 42 km off the Gippsland coastline, southeast of Lakes Entrance in water depths of about 60 m, according to a 2025 information bulletin. Turrum Phase 3, which builds on nearly $1 billion in recent investment across the Gippsland basin, is expected to be online before winter 2027, the company said in a post to its LinkedIn account Mar. 24. In 2025, Esso made a final investment decision to develop the Turrum Phase 3 project targeting underdeveloped gas resources. The Gippsland Basin joint venture is a 50-50 partnership between Esso Australia Resources and Woodside Energy (Bass Strait) and operated by Esso Australia.

Read More »

The Golden Rule of the oil market: Understanding global price dynamics and emerging exceptions

Mark Finley, Baker Institute, Rice University
In recent weeks, questions surrounding the oil market crisis have been framed around a core principle described as the Golden Rule of the Oil Market: it is a global market. When conditions change anywhere—positively or negatively—prices respond everywhere. That framework helps explain why gasoline prices are rising in the US despite limited direct imports from the Middle East and the US’s status as a significant net exporter of oil. It also explains why oil cargoes that Iran permits to transit the Strait of Hormuz reduce Iran’s leverage over global oil prices, and by extension over US consumers and policymakers concerned about prices at the pump. Alongside its own exports, Iran has allowed a handful of additional tankers to transit the Strait, including several tankers destined for China and LPG shipments for India. The greater the volume of oil transiting the Strait, the smaller the disruption to the global oil market and the less upward pressure on global prices. The same logic applies to US efforts to ease sanctions on Iranian and Russian oil cargoes already at sea, which are unlikely to provide meaningful relief for rising oil prices. Under the Golden Rule, those barrels—having already been produced and shipped—would have found buyers regardless of sanctions, with price discounts sufficient to offset the risk of US penalties, as has been the case for Russian oil since 2022. Exceptions The Golden Rule has described oil market dynamics effectively for decades. However, a small number of potential exceptions have begun to emerge. For now, those exceptions remain relatively inconsequential, though larger risks may be developing. The non-market player There are two ways that supply and demand can be equalized. In a global market, it is achieved by price changes. Prices rise or fall to ensure that there is

Read More »

Executive Roundtable: The AI Infrastructure Credibility Test

For the fourth installment of DCF’s Executive Roundtable for the First Quarter of 2026, we turn to a question that increasingly sits alongside power and capital as a defining constraint. Credibility. As AI-driven data center development accelerates, public scrutiny is rising in parallel. Communities, regulators, and policymakers are taking a closer look at the industry’s footprint in terms of its energy consumption, its land use, and its broader impact on local infrastructure and ratepayers. What was once a relatively low-profile sector has become a visible and, at times, contested presence in regional economies. This shift reflects the sheer scale of the current build cycle. Multi-hundred-megawatt and gigawatt campuses are no longer theoretical in any sense. They are actively being proposed and constructed across key markets. With that scale comes heightened expectations around transparency, accountability, and tangible community benefit. At the same time, the industry faces a more complex regulatory and political landscape. Questions around grid capacity, rate structures, environmental impact, and economic incentives are increasingly being debated in public forums, from state utility commissions to local zoning boards. In this environment, the ability to secure approvals is no longer assured, even in historically favorable markets. The concept of a “social license to operate” has therefore moved to the forefront. Beyond technical execution, developers and operators must now demonstrate that AI infrastructure can be deployed in a way that aligns with community priorities and delivers shared value. In this roundtable, our panel of industry leaders explores what will define that credibility in the years ahead and what the data center industry must do to sustain its momentum in an era of growing public scrutiny.

Read More »

International Data Center Day: Future Frontiers 2030-2070

In honor of this year’s International Data Center Day 2026 (Mar 25), Data Center Frontier presents a forward-looking vision of what the next era of digital infrastructure education—and imagination—could become. As the media partner of 7×24 Exchange, DCF is committed to elevating both the technical rigor and the human story behind the systems that power the AI age. What follows is not reportage, but a plausible future: a narrative exploration of how the next generation might learn to build, operate, and ultimately redefine data centers—from tabletop scale to lunar megacampuses. International Data Center Day, 2030 The Little Grid That Could They called it “Build the Cloud.” Which, to the adults in the room, sounded like branding. To the kids, it sounded literal. On a gymnasium floor somewhere in suburban Ohio (though it could just as easily have been Osaka, or Rotterdam, or Lagos) thirty-two teams of middle school students crouched over sprawling tabletop worlds the size of model train layouts. Only these weren’t towns with plastic trees and HO-scale diners. These were data centers. Tiny ones. Living ones. Or trying to be. Each team had been given the same kit six weeks earlier: modular rack frames no taller than a juice box, fiber spools thin as thread, micro solar arrays, a handful of millimeter-scale wind turbines, and a small fleet of programmable robotic “operators”—wheeled, jointed, blinking with LED status lights. The assignment had been deceptively simple: Design, build, and operate a self-sustaining data center campus. Then make it come alive. Now it was International Data Center Day, 2030, and the judging had begun. The Sound of Small Machines Thinking If you stood at the edge of the gym and closed your eyes, it didn’t sound like a science fair. It sounded like… something else. A low hum of micro-inverters stepping

Read More »

Superconducting the AI Era: Rethinking Power Delivery for Gigawatt Data Centers

For the data center industry, the AI era has already rewritten the rules around capital deployment, site selection, and infrastructure scale. But as the build cycle accelerates into the gigawatt range, a deeper constraint is coming into focus: one that sits beneath generation, beneath interconnection queues, and even beneath permitting. It is the physical act of moving power. The challenge is no longer simply how to procure energy, but how to deliver it efficiently from the grid edge to the campus, across buildings, and ultimately into racks that are themselves becoming industrial-scale power consumers. In this emerging reality, traditional copper-based distribution systems are beginning to show signs of strain, not just economically but physically. In the latest episode of the Data Center Frontier Show Podcast, MetOx CEO Bud Vos frames this moment as a structural turning point for the industry, one where superconducting technologies may begin to shift from theoretical to practical. “When you start looking at gigawatt-type campuses,” Vos explains, “you find three fundamental constraints in the power distribution problem: the grid interconnect, the campus distribution, and then delivery inside the data hall.” Each of these layers compounds the difficulty of scaling infrastructure in a copper-based world. More capacity means more cables, more trenching, more materials, and more complexity in an exponential expansion of the physical systems required to support AI workloads. A Different Kind of Conductor High-temperature superconducting (HTS) wire offers a radically different path forward. Developed from research originating at the University of Houston and now manufactured through advanced thin-film processes, HTS replaces bulk conductive material with a highly efficient layered structure capable of carrying dramatically higher current densities. Vos describes the manufacturing approach in familiar terms for a data center audience: “You can think of it as a semiconductor process. We’re creating thin film depositions on

Read More »

DCF Poll: AI Data Center Assumptions

Matt Vincent is Editor in Chief of Data Center Frontier, where he leads editorial strategy and coverage focused on the infrastructure powering cloud computing, artificial intelligence, and the digital economy. A veteran B2B technology journalist with more than two decades of experience, Vincent specializes in the intersection of data centers, power, cooling, and emerging AI-era infrastructure. Since assuming the EIC role in 2023, he has helped guide Data Center Frontier’s coverage of the industry’s transition into the gigawatt-scale AI era, with a focus on hyperscale development, behind-the-meter power strategies, liquid cooling architectures, and the evolving energy demands of high-density compute, while working closely with the Digital Infrastructure Group at Endeavor Business Media to expand the brand’s analytical and multimedia footprint. Vincent also hosts The Data Center Frontier Show podcast, where he interviews industry leaders across hyperscale, colocation, utilities, and the data center supply chain to examine the technologies and business models reshaping digital infrastructure. He has also served as Head of Content for the Data Center Frontier Trends Summit since its inception. Before becoming Editor in Chief, he served in multiple senior editorial roles across Endeavor Business Media’s digital infrastructure portfolio, with coverage spanning data centers and hyperscale infrastructure, structured cabling and networking, telecom and datacom, IP physical security, and wireless and Pro AV markets. He began his career in 2005 within PennWell’s Advanced Technology Division and later held senior editorial positions supporting brands such as Cabling Installation & Maintenance, Lightwave Online, Broadband Technology Report, and Smart Buildings Technology. Vincent is a frequent moderator, interviewer, and keynote speaker at industry events including the HPC Forum, where he delivers forward-looking analysis on how AI and high-performance computing are reshaping digital infrastructure. He graduated with honors from Indiana University Bloomington with a B.A. in English Literature and Creative Writing and lives in southern New Hampshire with

Read More »

A Faster Path to Power: What Natrium’s NRC Approval Means for AI Infrastructure

The race to build AI infrastructure at scale has exposed a deeper constraint than capital or compute: power that can be delivered on predictable timelines. That constraint is now colliding with a system that has historically moved at the pace of decades. But in early March, a key signal emerged that the equation may be starting to change. A Regulatory Breakthrough at the Moment of Peak Power Demand TerraPower’s Natrium reactor cleared a major milestone with the Nuclear Regulatory Commission, which approved a construction permit for Kemmerer Power Station Unit 1 in Wyoming, representing the company’s first commercial-scale plant. It is the first reactor construction approval the NRC has granted in nearly a decade, and the first for a commercial non-light-water reactor in more than 40 years. More significantly, it is the first advanced reactor to reach this stage under the modern U.S. licensing framework. For an industry increasingly defined by gigawatt-scale AI campuses and compressed build cycles, that milestone lands with unusual timing. Construction Approved — But Not Yet ‘Power Delivered’ The distinction between construction approval and operational readiness is critical. TerraPower has not received a license to generate electricity. What the NRC has granted is permission to begin nuclear-related construction at the Kemmerer site, following safety and environmental review. Before the plant can operate, TerraPower’s subsidiary, US SFR Owner, must still secure a separate operating license. But in practical terms, this is the moment when a project transitions from concept to execution. It is a regulatory green light not for power generation, but for steel, concrete, and capital deployment. And in the context of advanced nuclear, that step has historically been the hardest to reach. An 18-Month Signal to the Market The speed of that approval may ultimately matter as much as the approval itself. TerraPower submitted its construction

Read More »

Return of the PTT: Poste Italiane looks to snap up telco TIM

Poste Italiane sees opportunities in reuniting with the former state-owned telecommunications business: “The creation of an integrated group strategic pillar for the national economy, Italy’s largest connected infrastructure with leading positions in financial and insurance services,” it said in a news release. The company is looking to build some complementary services. “The transaction aims to scale and enhance Poste Italiane’s platform by adding three significant assets: a nationwide fixed and mobile network, a leading position in the country’s cloud and data center infrastructure and the ability to offer secure and seamless connectivity to all stakeholders,” it said. Poste Italiane was already the largest stakeholder in TIM and, as the government is the largest stakeholder in Poste Italiane, we’re getting back to the status quo of the 1980s. There is no sign, however, of other European governments following suit.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »