The AI energy challenge is coming to a head

Krishna Rangasayee is CEO of SiMa.ai, a software-centric, embedded edge machine learning system-on-chip company.

As the last year falls further into the rearview and we are full steam ahead into 2025, the discussion surrounding AI’s massive energy consumption has reached an inflection point. The rapid advancement of AI has resulted in unprecedented demands on global energy infrastructure, threatening to outpace our ability to deliver power — and AI benefits — where they’re needed most. 

With AI already accounting for up to 4% of U.S. electricity use (a figure projected to nearly triple to 11% by 2030), reducing the strain on our energy systems is a priority. Doing so requires a thorough reexamination of how AI’s energy needs will affect our long-term climate goals, infrastructure, resource availability and the scale at which this technology can operate.

And the hype doesn’t look to be slowing down anytime soon. A new executive order was issued last month to prioritize and speed up the development of AI infrastructure, including data centers and other power facilities, while proposing new restrictions on exports of AI chips to keep innovation local. 

While political debates rage about energy sources and environmental regulation, the fundamental challenge lies in the stark mismatch between AI’s accelerating power requirements and our aging energy distribution infrastructure. This has become a race against time, and though AI is inevitably cast as the “problem,” it also offers the path to a solution.

The scale of AI’s energy challenge

Most AI applications we use today — from chatbots to image generators — rely on the cloud to run models and process queries. The data centers behind them now account for around 4.4% of U.S. electrical demand, a share that could exceed a tenth of the total by 2028.
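As a sanity check on those percentages, a quick back-of-envelope calculation shows the compound growth such a jump implies. The four-year horizon and the assumption of roughly flat total U.S. demand are illustrative, not figures from the reports cited here:

```python
# Implied growth rate: from ~4.4% of U.S. electrical demand today to
# "more than a tenth" by 2028. Assuming total demand stays roughly flat,
# what annual growth in the data-center share does that imply?
# The 4-year horizon (roughly 2024 -> 2028) is an assumption.

START_SHARE = 4.4    # percent of U.S. electrical demand today
END_SHARE = 10.0     # percent, "more than a tenth"
YEARS = 4

annual_growth = (END_SHARE / START_SHARE) ** (1 / YEARS) - 1
print(f"Implied annual growth: {annual_growth:.0%}")  # → Implied annual growth: 23%
```

A sustained ~23% annual growth rate is the kind of curve that utility planning cycles, which run in decades, are poorly equipped to absorb.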

Recent data points to the growth of EV infrastructure and the continued expansion of AI data centers as the primary contributors to these numbers. As edge computing proliferates across industries, it offers a partial solution to growing AI energy demands. While not immune to the power struggle, edge computing reduces power use and latency by enabling localized AI deployments, a more efficient and less energy-intensive approach than today’s centralized infrastructure.

A report from Goldman Sachs revealed a stunning statistic: the average ChatGPT query requires nearly 10 times as much electricity as a Google search. Alarming on its own, especially given the chatbot’s rapid adoption, data like this underscores the fundamental mismatch between AI development cycles, which operate in 100-day sprints, and energy infrastructure projects that typically span 100-year timelines.
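To make the 10x figure concrete, here is a rough estimate of what it means in aggregate. The absolute per-query figures and the daily query volume below are illustrative assumptions; only the roughly 10x ratio comes from the report cited above:

```python
# Back-of-envelope comparison of per-query energy use.
# The ~10x ratio comes from the Goldman Sachs report; the absolute
# figures (0.3 Wh per search, ~2.9 Wh per chatbot query) and the daily
# query volume are illustrative assumptions, not measured values.

GOOGLE_WH_PER_QUERY = 0.3          # assumed
CHATGPT_WH_PER_QUERY = 2.9         # assumed (~10x a search)
QUERIES_PER_DAY = 100_000_000      # assumed daily chatbot volume

def annual_gwh(wh_per_query: float, queries_per_day: int) -> float:
    """Annual energy in gigawatt-hours for a given query workload."""
    return wh_per_query * queries_per_day * 365 / 1e9

extra = annual_gwh(CHATGPT_WH_PER_QUERY - GOOGLE_WH_PER_QUERY, QUERIES_PER_DAY)
print(f"Extra energy vs. search: {extra:.0f} GWh/year")  # → Extra energy vs. search: 95 GWh/year
```

Even under these conservative assumptions, routing the same daily workload through a chatbot instead of a search index adds on the order of a mid-size power plant’s annual output.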

Regulatory changes are a red herring

On its first day in office, the new administration began taking action. It withdrew the United States from the Paris Agreement, limiting the country’s access to clean energy and green tech markets, vowed to end the “electric vehicle mandate,” and reversed a 2021 executive order that aimed to make half of all new U.S. vehicle sales electric by 2030, potentially eliminating EV tax credits.

Many anticipate that repealing the existing executive order on AI will accelerate AI development, especially in Silicon Valley. Pundits expect the 2021 Bipartisan Infrastructure Law, which supports green energy projects, to eventually be rolled back; fund disbursement is already on hold.

The recent appointment of Lee Zeldin to head the EPA, along with the potential rollback of initiatives from the Inflation Reduction Act, may ease some environmental restrictions for now. But these policy shifts won’t address the fundamental challenge: the critical bottleneck isn’t the energy source itself, it’s the massive infrastructure required to deliver power where it’s needed most.

It is these limitations, not policy changes, that will continue to constrain innovation in the space. In fact, the continued advancement of electric and autonomous vehicles will only amplify the demand for power. Even if energy efficiency continues improving at its current pace, the electricity required by in-vehicle computers could reach 26 terawatt-hours by 2040 — equivalent to the total consumption of about 59 million desktop PCs.
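The PC equivalence is easy to sanity-check. The per-PC wattage and daily usage hours below are assumptions chosen for illustration, and the 26 TWh figure is treated as annual consumption:

```python
# Sanity check on the 26 TWh ≈ 59 million desktop PCs equivalence.
# The per-PC figures (100 W average draw, 12 h/day of use) are
# assumptions, and the 26 TWh is assumed to be annual consumption.

TWH_IN_VEHICLE_2040 = 26            # from the projection cited above
PC_WATTS = 100                      # assumed average desktop draw
PC_HOURS_PER_DAY = 12               # assumed daily usage

kwh_per_pc_year = PC_WATTS * PC_HOURS_PER_DAY * 365 / 1000   # 438 kWh/year
pcs_equivalent = TWH_IN_VEHICLE_2040 * 1e9 / kwh_per_pc_year
print(f"{pcs_equivalent / 1e6:.0f} million desktop PCs")  # → 59 million desktop PCs
```

Under these assumptions the arithmetic lands almost exactly on the cited figure, which suggests the comparison assumes a fairly heavily used desktop rather than an idle one.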

Market forces driving change

The aggressive competition for limited power infrastructure is already creating a natural selection process among AI providers. The growing gap between AI’s rapid development cycle and infrastructure’s glacial pace of change is forcing companies to innovate in three key areas:

  • Energy-Efficient AI Architecture: Companies are investing in specialized, smaller language models that can operate within existing power constraints while delivering targeted business value. 
  • Geographic Strategy: Businesses are making AI deployment decisions based primarily on power availability and distribution capabilities.
  • Competitive Innovation: As companies realize that waiting for infrastructure catch-up isn’t viable, they’re turning to new approaches to AI deployment that treat energy efficiency as a core competitive advantage rather than a compliance requirement. 

The data center capacity limitations we see today are forcing companies to make hard choices about where and how to deploy AI resources. It won’t be long until a complete reevaluation of how we distribute these resources becomes necessary.

Edge computing is the catalyst for change

So what’s the solution? With traditional infrastructure scaling unable to keep pace with rising demand, companies are increasingly turning to edge computing solutions that distribute computational loads closer to end users, reducing the strain on centralized data centers.

The emergence of smaller, specialized language models reflects a growing recognition that energy efficiency must be a core consideration in AI development. These specialized models not only reduce power use but often provide better performance for specific tasks, suggesting a future where AI development may be shaped as much by energy constraints as by technological capabilities.

Energy efficiency is the future

Looking at what’s on the horizon for AI, it’s obvious that industry collaboration on energy-efficient computing has evolved beyond an environmental imperative into a business necessity, as infrastructure limitations threaten to bottleneck AI deployment.

There must be a seismic shift in how the industry defines and measures AI advancement, pivoting our idea of technological evolution from raw computing power to energy efficiency. The impact of edge computing goes beyond the macro level: by consuming less bandwidth, it lowers network operating costs and energy use alike, freeing up resources to help propel necessary progress.

The future of AI development will be shaped not by political decisions about energy sources, but by the physical realities of power distribution infrastructure. While this perspective may challenge popular narratives about environmental regulation being the primary constraint on AI development, the mathematics of energy distribution cannot be overcome by the wave of a politician’s magic wand.

Industry leaders must confront these infrastructure limitations head-on, driving innovation in both AI efficiency and power distribution solutions, ensuring we leave this planet better than we found it.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

F5 to acquire CalypsoAI for advanced AI security capabilities

CalypsoAI’s platform creates what the company calls an Inference Perimeter that protects across models, vendors, and environments. The offers several products including Inference Red Team, Inference Defend, and Inference Observe, which deliver adversarial testing, threat detection and prevention, and enterprise oversight, respectively, among other capabilities. CalypsoAI says its platform proactively

Read More »

HomeLM: A foundation model for ambient AI

Capabilities of a HomeLM What makes a foundation model like HomeLM powerful is its ability to learn generalizable representations of sensor streams, allowing them to be reused, recombined and adapted across diverse tasks. This fundamentally differs from traditional signal processing and machine learning pipelines in RF sensing, which are typically

Read More »

Cisco’s Splunk embeds agentic AI into security and observability products

AI-powered observability enhancements Cisco also announced it has updated Splunk Observability to use Cisco AgenticOps, which deploys AI agents to automate telemetry collection, detect issues, identify root causes, and apply fixes. The agentic AI updates help enterprise customers automate incident detection, root-cause analysis, and routine fixes. “We are making sure

Read More »

Oil Gains on Russian Tensions

Oil rose, extending a gain from last week, as traders weighed moves to crack down on Russian flows against forecasts for a surplus later in the year. West Texas Intermediate advanced 1% to settle above $63 a barrel, after adding 1.3% last week. Ukraine is ramping up its attacks on Russia’s oil infrastructure, with drones striking the Kinef refinery, one of the nation’s largest, over the weekend. The strike comes days after an attack on the key Baltic export port of Primorsk. US President Donald Trump also reiterated calls that Europe must stop buying oil from Russia, after earlier saying he’s prepared to move ahead with “major” sanctions on crude supply from the OPEC+ member if NATO countries do the same. US Treasury Secretary Scott Bessent later commented that the US wouldn’t follow through with threats to penalize Russian oil unless Europe also does so. Oil has traded in a range of about $5 a barrel since early August, with prices buffeted by the competing forces of geopolitical risks and bearish fundamentals, which led hedge funds to cut their bullish position on US crude to the lowest on record. OPEC+ has started to bring back a new tranche of oil production ahead of schedule — leading the International Energy Agency to project a record surplus next year. The latest Ukrainian drone attacks “reinforced price support above $61,” said Razan Hilal, market analyst at Forex.com. “While this has kept the bullish narrative intact, the chart does not yet confirm a clean uptrend, as demand risks and trade instability weigh on sentiment — especially as OPEC’s recent supply cut unwinds reflect slowdown.” Traders are now looking ahead to a high-stakes US central bank meeting, where the Federal Reserve is largely expected to resume its interest-rate cutting cycle. A rate cut could spur

Read More »

Trump Supports Sanctions If NATO Stops Buying Russian Oil

US President Donald Trump said he’s prepared to move ahead with “major” sanctions on Russian oil if NATO countries do the same. Trump, a day after he said he was losing patience with President Vladimir Putin over the war in Ukraine, said he’s “ready to do major Sanctions on Russia when all NATO Nations have agreed, and started, to do the same thing, and when all NATO Nations STOP BUYING OIL FROM RUSSIA,” in a post on his Truth Social site early Saturday. Many European nations have cut back or stopped purchasing Russian oil, but several NATO allies — including Hungary — have blocked more stringent proposals by the European Union to target Russia’s energy sector.  Bloomberg reported on Friday that the US planned to urge allies in the Group of Seven to impose tariffs as high as 100% on China and India for their purchases of Russian oil, as part of an effort to convince Putin to end Russia’s invasion of its neighbor.   “This, plus NATO, as a group, placing 50% to 100% TARIFFS ON CHINA, to be fully withdrawn after the WAR with Russia and Ukraine is ended, will also be of great help in ENDING this deadly, but RIDICULOUS, WAR,” Trump wrote.  Trump has at times adopted a softer tone toward China as he continues to push for a summit with President Xi Jinping and a trade deal with the world’s second-largest economy. And any move to impose sanctions on China would likely draw a strong retaliatory response from Beijing and disrupt the tentative trade war truce between the US and China.  Treasury Secretary Scott Bessent and US Trade Representative Jamieson Greer are set to meet with Chinese officials in Madrid in the coming days. G-7 finance ministers discussed how to increase pressure on Russia during

Read More »

U.S. Secretary of Energy Chris Wright Delivers U.S. National Statement at the General Conference of the International Atomic Energy Agency in Vienna, Austria

VIENNA, AUSTRIA— U.S. Secretary of Energy Chris Wright today delivered the U.S. National Statement at the General Conference of the International Atomic Energy Agency (IAEA) in Vienna, Austria. Secretary Wright’s full remarks from the International Atomic Energy Agency (IAEA) General Conference are below: I am honored to represent the United States of America at the 69th IAEA General Conference. I want to thank Director General Grossi and the Secretariat for your leadership. The United States welcomes the Republic of Maldives as the newest member of the IAEA. As both a lifelong energy entrepreneur and now the U.S. Secretary of Energy, I am uniquely aware of the transformative power of energy, its ability to lift billions out of poverty, drive economic growth and expand opportunity across the globe. I am also acutely aware of the challenge our world faces today in meeting rising demand for affordable, reliable and secure energy—particularly the need for baseload electric power to drive rapid progress in Artificial Intelligence. AI is rapidly emerging as the next highly energy-intensive manufacturing industry. AI manufactures intelligence out of electricity. The nations that lead in this space will also lead transformative progress in technology, healthcare, national security and innovation across the board. The energy required to power this revolution is immense—and progress will be accelerated by rapidly unlocking and deploying commercial nuclear power. The world needs more energy to meet the AI challenge and drive human progress—and the United States is boldly leading the way. With President Trump’s leadership, we are advancing American energy policies that accelerate growth, prioritize safety and enhance global security. 
Earlier this year, President Trump issued four Executive Orders aimed at reinvigorating America’s nuclear energy industry by modernizing regulation, streamlining reactor testing, deploying reactors for national security, and reinvigorating the nuclear industrial base. As part of these

Read More »

The hidden cost of ambiguous energy software terminology

Sneha Vasudevan is a project management lead at Uplight. In the face of rapid load growth, the electricity sector is experiencing unprecedented investment in advanced technologies as organizations try to balance reliability, affordability and decarbonization. Transformation is happening on both sides of the grid, with the scale of consumer adoption of distributed energy resources approaching that of utility-scale generation capacity. Residential customers are installing heat pumps, electric vehicles and charging equipment, solar panels, and home batteries while food corporations, logistics companies and school districts electrify their vehicle fleets and implement sophisticated energy management systems.  The consumer distributed energy resource hardware investment boom is resulting in increased utility spending on sophisticated software platforms to manage thousands of independently owned energy assets. Unlike the hardware world — where there is broad agreement on technical specifications of a solar panel or EV or battery — software solutions lack definitional clarity. Terms like “virtual power plant,” “fleet energy management system,” and “distributed energy resource management system” mean different things to different vendors and utilities. Successfully adapting to load growth and DER adoption hinges on the successful, scalable deployment of these software solutions. This depends on clear, mutual understanding of requirements, capabilities and outcomes among all parties. Despite the best intentions of utilities and vendors, without definitional clarity across energy software solutions, the industry remains stuck in endless scope changes and cost overruns instead of building the grid of the future. 
Where the industry gets lost in translation The lack of industry-wide consensus on standardized definitions for software technologies, capabilities and associated service offerings represents more than a communications issue — it’s a major barrier to meeting the increased load demand. Without shared definitions, the industry duplicates effort, misses synergies and stalls the transition to smarter energy systems. For utilities, this creates operational blind spots where

Read More »

Primorsk Port Resumes Oil Loadings After Drone Attacks

At least two tankers have completed loadings at Russia’s Primorsk, showing that the Baltic Sea port has resumed operations in the aftermath of Friday’s drone attacks on the facility by Ukraine. Two crude tankers – Walrus and Samos – completed loadings at Primorsk over the weekend, according to ship tracking data compiled by Bloomberg. Walrus has left the terminal, while Samos is still anchored although is showing Aliaga in Turkey as its final destination. A third tanker Jagger is moored at the terminal.  Loadings were temporarily suspended at the facility following the attacks. Three pumping stations pushing crude to Ust-Luga, another vital export terminal in the Baltic, were also hit.  Ukraine has ramped up attacks on Russia’s energy facilities in the past few weeks. Kyiv has said it aims to curtail Russia’s ability to supply fuel to its front lines, while also hurting its export revenues. Primorsk is the largest Baltic oil terminal in Russia. It loaded about 970,000 barrels a day of Urals crude in August, according to Bloomberg ship tracking data. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

North America Adds Rigs for 2 Straight Weeks

North America added seven rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was released on September 12. The U.S. added two rigs and Canada added five rigs week on week, taking the total North America rig count up to 725, comprising 539 rigs from the U.S. and 186 rigs from Canada, the count outlined. Of the total U.S. rig count of 539, 524 rigs are categorized as land rigs, 13 are categorized as offshore rigs, and two are categorized as inland water rigs. The total U.S. rig count is made up of 416 oil rigs, 118 gas rigs, and five miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 471 horizontal rigs, 56 directional rigs, and 12 vertical rigs. Week on week, the U.S. offshore and inland water rig counts remained unchanged and the country’s land rig count increased by two, Baker Hughes highlighted. The U.S. oil rig count increased by two and its gas and miscellaneous rig counts remained unchanged week on week, the count showed. The U.S. directional rig count increased by two, week on week, while its horizontal rig count increased by one and its vertical rig count declined by one during the same period, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, New Mexico, Ohio, and Texas each added one rig and Oklahoma dropped one rig. A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the Eagle Ford basin added three rigs and the Cana Woodford and Utica basins each added one rig. Canada’s total rig count of 186 is made up of 126 oil rigs, 59 gas rigs, and one miscellaneous rig, Baker Hughes pointed out.

Read More »

Arista touts liquid cooling, optical tech to reduce power consumption for AI networking

Both technologies will likely find a role in future AI and optical networks, experts say, as both promise to reduce power consumption and support improved bandwidth density. Both have advantages and disadvantages as well – CPOs are more complex to deploy given the amount of technology included in a CPO package, whereas LPOs promise more simplicity.  Bechtolsheim said that LPO can provide an additional 20% power savings over other optical forms. Early tests show good receiver performance even under degraded conditions, though transmit paths remain sensitive to reflections and crosstalk at the connector level, Bechtolsheim added. At the recent Hot Interconnects conference, he said: “The path to energy-efficient optics is constrained by high-volume manufacturing,” stressing that advanced optics packaging remains difficult and risky without proven production scale.  “We are nonreligious about CPO, LPO, whatever it is. But we are religious about one thing, which is the ability to ship very high volumes in a very predictable fashion,” Bechtolsheim said at the investor event. “So, to put this in quantity numbers here, the industry expects to ship something like 50 million OSFP modules next calendar year. The current shipment rate of CPO is zero, okay? So going from zero to 50 million is just not possible. The supply chain doesn’t exist. So, even if the technology works and can be demonstrated in a lab, to get to the volume required to meet the needs of the industry is just an incredible effort.” “We’re all in on liquid cooling to reduce power, eliminating fan power, supporting the linear pluggable optics to reduce power and cost, increasing rack density, which reduces data center footprint and related costs, and most importantly, optimizing these fabrics for the AI data center use case,” Bechtolsheim added. “So what we call the ‘purpose-built AI data center fabric’ around Ethernet

Read More »

Network and cloud implications of agentic AI

The chain analogy is critical here. Realistic uses of AI agents will require core database access; what can possibly make an AI business case that isn’t tied to a company’s critical data? The four critical elements of these applications—the agent, the MCP server, the tools, and the data— are all dragged along with each other, and traffic on the network is the linkage in the chain. How much traffic is generated? Here, enterprises had another surprise. Enterprises told me that their initial view of their AI hosting was an “AI cluster” with a casual data link to their main data center network. With AI agents, they now see smaller AI servers actually installed within their primary data centers, and all the traffic AI creates, within the model and to and from it, now flows on the data center network. Vendors who told enterprises that AI networking would have a profound impact are proving correct. You can run a query or perform a task with an agent and have that task parse an entire database of thousands or millions of records. Someone not aware of what an agent application implies in terms of data usage can easily create as much traffic as a whole week’s normal access-and-update would create. Enough, they say, to impact network capacity and the QoE of other applications. And, enterprises remind us, if that traffic crosses in/out of the cloud, the cloud costs could skyrocket. About a third of the enterprises said that issues with AI agents generated enough traffic to create local congestion on the network or a blip in cloud costs large enough to trigger a financial review. MCP tool use by agents is also a major security and governance headache. Enterprises point out that MCP standards haven’t always required strong authentication, and they also

Read More »

There are 121 AI processor companies. How many will succeed?

The US currently leads in AI hardware and software, but China’s DeepSeek and Huawei continue to push advanced chips, India has announced an indigenous GPU program targeting production by 2029, and policy shifts in Washington are reshaping the playing field. In Q2, the rollback of export restrictions allowed US companies like Nvidia and AMD to strike multibillion-dollar deals in Saudi Arabia.  JPR categorizes vendors into five segments: IoT (ultra-low-power inference in microcontrollers or small SoCs); Edge (on-device or near-device inference in 1–100W range, used outside data centers); Automotive (distinct enough to break out from Edge); data center training; and data center inference. There is some overlap between segments as many vendors play in multiple segments. Of the five categories, inference has the most startups with 90. Peddie says the inference application list is “humongous,” with everything from wearable health monitors to smart vehicle sensor arrays, to personal items in the home, and every imaginable machine in every imaginable manufacturing and production line, plus robotic box movers and surgeons.  Inference also offers the most versatility. “Smart devices” in the past, like washing machines or coffee makers, could do basically one thing and couldn’t adapt to any changes. “Inference-based systems will be able to duck and weave, adjust in real time, and find alternative solutions, quickly,” said Peddie. Peddie said despite his apparent cynicism, this is an exciting time. “There are really novel ideas being tried like analog neuron processors, and in-memory processors,” he said.

Read More »

Data Center Jobs: Engineering, Construction, Commissioning, Sales, Field Service and Facility Tech Jobs Available in Major Data Center Hotspots

Each month Data Center Frontier, in partnership with Pkaza, posts some of the hottest data center career opportunities in the market. Here’s a look at some of the latest data center jobs posted on the Data Center Frontier jobs board, powered by Pkaza Critical Facilities Recruiting. Looking for Data Center Candidates? Check out Pkaza’s Active Candidate / Featured Candidate Hotlist (and coming soon free Data Center Intern listing). Data Center Critical Facility Manager Impact, TX There position is also available in: Cheyenne, WY; Ashburn, VA or Manassas, VA. This opportunity is working directly with a leading mission-critical data center developer / wholesaler / colo provider. This firm provides data center solutions custom-fit to the requirements of their client’s mission-critical operational facilities. They provide reliability of mission-critical facilities for many of the world’s largest organizations (enterprise and hyperscale customers). This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits. Electrical Commissioning Engineer New Albany, OH This traveling position is also available in: Richmond, VA; Ashburn, VA; Charlotte, NC; Atlanta, GA; Hampton, GA; Fayetteville, GA; Cedar Rapids, IA; Phoenix, AZ; Dallas, TX or Chicago, IL. *** ALSO looking for a LEAD EE and ME CxA Agents and CxA PMs. *** Our client is an engineering design and commissioning company that has a national footprint and specializes in MEP critical facilities design. They provide design, commissioning, consulting and management expertise in the critical facilities space. They have a mindset to provide reliability, energy efficiency, sustainable design and LEED expertise when providing these consulting services for enterprise, colocation and hyperscale companies. 
This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits.  Data Center Engineering Design ManagerAshburn, VA This opportunity is working directly with a leading mission-critical data center developer /

Read More »

Modernizing Legacy Data Centers for the AI Revolution with Schneider Electric’s Steven Carlini

As artificial intelligence workloads drive unprecedented compute density, the U.S. data center industry faces a formidable challenge: modernizing aging facilities that were never designed to support today’s high-density AI servers. In a recent Data Center Frontier podcast, Steven Carlini, Vice President of Innovation and Data Centers at Schneider Electric, shared his insights on how operators are confronting these transformative pressures. “Many of these data centers were built with the expectation they would go through three, four, five IT refresh cycles,” Carlini explains. “Back then, growth in rack density was moderate. Facilities were designed for 10, 12 kilowatts per rack. Now with systems like Nvidia’s Blackwell, we’re seeing 132 kilowatts per rack, and each rack can weigh 5,000 pounds.” The implications are seismic. Legacy racks, floor layouts, power distribution systems, and cooling infrastructure were simply not engineered for such extreme densities. “With densification, a lot of the power distribution, cooling systems, even the rack systems — the new servers don’t fit in those racks. You need more room behind the racks for power and cooling. Almost everything needs to be changed,” Carlini notes. For operators, the first questions are inevitably about power availability. At 132 kilowatts per rack, even a single cluster can challenge the limits of older infrastructure. Many facilities are conducting rigorous evaluations to decide whether retrofitting is feasible or whether building new sites is the more practical solution. Carlini adds, “You may have transformers spaced every hundred yards, twenty of them. Now, one larger transformer can replace that footprint, and power distribution units feed busways that supply each accelerated compute rack. The scale and complexity are unlike anything we’ve seen before.” Safety considerations also intensify with these densifications. 
“At 132 kilowatts, maintenance is still feasible,” Carlini says, “but as voltages rise, data centers are moving toward environments where

Read More »

Google Backs Advanced Nuclear at TVA’s Clinch River as ORNL Pushes Quantum Frontiers

Inside the Hermes Reactor Design Kairos Power’s Hermes reactor is based on its KP-FHR architecture — short for fluoride salt–cooled, high-temperature reactor. Unlike conventional water-cooled reactors, Hermes uses a molten salt mixture called FLiBe (lithium fluoride and beryllium fluoride) as a coolant. Because FLiBe operates at atmospheric pressure, the design eliminates the risk of high-pressure ruptures and allows for inherently safer operation. Fuel for Hermes comes in the form of TRISO particles rather than traditional enriched uranium fuel rods. Each TRISO particle is encapsulated within ceramic layers that function like miniature containment vessels. These particles can withstand temperatures above 1,600 °C — far beyond the reactor’s normal operating range of about 700 °C. In combination with the salt coolant, Hermes achieves outlet temperatures between 650–750 °C, enabling efficient power generation and potential industrial applications such as hydrogen production. Because the salt coolant is chemically stable and requires no pressurization, the reactor can shut down and dissipate heat passively, without external power or operator intervention. This passive safety profile differentiates Hermes from traditional light-water reactors and reflects the Generation IV industry focus on safer, modular designs. From Hermes-1 to Hermes-2: Iterative Nuclear Development The first step in Kairos’ roadmap is Hermes-1, a 35 MW thermal demonstration reactor now under construction at TVA’s Clinch River site under a 2023 NRC license. Hermes-1 is not designed to generate electricity but will validate reactor physics, fuel handling, licensing strategies, and construction techniques. Building on that experience, Hermes-2 will be a 50 MW electric reactor connected to TVA’s grid, with operations targeted for 2030. Under the agreement, TVA will purchase electricity from Hermes-2 and supply it to Google’s data centers in Tennessee and Alabama. 
Kairos describes its development philosophy as “iterative,” scaling incrementally rather than attempting to deploy large fleets of units at once. By
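To put those 650–750 °C outlet temperatures in context, a quick Carnot calculation shows why high-temperature reactors are attractive for power generation. This is a rough thermodynamic upper bound only; real steam or gas cycles convert far less:

```python
# Back-of-the-envelope Carnot efficiency bound for the quoted
# Hermes outlet temperatures (illustrative only, not a plant figure).

def carnot_limit(t_hot_c: float, t_cold_c: float = 25.0) -> float:
    """Maximum thermodynamic efficiency between two temperatures given in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

for outlet in (650.0, 750.0):
    print(f"{outlet:.0f} C outlet -> Carnot limit {carnot_limit(outlet):.1%}")
```

The higher outlet temperature buys a few extra points of theoretical efficiency, which is part of the appeal of salt-cooled designs over ~300 °C light-water reactors.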

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to collectively devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
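The growth implied by these figures is easy to miss in prose. A small calculation using the numbers quoted above (all USD billions, as reported) makes the multiples explicit:

```python
# Growth multiples implied by the capex figures quoted above (USD billions).
capex = {
    "big_six_2023": 110.0,          # Bloomberg Intelligence estimate
    "big_six_2025": 200.0,          # Bloomberg Intelligence estimate
    "microsoft_2020": 17.6,         # Microsoft's reported 2020 capex
    "microsoft_fy2025": 80.0,       # Smith's figure for fiscal 2025
}

big_six_growth = capex["big_six_2025"] / capex["big_six_2023"]
msft_growth = capex["microsoft_fy2025"] / capex["microsoft_2020"]

print(f"Big-six capex grows ~{big_six_growth:.2f}x from 2023 to 2025")
print(f"Microsoft's $80B is ~{msft_growth:.1f}x its 2020 capex")
```

In other words, the six largest buyers nearly double their combined spend in two years, and Microsoft's planned fiscal-2025 outlay is roughly four and a half times what it spent in 2020.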

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles.

This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that.

He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd).

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
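The "three or more models" idea above can be sketched in its simplest form: query several cheap models and keep the majority answer. The `generate` function below is a stub standing in for any real LLM API call (the model names and canned answers are hypothetical, for illustration only):

```python
from collections import Counter

def generate(model: str, prompt: str) -> str:
    """Stub for a real LLM API call; returns a canned answer per model."""
    canned = {"model-a": "Paris", "model-b": "Paris", "model-c": "Lyon"}
    return canned[model]

def judge_by_vote(prompt: str, models: list[str]) -> str:
    """Query several models and keep the majority answer -- one simple
    form of the multi-model cross-checking idea from the article."""
    answers = [generate(m, prompt) for m in models]
    return Counter(answers).most_common(1)[0][0]

best = judge_by_vote("Capital of France?", ["model-a", "model-b", "model-c"])
print(best)
```

A fuller LLM-as-a-judge setup would replace the majority vote with a dedicated judge model that scores each candidate answer against a rubric, but the cheap-models-outvote-each-other pattern is the easiest place to start.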

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
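The core idea of the second paper, rewarding attacks for both effectiveness and diversity so the generator doesn't collapse onto one exploit, can be illustrated with a toy loop. This is a heavily simplified sketch, not OpenAI's actual framework: `attack_succeeds` and `novelty` are stand-ins for a real target-model evaluation and a learned diversity signal.

```python
def attack_succeeds(prompt: str) -> bool:
    """Toy stand-in for evaluating an attack against a target model."""
    return "ignore" in prompt

def novelty(prompt: str, seen: set[str]) -> float:
    """Reward unseen token combinations to push the generator toward
    diverse attacks instead of repeating one known exploit."""
    tokens = frozenset(prompt.split())
    return 0.0 if tokens in {frozenset(s.split()) for s in seen} else 1.0

def red_team_round(candidates: list[str], seen: set[str]) -> list[tuple[str, float]]:
    """Score each candidate attack: effectiveness plus a diversity bonus."""
    scored = []
    for c in candidates:
        reward = (1.0 if attack_succeeds(c) else 0.0) + novelty(c, seen)
        scored.append((c, reward))
        seen.add(c)
    return sorted(scored, key=lambda x: -x[1])

seen: set[str] = set()
ranked = red_team_round(
    ["please ignore prior rules", "tell me a story", "please ignore prior rules"],
    seen,
)
```

In the real framework the reward feeds a reinforcement-learning update on the attack generator; here the repeated exploit simply scores lower than its first appearance, which is the behavior the diversity term is meant to produce.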

Read More »