Reimagining the utility operating model for a digital future

The energy industry is undergoing rapid and profound change. The rise of AI-driven data centers, the acceleration of the energy transition, and the growing frequency of external risks are reshaping the energy landscape faster than ever before.

At the same time, customers, employees, and stakeholders expect more—greater transparency, better experiences, and faster innovation. This comes at a time when utilities remain under pressure to keep services affordable, satisfy regulators, and overcome workforce challenges.

Meeting these demands requires more than incremental improvements. Digital transformation is not just about new tools or technologies; it’s about rethinking the entire digital operating model.

By embedding digital into the core of strategy, operations, and culture, utilities can build the adaptability, efficiency, and resilience needed to thrive in this new environment. The digital operating model provides the blueprint for this transformation.

Meeting customer and employee expectations

One of the most pressing challenges facing utilities is meeting rising expectations from both customers and employees. Customers want digital interactions that are as seamless and intuitive as the experiences they have with retailers or banks. Employees, for their part, expect modern tools that help them do their jobs more effectively and without frustration.

To achieve this, utilities must start designing technology with people at the center. Human-centered design ensures that digital tools are intuitive, practical, and widely adopted, rather than sitting unused because they were built for systems instead of end-users. But it’s not only about design; it’s also about delivery. Many utilities are still encumbered by slow-moving processes.

Embracing agile ways of working at scale can help teams deliver more quickly and adapt when circumstances change. Just as important, shifting away from inflexible, monolithic systems toward composable platforms allows organizations to innovate rapidly and bring new capabilities online without waiting for massive, multi-year overhauls.

What this means for your utility: People-centric delivery drives technology adoption, which fuels value realization. By prioritizing the needs of both customers and employees, utilities can create experiences that build trust, improve satisfaction, and enhance productivity. This approach not only accelerates the adoption of new tools but also ensures that investments in technology deliver measurable and sustainable outcomes.

Addressing customer affordability and regulatory challenges

Even as expectations increase, utilities must carefully manage the tension between affordability and regulatory oversight. Every digital investment must be justified and shown to create value not only for customers but also for regulators who control the pace of change. This means utilities must shift their focus from activity to outcomes. Establishing clear value realization practices ensures that business cases stand up to scrutiny and that benefits are tracked, measured, and communicated.
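
Value realization only stands up to scrutiny if the benefit math is explicit. As a minimal sketch, assuming a handful of made-up metrics and figures (none come from this article), the snippet below compares measured results against business-case targets so benefits can be tracked and reported consistently.

```python
# Hypothetical business-case targets vs. measured results; every figure here is illustrative.
targets = {"call_handle_time_min": 6.0, "truck_rolls_per_1000": 45.0, "it_run_cost_musd": 12.0}
actuals = {"call_handle_time_min": 5.1, "truck_rolls_per_1000": 41.5, "it_run_cost_musd": 11.2}

def realization_report(targets, actuals):
    """Express each metric's improvement against its target as a percentage (positive = better)."""
    return {
        metric: round(100.0 * (target - actuals[metric]) / target, 1)
        for metric, target in targets.items()
    }

for metric, pct in realization_report(targets, actuals).items():
    print(f"{metric:25s} {pct:+.1f}% vs. plan")
```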

At the same time, proactive storytelling is essential. Regulators are more likely to support transformation efforts when they understand the tangible benefits for customers, whether that means improved reliability, faster service, or more affordable energy. Data-driven storytelling allows utilities to articulate these benefits clearly and consistently. Beyond communication, utilities also need to anticipate regulatory changes before they occur.

Advanced analytics and AI can provide foresight into potential policy shifts, allowing organizations to adjust their strategies ahead of time rather than scrambling to react. And when it comes to technology infrastructure, moving away from a “cloud-first” mindset toward a “cloud-smart” approach enables utilities to balance innovation with cost efficiency by leveraging practices like Cloud FinOps to keep costs transparent and predictable.
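
As a rough illustration of the Cloud FinOps practice mentioned above, the sketch below rolls up hypothetical billing-export rows by a cost-allocation tag so spend can be reviewed per team and untagged spend surfaced for follow-up. The record layout, tag keys, and amounts are assumptions, not any provider's actual export format.

```python
from collections import defaultdict

# Hypothetical billing-export rows: service, cost, and cost-allocation tags (illustrative only).
billing_rows = [
    {"service": "compute", "cost_usd": 1240.50, "tags": {"team": "grid-analytics", "env": "prod"}},
    {"service": "storage", "cost_usd": 310.20,  "tags": {"team": "customer-portal", "env": "prod"}},
    {"service": "compute", "cost_usd": 98.75,   "tags": {"team": "grid-analytics", "env": "dev"}},
    {"service": "compute", "cost_usd": 452.00,  "tags": {}},  # untagged spend to surface for follow-up
]

def summarize_by_tag(rows, tag_key="team"):
    """Roll up cost by a cost-allocation tag; untagged spend is reported separately."""
    totals = defaultdict(float)
    for row in rows:
        owner = row["tags"].get(tag_key, "UNTAGGED")
        totals[owner] += row["cost_usd"]
    return dict(totals)

if __name__ == "__main__":
    for owner, cost in sorted(summarize_by_tag(billing_rows).items(), key=lambda kv: -kv[1]):
        print(f"{owner:20s} ${cost:,.2f}")
```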

What this means for your utility: Customer affordability will continue to be a focus for regulators. Defining and articulating how technology can optimize cost structure while also improving operational excellence and creating customer value is critical. By combining clear value realization practices with proactive, data-driven storytelling, utilities can build stronger regulatory support for transformation initiatives.

Accelerating digital delivery with a robust foundation

No matter how compelling the vision for transformation may be, it will falter without a solid digital foundation. Many utilities have attempted piecemeal approaches in the past, only to encounter rising complexity and limited impact. What’s required now is a robust backbone that allows digital delivery to scale with speed, security, and resilience.

At the heart of this foundation is data. A strong data backbone makes it possible to govern and manage information effectively, while also unlocking insights that drive decision-making. This becomes particularly important in the context of grid modernization. Utilities building the grid of the future must process vast amounts of data to ensure safety, reliability, and seamless integration of renewable energy sources. Modernized platforms that are composable and flexible provide the agility to adapt quickly, while automation and AI bring continuous optimization—ensuring that innovation isn’t a one-off but becomes a sustained part of the organization’s DNA.
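
To make the data-backbone-plus-automation point a little more concrete, here is a minimal, hypothetical sketch of one kind of continuous check a utility might automate on grid telemetry: flagging readings that drift outside a rolling statistical band. The sample values, window, and threshold are illustrative assumptions rather than a reference design.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=12, sigma=3.0):
    """Flag readings that fall outside a rolling mean +/- sigma * stdev band.

    `readings` is a list of (timestamp, value) pairs in time order.
    Returns the subset of pairs considered anomalous.
    """
    anomalies = []
    for i in range(window, len(readings)):
        history = [v for _, v in readings[i - window:i]]
        mu, sd = mean(history), stdev(history)
        ts, value = readings[i]
        if sd > 0 and abs(value - mu) > sigma * sd:
            anomalies.append((ts, value))
    return anomalies

# Illustrative per-unit voltage samples; the excursion at t=24 is simulated.
samples = [(t, 1.0 + 0.005 * (t % 3)) for t in range(24)]
samples.append((24, 1.12))

print(flag_anomalies(samples))  # expected: [(24, 1.12)]
```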

What this means for your utility: A balanced technology investment portfolio is necessary. You cannot build digital value on a rocky foundation. Establishing a strong digital backbone powered by modernized, flexible platforms and robust data management is critical to scaling innovation. By prioritizing agility, automation, and continuous optimization, utilities can not only address immediate challenges like grid modernization but also create a lasting framework for sustained transformation and long-term success.

Overcoming talent gaps

The utility industry is feeling the strain of an aging workforce and a competitive labor market, making it harder to attract and retain the skills needed for the future. AI offers a path forward by augmenting the workforce, automating routine tasks, and freeing up employees to focus on higher-value work.

Rather than relying heavily on offshore talent, utilities can use AI-driven productivity gains to bridge capability gaps and improve efficiency at scale. But technology alone isn’t the answer—lasting change comes from empowering people. Upskilling programs and thoughtful organizational design are critical to ensure internal teams can manage and evolve digital platforms directly. By investing in workforce resilience, utilities reduce dependency on third-party vendors and build stronger, more adaptable organizations that can keep pace with change.

What this means for your utility: Talent shortages are growing each year, and traditional talent retention and acquisition strategies will continue to fall short. Organizations must rethink their internal team structures and strategic partnerships, leveraging AI as a talent enabler. By combining AI with targeted upskilling efforts, utilities can create a future-ready workforce that is equipped to drive innovation and adapt to evolving industry demands.

Climate resilience challenges

At the same time, utilities must strengthen their defenses against the growing threat of climate change. Extreme weather events are becoming more frequent and more disruptive, testing the limits of aging infrastructure. To meet this challenge, utilities need systems that are not only reliable but also designed to withstand unpredictable shocks. This requires building resilience into every layer of the grid, supported by robust data and digital capabilities.

Cybersecurity is equally important, as physical risks increasingly converge with digital ones. Implementing Zero Trust principles ensures systems remain secure and operational even under stress. By investing in resilience now, utilities position themselves to safeguard communities, maintain reliability, and reinforce trust in their ability to deliver power no matter the circumstances.
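
As a loose illustration of the Zero Trust idea, that every request is verified rather than trusted by network location, the sketch below denies by default and grants access only when identity, device posture, and least-privilege scope checks all pass. The roles, scopes, and policy fields are assumptions for illustration, not a specific product's API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool
    device_compliant: bool   # e.g., a patched, managed endpoint
    requested_scope: str     # e.g., "scada:read"

# Least-privilege policy: which scopes each role may use. Illustrative only.
ROLE_SCOPES = {
    "grid-operator": {"scada:read", "scada:ack-alarm"},
    "analyst": {"scada:read"},
}

def authorize(request: AccessRequest, role: str) -> bool:
    """Deny by default; grant only if identity, device, and scope checks all pass."""
    if not request.mfa_verified:
        return False
    if not request.device_compliant:
        return False
    return request.requested_scope in ROLE_SCOPES.get(role, set())

print(authorize(AccessRequest("jdoe", True, True, "scada:read"), "analyst"))   # True
print(authorize(AccessRequest("jdoe", True, False, "scada:read"), "analyst"))  # False: non-compliant device
```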

What this means for your utility: As storms become more frequent and powerful and cyberattacks become more sophisticated and damaging, a mindset shift toward architecting resiliency and security into every level of your technology stack will be necessary. Proactively addressing these challenges will not only protect critical infrastructure but also strengthen customer and stakeholder confidence in your ability to deliver reliable service under any conditions.

The path forward

Transitioning to a digital utility is no longer optional; it is essential for navigating challenges and creating lasting value. The forces reshaping the industry are moving too quickly for reactive approaches to suffice. Utilities that hesitate risk falling behind, while those that act decisively have an opportunity to lead.

By rethinking their digital operating model, utilities can align with customer expectations, manage affordability and regulatory pressures, build a robust foundation for innovation, and create resilience against both workforce and climate challenges.

This is a moment for the industry to act boldly. Those who embrace holistic digital transformation will not only keep pace with change but set it, delivering lasting value to customers, employees, regulators, and communities alike.
