DCF Trends Summit 2025: AI for Good – How Operators, Vendors and Cooling Specialists See the Next Phase of AI Data Centers

At the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., the conversation around AI and infrastructure moved well past the hype. In a panel sponsored by Schneider Electric—“AI for Good: Building for AI Workloads and Using AI for Smarter Data Centers”—three industry leaders explored what it really means to design, cool and operate the new class of AI “factories,” while also turning AI inward to run those facilities more intelligently.

Moderated by Data Center Frontier Editor in Chief Matt Vincent, the session brought together:

  • Steve Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric

  • Sudhir Kalra, Chief Data Center Operations Officer, Compass Datacenters

  • Andrew Whitmore, VP of Sales, Motivair

Together, they traced both sides of the “AI for Good” equation: building for AI workloads at densities that would have sounded impossible just a few years ago, and using AI itself to reduce risk, improve efficiency and minimize environmental impact.

From Bubble Talk to “AI Factories”

Carlini opened by acknowledging the volatility surrounding AI investments, citing recent headlines and even Sam Altman’s public use of the word “bubble” to describe the current phase of exuberance.

“It’s moving at an incredible pace,” Carlini noted, pointing out that roughly half of all VC money this year has flowed into AI, with more already spent than in all of the previous year. Not every investor will win, he said, and some companies pouring in hundreds of billions may not recoup their capital.

But for infrastructure, the signal is clear: the trajectory is up and to the right.

  • GPU generations are cycling faster than ever.

  • Densities are climbing from high double-digit kilowatts per rack toward hundreds of kilowatts.

  • The hyperscale “AI factories,” as NVIDIA calls them, are scaling to campus capacities measured in gigawatts.

Carlini reminded the audience that in 2024, a one-megawatt rack still sounded exotic. In 2025, it feels much closer to reality—and the roadmap is already targeting the next step.

AI’s Opportunity—and Operational Reality

For Compass Datacenters’ operations chief Sudhir Kalra, AI sits in a long line of transformative technologies.

“AI has the potential to fundamentally shift our daily lives like electricity did, like automobiles did,” he said. Whether that happens in five, ten or fifteen years is less important than the direction of travel.

Kalra underscored two realities:

  1. The data center industry is the enabler.
    AI’s boom is a primary driver of today’s construction pipeline. Where a 1 MW facility once felt large in the enterprise world, hyperscale customers now casually discuss gigawatt campuses. A 1 MW data hall is small. One-megawatt racks are nearly here.

  2. The workforce gap is widening.
    Despite fears that AI will eliminate jobs, Kalra’s daily challenge is the opposite: he can’t find enough people who are both willing and able to perform critical facilities work. Even in creative fields, he noted, a recent Billy Joel video produced with AI ultimately required more people than a conventional workflow.

“All of this AI technology doesn’t just appear out of nowhere,” he said. “Someone has to design it, build it, code it, and fix it when it breaks.”

That shortage of skilled hands is exactly why operational teams are looking to AI—not to replace people, but to make the people they do have more effective and less error-prone.

Cooling the First Inning of the AI Era

From Motivair’s vantage point, the AI build-out is still at the very beginning.

“This is the first pitch of the first inning of probably a three-game series,” said Andrew Whitmore. Demand is “insatiable,” use cases are multiplying, and every part of the ecosystem—utilities, developers, operators, supply chain—is being asked to raise the bar.

Nowhere is that more tangible than in cooling.

Whitmore distilled the arc of the last two decades of data center cooling into a single principle:

“The closer you are to the heat source,” he said, “the more effective and efficient you can be.”

Yet most of the world’s data centers weren’t built for AI racks drawing 100–600 kW. They’re brownfield sites, often with rack limits of 15–20 kW and air-only cooling. In the U.S. alone, nearly 5,300 existing facilities will need to be revitalized, not bulldozed.

That’s where Whitmore sees a huge role for liquid cooling infrastructure:

  • Introducing direct-to-chip cooling into existing air-cooled facilities.

  • Using liquid-to-air CDUs to bridge old and new architectures.

  • Designing solutions that can be installed and maintained without crippling day-to-day operations.

“There’s no monotony in the day-to-day,” he said. “No two designs or challenges are the same.”
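
To put that bridging role in rough numbers: a liquid-to-air CDU rejects the heat from direct-to-chip loops into the room’s existing airflow, so the hall’s legacy air-cooling capacity still caps total load. Here is a minimal sketch of that constraint, where every figure is an illustrative assumption rather than a Motivair specification:

```python
# Retrofit constraint for a liquid-to-air (L2A) CDU: it moves heat from
# direct-to-chip loops into existing room air, so legacy air-handling
# capacity still bounds the hall. All figures are illustrative assumptions.

hall_air_capacity_kw = 2_000   # existing CRAH/CRAC heat-rejection capacity
legacy_air_racks = 60          # air-cooled racks staying in place
legacy_rack_kw = 15            # typical brownfield rack density

headroom_kw = hall_air_capacity_kw - legacy_air_racks * legacy_rack_kw

ai_rack_kw = 132               # liquid-cooled AI rack served via an L2A sidecar
max_ai_racks = headroom_kw // ai_rack_kw

print(f"{headroom_kw} kW of air headroom -> ~{max_ai_racks} AI racks via L2A")
# 1100 kW of air headroom -> ~8 AI racks via L2A
```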

Designing for 132 kW Racks—and 600 kW on the Horizon

Carlini then pulled back the curtain on Schneider Electric’s ongoing collaboration with NVIDIA to keep up with this densification.

NVIDIA supplies a chip roadmap; Schneider builds digital twins of the future clusters and runs full-scale testing before releasing any reference design. That process has already surfaced a critical reality: the real world is hotter than the spec sheet.

  • Early plans for Blackwell GB200 racks assumed ~120 kW per rack.

  • When Schneider tested, the actual number rose to ~132 kW due to higher real-world power draw and thermal behavior.
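
That roughly 10% gap is simple to reproduce as back-of-envelope arithmetic. A minimal sketch, treating the uplift factor as an assumption inferred from the figures above rather than a published spec:

```python
# Why a rack planned at 120 kW can test out near 132 kW: real-world power
# draw and thermal behavior add an uplift over the nameplate assumption.
# The ~10% factor here is inferred from the figures above, not a spec.

def tested_rack_kw(nameplate_kw: float, uplift: float) -> float:
    """Apply a measured real-world uplift to a nameplate rack rating."""
    return nameplate_kw * (1.0 + uplift)

print(tested_rack_kw(120.0, 0.10))  # -> 132.0 kW
```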

Looking ahead:

  • Blackwell Ultra (GB300) is expected to push densities even higher.

  • Future architectures like Vera Rubin and Kyber point toward rack densities in the hundreds of kilowatts—Carlini suggested 600 kW per rack is in sight.

“There just aren’t enough data center designers in the world to reinvent power and cooling for every new chip generation,” he said. That’s why validated, documented reference designs—from substation to rack manifold—are becoming essential for anyone trying to deploy AI clusters at scale and at speed.

Prefab, Modularity and the Manufacturing Mindset

As racks grow more intense and systems more complex, the panelists agreed: the industry has to stop thinking of data centers as one-off construction projects, and start thinking like manufacturers.

Whitmore emphasized the role of prefabricated modular blocks:

  • Electrical blocks and data hall blocks built in controlled factory environments.

  • Integrated piping, busway, fire protection and leak detection designed as a system.

  • Consistent builds that shorten construction schedules and eliminate trades “stepping on each other’s toes.”

This modularity not only increases speed to market, it makes unknowns more manageable. Hyperscalers today often provision for a 50/50 split between air and liquid cooling, knowing the roadmap could tip either way; a modular approach lets them adapt as strategies evolve.

Kalra picked up that theme from Compass’ perspective.

“We don’t think of ourselves as constructing a data center,” he said. “We think of it as assembling one.”

Compass designs buildings as 100-year shells wrapped around modular technology:

  • Technology stacks refreshed roughly every 4–7 years.

  • Facility stacks (power and cooling) refreshed every 20–25 years.

  • Precast panels instead of poured walls.

  • Modular electrical and cooling systems that can be swapped one block or one hall at a time.

The goal is simple: refresh without disrupting operations. That’s only achievable if modularity is baked in from the start.

Carlini added that the same principle applies at the rack and cluster level. An AI “pod” of NVL72 racks—$3 million each and weighing ~5,000 pounds—must arrive as a well-engineered system: pre-integrated liquid cooling, manifolds, sensors, leak detection, and control logic. “Time to cooling” is one thing, he said; “time to optimization” is another.

AI Inside the Operations: From Calendar-Based to Condition-Based

If “AI for good” begins with building AI factories more intelligently, it continues with using AI to run them more safely and efficiently.

Kalra highlighted one of the most stubborn realities in critical facilities: human error.

Uptime Institute has long observed that roughly 60–70% of facility incidents are tied to human factors. Combine that with a shortage of experienced staff, and the question becomes: how do you keep hands out of the machines unless they truly need to be there?

For Compass, the answer is a shift from calendar-based maintenance to condition-based maintenance.

Drawing on lessons from aviation’s Reliability Centered Maintenance (RCM), Kalra described Compass’ work with Schneider Electric to embed sensors into electrical and mechanical equipment and feed those signals into ML models that can:

  • Detect emerging anomalies before failures.

  • Recommend when to intervene and when to safely defer.

  • Reduce unnecessary truck rolls and invasive maintenance on healthy equipment.
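
Mechanically, such a model can be as simple as flagging departures from a rolling baseline. Below is a minimal sketch of that idea on a single sensor channel; the window size, threshold and telemetry are illustrative assumptions, not Compass’s or Schneider Electric’s production models:

```python
import numpy as np

def flag_anomalies(readings: np.ndarray, window: int = 96, z_thresh: float = 4.0) -> list[int]:
    """Flag samples that deviate strongly from a rolling baseline.

    readings: one sensor channel (e.g., breaker temperature, pump vibration).
    window:   samples in the baseline (96 = one day at 15-minute intervals).
    Returns indices worth inspecting instead of a scheduled teardown.
    """
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_thresh:
            flags.append(i)
    return flags

# Healthy signal with one injected fault-like excursion:
rng = np.random.default_rng(0)
signal = 40 + rng.normal(0, 0.5, 500)   # e.g., degrees C
signal[450] += 8                        # sudden temperature jump
print(flag_anomalies(signal))           # expected: [450]
```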

The result, as Compass has shared publicly, has been a roughly 40% reduction in manual, on-site maintenance interventions and about a 20% decrease in OPEX—without compromising reliability.

Compass still uses a hybrid approach, he stressed, keeping some periodic checks where the models aren’t yet mature. But the direction is clear: AI-driven, condition-based maintenance will become ubiquitous in data center operations, particularly as densities and consequences of failure escalate.

Carlini added that as racks push past 100 kW, the margin for error shrinks dramatically. At 132 kW per rack and beyond, “your buffer for overheating and shutting down dramatically shortens,” which makes advance warning—not post-mortems—absolutely critical.
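
The arithmetic behind that shrinking buffer is straightforward: if coolant flow stops, rack power heats the loop’s thermal mass directly. A minimal sketch, with the loop volume and allowable temperature rise as illustrative assumptions rather than vendor specs:

```python
# Temperature rise rate if flow stops: dT/dt = P / (m * c_p).
# Loop volume and allowable rise are illustrative assumptions, not specs.

power_w = 132_000            # one GB200-class rack
coolant_kg = 100             # ~100 L of water held in the rack's local loop
cp_j_per_kg_k = 4186         # specific heat of water
allowable_rise_c = 10        # headroom before throttling or shutdown

rate_c_per_s = power_w / (coolant_kg * cp_j_per_kg_k)     # ~0.32 C/s
seconds_to_limit = allowable_rise_c / rate_c_per_s        # ~32 s
print(f"{rate_c_per_s:.2f} C/s -> ~{seconds_to_limit:.0f} s of buffer")
```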

AI, Energy and Sustainability: From Grid to Chip

When the conversation turned to sustainability, Carlini zoomed out to the grid.

Modern utility distribution networks are increasingly complex and distributed. Schneider’s software platforms already embed AI to help grid operators:

  • Forecast loads across weather patterns, local events and diverse customer types.

  • Manage growing fleets of battery energy storage systems (BESS) and behind-the-meter assets.

  • Deal with curtailment of renewables by strategically charging and discharging storage.

For large AI campuses, he noted, a one-gigawatt facility could require “ten football fields of batteries” for multi-hour backup—assets that can be leveraged not just for resiliency, but for grid support. Meanwhile, regulations governing standby generators and fuel types are evolving to allow more flexible, lower-carbon participation.
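
That battery figure is easy to sanity-check. A minimal sketch, where container capacity and footprint are rough public ballparks rather than vendor data:

```python
# Back-of-envelope check on "ten football fields of batteries" for a 1 GW campus.
# Container capacity and footprint are rough ballparks, not vendor figures.

campus_load_mw = 1_000
backup_hours = 4
energy_mwh = campus_load_mw * backup_hours       # 4,000 MWh of storage

container_mwh = 4                                # per BESS container (assumption)
container_area_m2 = 50                           # incl. clearances (assumption)

containers = energy_mwh / container_mwh          # 1,000 containers
area_m2 = containers * container_area_m2         # 50,000 m^2
football_fields = area_m2 / 5_350                # US field incl. end zones ~5,350 m^2
print(f"~{containers:.0f} containers, ~{football_fields:.0f} football fields")
# ~1000 containers, ~9 football fields -- the same order as Carlini's remark
```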

Schneider participates in initiatives like EPRI’s DCFlex program, where utilities and data center operators collaborate on frameworks for flexible, grid-aware operations.

Kalra, staying in his lane, focused on build-time sustainability:

  • Using AI to optimize construction sequencing.

  • Maximizing offsite manufacturing to reduce heavy truck rolls and on-site disruption.

  • Treating the build as a “Lego project” rather than traditional construction, shortening time-to-ready and the period of community impact around a site.

Whitmore emphasized how liquid cooling itself advances sustainability. By moving heat transfer closer to the chip, facilities can:

  • Reduce fan horsepower and large-scale air movements.

  • Use tighter control through extensive temperature and pressure sensing.

  • Enable heat reuse schemes—for example, piping low-grade heat to nearby greenhouses or district systems.

“With the predictive analytics and the instrumentation in these systems,” he said, “you’re not just getting your cooling—you’re optimizing an ecosystem.”

Five Years Out: Power, Ubiquity and Standards

In the session’s final question, the panelists were asked to imagine one AI-driven innovation that would reshape data centers for a net-positive impact five years from now.

Carlini’s answer centered on power architecture. The industry, he said, is up against the limits of what can practically be delivered at 400 V. AI factories are already driving a shift to 800 V DC architectures—an approach Google and NVIDIA have publicly championed. Power supplies will move into side-mounted “power pump” units feeding dense, liquid-cooled GPU trays, fundamentally changing how power is distributed and protected in the white space. The entire industry—from switchgear to busway to rack—will have to adapt.
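
The physics favoring 800 V DC is Ohm’s law: at fixed power, current scales as 1/V and resistive loss as I². A minimal sketch comparing a megawatt-class rack fed from today’s common 54 V in-rack bus versus an 800 V DC bus (the comparison is illustrative, not a published design):

```python
# At fixed power, I = P / V; conductor loss scales with I^2 * R.

def feed_current_a(power_w: float, volts: float) -> float:
    return power_w / volts

rack_power_w = 1_000_000                    # the 1 MW rack on the roadmap
i_54 = feed_current_a(rack_power_w, 54)     # ~18,500 A
i_800 = feed_current_a(rack_power_w, 800)   # ~1,250 A

print(f"54 V bus: {i_54:,.0f} A   800 V bus: {i_800:,.0f} A")
print(f"I^2R loss ratio for the same conductor: ~{(i_54 / i_800) ** 2:.0f}x")
# ~15x less current -> ~220x less resistive loss, or far less copper
```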

Kalra declined to pick a single innovation.

“I think AI will be everywhere,” he said, comparing it to the Industrial Revolution. Just as no one can point to a solitary invention that defined that era, he expects AI to permeate every part of design, construction, operations and community impact, for good or ill depending on how it’s applied.

Whitmore focused on standardization as the quiet but crucial innovation. Groups like ASHRAE TC 9.9 and international collaborators are already developing standards for coolant quality, temperature approaches and interoperability in liquid-cooled systems. That kind of global alignment, he argued, will be essential to scaling AI infrastructure “effectively, efficiently and sustainably” rather than as a collection of incompatible one-offs.

AI for Good: Beyond the Buzzword

If there was a through-line to the panel, it was that “AI for Good” is less about slogans and more about hard engineering and disciplined operations:

  • Designing power and cooling systems that can realistically support 100–600 kW racks.

  • Revitalizing thousands of existing data centers with liquid cooling and modular upgrades.

  • Using AI to reduce human error, target maintenance, and cut waste—both in operations and in construction.

  • Working with utilities and standards bodies to ensure that AI campuses are grid assets, not liabilities.

The AI wave is still in its early innings, but as Carlini, Kalra and Whitmore made clear in Reston, the industry’s choices now will determine whether today’s AI factories become tomorrow’s regret—or enduring examples of AI applied for good across technology, economics and the communities that host them.

This feature is the first in Data Center Frontier’s series recapping key sessions from the DCF Trends Summit 2025.
