Google’s AI-Powered Grid Revolution: How Data Centers Are Reshaping the U.S. Power Landscape

Google Unveils Groundbreaking AI Partnership with PJM and Tapestry to Reinvent the U.S. Power Grid

In a move that underscores the growing intersection between digital infrastructure and energy resilience, Google has announced a major new initiative to modernize the U.S. electric grid using artificial intelligence. The company is partnering with PJM Interconnection—the largest grid operator in North America—and Tapestry, an Alphabet moonshot backed by Google Cloud and DeepMind, to develop AI tools aimed at transforming how new power sources are brought online.

The initiative, detailed in a blog post by Alphabet and Google President Ruth Porat, represents one of Google’s most ambitious energy collaborations to date. It seeks to address mounting challenges facing grid operators, particularly the ballooning backlog of generation and storage projects awaiting interconnection to a power system unprepared for 21st-century demands.

“This is our biggest step yet to use AI for building a stronger, more resilient electricity system,” Porat wrote.

Tapping AI to Tackle an Interconnection Crisis

The timing is critical. The U.S. energy grid is facing a historic inflection point. According to the Lawrence Berkeley National Laboratory, more than 2,600 gigawatts (GW) of generation and storage projects were waiting in interconnection queues at the end of 2023—more than double the total installed capacity of the entire U.S. grid.

Meanwhile, the Federal Energy Regulatory Commission (FERC) has revised its five-year demand forecast, now projecting U.S. peak load to rise by 128 GW before 2030—more than triple the previous estimate.

Grid operators like PJM are straining to process a surge in interconnection requests, which have skyrocketed from a few dozen to thousands annually. This wave of applications has exposed the limits of legacy systems and planning tools. Enter AI.

Tapestry’s role is to develop and deploy AI models that can intelligently manage and streamline the complex process of interconnecting power sources—renewables, storage, and conventional generation—across PJM’s vast network, which spans 13 states and the District of Columbia, serving 67 million people.

A Unified, AI-Powered Grid Management Platform

The partnership’s multi-year roadmap aims to cut the interconnection approval process from years to months. Key pillars of the effort include:

  • Accelerating capacity additions: By automating time-intensive verification and modeling processes, AI tools from Tapestry will help PJM quickly assess and approve new energy projects. This could significantly reduce the development cycle for grid-connected power, addressing bottlenecks that have plagued renewable developers in particular (a minimal sketch of this kind of automated screening appears after this list).
  • Driving cost-effective grid expansion: Tapestry will integrate disparate databases and modeling tools into a single secure platform. The goal is to create a unified model of PJM’s network where grid planners and developers can collaborate seamlessly, boosting transparency and planning agility.
  • Integrating diverse energy resources: With variable renewables such as solar and wind comprising a large share of PJM’s queue, Tapestry’s AI solutions aim to enable more precise modeling and faster incorporation of these intermittent resources into the grid mix.
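
To make the first of these pillars concrete, below is a minimal sketch of the kind of screening step such tools automate: inject a proposed generator into a grid model, run a power flow, and check the result against planning limits. It uses the open-source pandapower library and its IEEE 14-bus test case; the queue entries and the voltage band are invented, and nothing here represents Tapestry’s or PJM’s actual tooling.

```python
# Hypothetical sketch of an automated interconnection screen: add a proposed
# generator to a grid model, run an AC power flow, and check bus voltages.
# Illustrative only -- not PJM's or Tapestry's software.
import pandapower as pp
import pandapower.networks as pn

def screen_request(bus: int, p_mw: float) -> bool:
    """Rough feasibility screen: does the injection keep voltages in band?"""
    net = pn.case14()  # IEEE 14-bus test case stands in for a real grid model
    pp.create_sgen(net, bus=bus, p_mw=p_mw, name="proposed project")
    pp.runpp(net)      # AC power flow
    v = net.res_bus.vm_pu
    return bool(((v >= 0.90) & (v <= 1.10)).all())  # illustrative +/-10% band

# Screen a hypothetical queue of (bus, MW) requests in one pass.
queue = [(3, 40.0), (9, 25.0), (13, 60.0)]
for bus, mw in queue:
    verdict = "pass" if screen_request(bus, mw) else "needs detailed study"
    print(f"bus {bus:>2}, {mw:5.1f} MW -> {verdict}")
```

A real study pipeline layers many more checks (thermal loading, short-circuit duty, contingencies) on top of this pattern; the point is that each check is mechanical enough to automate and parallelize across an entire queue.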

Strategic Implications for Data Centers and the AI Economy

For the data center industry—especially as AI workloads dramatically reshape infrastructure demand—Google’s announcement is more than a technical achievement. It’s a signal that the rules of engagement for grid interaction are changing. As hyperscalers seek not only to power their operations sustainably but also to help shape the energy systems around them, partnerships like this one may become a template.

Google is also backing complementary technologies such as advanced nuclear and enhanced geothermal, with the long-term goal of unlocking new, firm capacity. These efforts align with the industry’s growing push for direct grid participation and innovative procurement strategies to manage skyrocketing power needs.

As Porat noted, “Creative solutions from across the private and public sectors are crucial to ensure the U.S. has the energy capacity, affordability, and reliability needed to capitalize on the opportunity for growth.”

Backing PJM’s Long-Term Planning Reforms

This collaboration arrives at a critical juncture for PJM, which is already deep into a multi-year effort to reform its planning and interconnection processes. The AI-powered partnership with Google and Tapestry is designed to complement and accelerate this work—especially as PJM processes the final 67 GW of projects remaining in its interconnection transition phase, part of a broader 200 GW backlog.

“Innovation will be critical to meeting the demands on the future grid, and we’re leveraging some of the world’s best capabilities with these cutting-edge tools,” said Aftab Khan, Executive Vice President of Operations, Planning & Security at PJM. “PJM is committed to bringing new generation onto the system as quickly and reliably as possible.”

PJM plans to launch a new cycle-based process for interconnection applications in early 2026, and the AI partnership is expected to play a foundational role in that effort. As part of its broader grid modernization push, PJM has also rolled out the Reliability Resource Initiative, aimed at expediting selected projects within its current queue.

Tapestry General Manager Page Crahan described the effort as one that “will enable PJM to make faster decisions with greater confidence, making more energy capacity available to interconnect in shorter time frames.” For Google, this isn’t just about grid optimization—it’s a strategic necessity for a digital economy whose energy appetite is growing exponentially.

“This initiative brings together our most advanced technologies to help solve one of the greatest challenges of the AI era—evolving our electricity systems to meet this moment,” said Amanda Peterson Corio, Head of Data Center Energy for Google.

Looking Ahead: A Blueprint for Grid Innovation

Google’s collaboration with PJM and Tapestry represents more than a software upgrade. It’s an architectural rethinking of how intelligence and infrastructure must co-evolve. At the heart of this shift is the belief that AI isn’t just a driver of data center demand—it may also be the key to making that demand sustainable.

By aligning cutting-edge AI innovation with PJM’s operational depth and Tapestry’s moonshot ambition, this partnership lays the groundwork for something much larger: a national model for grid modernization. It represents a fusion of deep tech, institutional coordination, and real-world urgency—the very factors that will define the power landscape of the AI era.

For the data center industry and beyond, it’s a clear signal that the future grid won’t just be bigger—it will need to be smarter, faster, and more adaptive to the surging complexity of energy demand. For data center operators, energy developers, and policymakers, this initiative offers a compelling glimpse of what that grid could look like—with AI at the helm.

Hyperscalers’ Growing Role in Grid Modernization: Expanding AI-Driven Initiatives

As Google embarks on its collaboration to modernize the U.S. power grid through artificial intelligence, other hyperscalers are following suit with initiatives aimed at addressing the challenges posed by an increasingly strained energy infrastructure. The intersection of AI, data centers, and energy resilience is rapidly emerging as a central focus for major players like Microsoft, Amazon, and Meta, which are aligning their strategies to accelerate grid modernization and optimization.

Microsoft’s AI-Powered Grid Optimization

Microsoft is another hyperscaler at the forefront of AI applications in grid management. The company has been exploring the potential of AI for grid optimization as part of its broader commitment to sustainability and energy efficiency.

In partnership with the Bonneville Power Administration (BPA) and other utility providers in the Pacific Northwest, Microsoft is leveraging AI to forecast and balance electricity demand across the region. The initiative, known as the “Grid Optimization Project,” aims to reduce energy waste and enhance grid reliability by predicting shifts in energy consumption with far greater accuracy than conventional forecasting methods.

By applying machine learning algorithms to real-time grid data, Microsoft’s AI tools can anticipate fluctuations in renewable energy generation, such as solar and wind, and adjust load distribution accordingly. The goal is to integrate renewable energy more seamlessly into the grid while maintaining stability and avoiding blackouts.
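
As a toy illustration of that forecasting loop, the sketch below fits a gradient-boosted regressor to synthetic wind-speed and time-of-day features, then uses the forecast to size a flexible load. The data, model choice, and dispatch rule are all assumptions for illustration, not Microsoft’s production system.

```python
# Toy ML forecast of wind output feeding a simple load-shifting decision.
# Synthetic data; illustrative only, not Microsoft's production system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(seed=0)
n = 2000
wind_speed = rng.gamma(shape=2.0, scale=4.0, size=n)  # m/s
hour = rng.integers(0, 24, size=n)
# Synthetic "truth": cubic power curve capped at a 100 MW rating, plus noise.
wind_mw = np.clip(0.5 * wind_speed**3, 0, 100) + rng.normal(0, 5, size=n)

X = np.column_stack([wind_speed,
                     np.sin(2 * np.pi * hour / 24),  # encode the daily cycle
                     np.cos(2 * np.pi * hour / 24)])
model = GradientBoostingRegressor(random_state=0).fit(X, wind_mw)

# Forecast the 3 p.m. hour at 9 m/s wind, then size flexible load to match.
x_next = [[9.0, np.sin(2 * np.pi * 15 / 24), np.cos(2 * np.pi * 15 / 24)]]
forecast_mw = model.predict(x_next)[0]
shiftable_mw = min(0.3 * forecast_mw, 20.0)  # toy dispatch rule
print(f"wind forecast {forecast_mw:.1f} MW -> schedule {shiftable_mw:.1f} MW flexible load")
```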

In addition, Microsoft has committed to providing its AI solutions to help utilities across the U.S. improve grid flexibility and resilience, positioning itself as a key player in transforming the power sector through digital infrastructure.

Amazon’s Renewable Integration and Demand Response

Amazon has also recognized the critical role AI will play in the future of grid modernization. Through the Amazon Web Services (AWS) platform, the company is actively developing AI models to enhance renewable energy integration and optimize energy consumption for its massive network of data centers.

As part of its commitment to reaching net-zero carbon emissions by 2040, Amazon is using AI to balance energy use and improve grid demand response, particularly in areas where renewable energy penetration is high and intermittency poses challenges.

One of Amazon’s standout efforts is its partnership with the California Independent System Operator (CAISO) to develop an AI-based energy management platform that predicts and mitigates the risks associated with renewable energy volatility. The system not only helps Amazon data centers adjust their energy usage during periods of low supply but also assists CAISO in managing grid congestion by offering real-time insights into demand patterns.
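
A bare-bones version of that demand-response idea, with invented numbers rather than anything from Amazon or CAISO, is sketched below: move deferrable batch load into the hours with the most forecast renewable supply.

```python
# Toy demand-response schedule: shift deferrable batch compute into the hours
# with the most forecast renewable supply. All numbers are invented.
HOURS = range(24)
forecast_supply_mw = {h: 120 if 9 <= h <= 17 else 40 for h in HOURS}  # solar-heavy day
BASELINE_MW = 80.0  # inflexible data center load
FLEXIBLE_MW = 25.0  # batch jobs that can move in time
WINDOW = 8          # hours of batch work to place

def schedule(forecast: dict) -> dict:
    """Place flexible load in the WINDOW hours with the highest forecast supply."""
    best = set(sorted(forecast, key=forecast.get, reverse=True)[:WINDOW])
    return {h: BASELINE_MW + (FLEXIBLE_MW if h in best else 0.0) for h in HOURS}

plan = schedule(forecast_supply_mw)
for h in (3, 12, 22):
    print(f"hour {h:02d}: load {plan[h]:5.1f} MW (forecast supply {forecast_supply_mw[h]} MW)")
```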

Amazon’s continued innovation in AI-driven energy solutions reflects the company’s broader strategy to decarbonize its operations while ensuring the reliability and efficiency of the power systems it relies on.

Meta’s Strategic Energy Investments and AI Integration

Meta is similarly exploring AI applications in grid management, but with a focus on accelerating the transition to renewable energy sources for its data centers. As part of its strategy to reach 100% renewable energy for global operations by 2030, Meta has invested in AI technologies designed to optimize energy procurement and minimize carbon emissions.

Through partnerships with several utility providers, Meta is using AI to predict energy demand and automate the process of sourcing clean energy at the lowest cost. Meta’s AI system integrates data from smart grids and renewable energy sources to build an efficient energy portfolio, enabling the company to adjust its data centers’ energy usage in real time.
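
Stripped to its core, that least-cost sourcing decision is a small linear program: meet demand from several contracts at minimum cost, subject to each source’s capacity. The contracts, prices, and capacities below are invented for illustration; this is not Meta’s actual procurement system.

```python
# Toy least-cost energy procurement as a linear program. All inputs invented.
from scipy.optimize import linprog

demand_mwh = 100.0
#           (name,       $/MWh,  max MWh this hour)
sources = [("wind PPA",   28.0,   60.0),
           ("solar PPA",  24.0,   50.0),
           ("grid spot",  55.0,  200.0)]

costs = [price for _, price, _ in sources]
bounds = [(0.0, cap) for _, _, cap in sources]
# One equality constraint: purchases across all sources must sum to demand.
res = linprog(c=costs, A_eq=[[1.0] * len(sources)], b_eq=[demand_mwh], bounds=bounds)

for (name, _, _), mwh in zip(sources, res.x):
    print(f"{name:>9}: {mwh:6.1f} MWh")
print(f"total cost: ${res.fun:,.2f}")
```

Here the solver fills the cheapest contracts first and falls back to the spot market only when they are exhausted; real procurement adds time coupling, carbon accounting, and risk terms, but the optimization core has the same shape.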

The company is also investigating AI’s potential in demand-side management, which allows energy consumers to influence grid stability and optimize usage based on fluctuating supply. With its AI-powered solutions, Meta aims to demonstrate how large-scale energy consumption can be made more adaptable to the changing dynamics of the grid.

Strategic Implications for the Data Center Industry

For the data center industry, these AI-driven initiatives represent a new paradigm in grid interaction and energy management. As hyperscalers increasingly integrate AI into their operations, they are positioning themselves not only as innovators in energy optimization but also as active contributors to broader grid modernization efforts.

By creating smarter, more adaptive energy ecosystems, hyperscalers are paving the way for a more resilient grid capable of meeting the surging demand for energy from digital infrastructure. The growing role of hyperscalers in grid modernization also highlights the broader trend of digital infrastructure and energy systems co-evolving.

As AI continues to drive advancements in both data center operations and energy grid management, these companies are well-positioned to influence the future of power distribution and generation. The efforts made by Google, Microsoft, Amazon, and Meta underscore a pivotal shift: that AI is not only a tool for powering the digital economy but also a critical enabler of sustainable and resilient energy systems for the future.

In this context, collaborations like Google’s with PJM and Tapestry are more than just technical partnerships—they signal a new approach to energy management in the AI era. For the data center industry and the grid operators that serve it, this intersection of digital and energy infrastructure is likely to define the future of how power is distributed, optimized, and consumed at scale.
