
Jensen Huang Maps the AI Factory Era at NVIDIA GTC 2026


SAN JOSE, Calif. — If there was a single message that emerged from Jensen Huang’s keynote at Nvidia’s GTC conference this week, it was this: the artificial intelligence revolution is entering its infrastructure phase.

For the past several years, the technology industry has been preoccupied with training ever larger models. But in Huang’s telling, that era is already giving way to something far bigger: the industrial-scale deployment of AI systems that run continuously, generating intelligence on demand.

“The inference inflection point has arrived,” Huang told the audience gathered at the SAP Center.

That shift carries enormous implications for the data center industry. Instead of episodic bursts of compute used to train models, the next generation of AI systems will require persistent, high-throughput infrastructure designed to serve billions, and eventually trillions, of inference requests every day.

And the scale of the buildout Huang envisions is staggering.

Throughout the keynote, the Nvidia CEO repeatedly referenced what he believes will become a trillion-dollar global market for AI infrastructure in the coming years, spanning accelerated computing systems, networking fabrics, storage architectures, power systems, and the facilities required to house them.

At that scale, Huang argued, data centers are no longer simply IT facilities. They are truly becoming AI factories: industrial systems designed to convert electricity into tokens.

“Tokens are the new commodity,” Huang said. “AI factories are the infrastructure that produces them.”

Across more than two hours on stage, Huang sketched the architecture of that new computing platform, introducing new computing systems, networking technologies, software frameworks, and infrastructure blueprints designed to support what Nvidia believes will be the largest computing buildout in history.

Four main themes defined the presentation:

• The arrival of the inference inflection point.
• The emergence of OpenClaw as a foundational operating layer for AI agents.
• New hybrid inference architectures involving companies such as Groq.
• The growing role of optical networking and digital-twin simulation in designing gigawatt-scale AI infrastructure.

Together, the announcements offered a glimpse of the emerging architecture behind the next generation of AI infrastructure.

CUDA’s Twenty-Year Flywheel

Huang opened by marking the 20th anniversary of CUDA, Nvidia’s GPU programming platform that has become the backbone of modern AI development.

When CUDA launched in 2006, it represented a radical idea: using GPUs not just for graphics but as programmable parallel processors capable of accelerating scientific and analytical workloads.

Two decades later, that bet has grown into one of the most influential software ecosystems in computing.

CUDA now supports:

  • Hundreds of millions of deployed GPUs

  • Thousands of development libraries and tools

  • Hundreds of thousands of open-source CUDA projects

That ecosystem created what Huang described as Nvidia’s defining advantage: a developer flywheel.

How does that work? Developers build algorithms. Breakthrough algorithms create new applications. New applications expand the installed base. And a larger installed base attracts still more developers, keeping the flywheel spinning.

“The flywheel is now accelerating,” Huang said.

The result is a platform dynamic that continues to improve the performance of existing hardware as software evolves, allowing accelerated computing to keep driving performance gains even as traditional transistor scaling slows.

“Moore’s Law has run out of steam,” Huang said. “Accelerated computing is how we take the next giant leap.”

AI Is Rewriting the Data Stack

Another major theme of the keynote was the growing role of AI in data analysis itself.

Huang described enterprise data as increasingly divided into two categories.

Structured data, stored in relational databases and analytics systems.

And unstructured data, including documents, images, video, and audio, which now accounts for roughly 90 percent of newly generated information.

Historically, unstructured data has been difficult to analyze at scale because it lacks the indexing and schema of traditional databases.

AI models are changing that.

Using multimodal reasoning capabilities, modern systems can read documents, interpret images, extract meaning, and organize massive data repositories.

To support this shift, Nvidia has introduced new data frameworks designed to accelerate both structured and unstructured workloads.

The company highlighted new GPU-accelerated libraries for data frames and vector databases, along with integrations with enterprise platforms including IBM WatsonX, Dell AI infrastructure systems, and Google Cloud analytics services.

In one example cited during the keynote, food giant Nestlé reportedly accelerated a supply-chain analytics workload fivefold while reducing compute costs by 83 percent using GPU-accelerated processing.
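The keynote showed no code, but the workload these libraries target is familiar dataframe analytics. As a rough illustration only (the dataset, columns, and drop-in acceleration path below are assumptions, not details from the keynote), this is the kind of aggregation GPU dataframe libraries such as RAPIDS cuDF aim to speed up with a largely pandas-compatible API:

```python
import pandas as pd  # swapping in a GPU-backed drop-in such as cudf.pandas is the
                     # usual acceleration path (an assumption, not from the keynote)

# Hypothetical supply-chain shipment records, stand-ins for the kind of
# structured enterprise data Huang described.
shipments = pd.DataFrame({
    "region":    ["EMEA", "EMEA", "APAC", "AMER", "APAC", "AMER"],
    "sku":       ["A12", "B07", "A12", "C33", "B07", "A12"],
    "units":     [1200, 340, 980, 410, 760, 1500],
    "late_days": [0, 3, 1, 0, 5, 2],
})

# Aggregate volume and delay by region: the same code runs unchanged on a
# GPU dataframe backend, which is where the claimed speedups come from.
summary = (
    shipments.groupby("region")
    .agg(total_units=("units", "sum"), avg_delay=("late_days", "mean"))
    .sort_values("total_units", ascending=False)
)
print(summary)
```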

The AI Platform Shift

Huang framed the current moment as the latest in a series of computing platform transitions.

The personal computer era produced companies like Microsoft and Intel.

The internet era produced Google and Amazon.

The mobile and cloud era reshaped enterprise software.

Now, Huang argues, artificial intelligence is driving the next platform shift.

Investment reflects that shift.

Venture funding for AI startups has already surpassed $150 billion, making it one of the largest waves of technology investment in history.

Three breakthroughs accelerated that transition.

First came generative AI, capable of creating new content rather than simply retrieving information.

Second came reasoning models, capable of planning, verifying results, and solving complex tasks.

Third came agentic AI systems, capable of executing multi-step workflows autonomously.

When AI systems begin performing productive work – reading documents, writing software, performing research – the computing requirements change dramatically.

Every step requires inference. Thinking requires inference; reading requires inference; generating output requires inference. That dynamic, Huang said, is driving an explosion in computing demand.

Nvidia estimates that AI compute demand has increased roughly one million-fold in the past two years when accounting for both model complexity and usage.

The Inference Inflection Point

For much of the past decade, the AI industry has been defined by training workloads.

But Huang argued that the center of gravity has shifted: “The inference inflection has arrived.”

AI systems are increasingly running continuously: generating responses, executing tasks, and interacting with users in real time.

Instead of occasional bursts of compute used for model training, AI infrastructure must now support persistent, high-throughput inference workloads.

That change dramatically increases the importance of infrastructure efficiency.

Which brought Huang to the concept at the center of the keynote.

Data Centers Become Token Factories

In Huang’s framing, modern data centers are evolving into AI factories.

Rather than storing data or hosting applications, these facilities are designed to generate tokens, the fundamental units of output produced by AI models.

“Tokens are the new commodity,” Huang said.

In this framework, the economics of AI infrastructure revolve around a single metric: tokens per watt.

Power availability has already emerged as one of the most significant constraints on AI infrastructure expansion.

As a result, the productivity of AI factories increasingly depends on how efficiently they convert electricity into inference output.

Every improvement in compute architecture, networking bandwidth, cooling efficiency, or software optimization ultimately serves the same goal: increasing token production within a fixed power envelope.

Grace Blackwell and the Inference Breakthrough

Nvidia’s current architecture, Grace Blackwell NVLink72, represents a major step toward that goal.

The system connects 72 GPUs into a single high-bandwidth computing domain using Nvidia’s NVLink switching fabric.

According to benchmarks discussed during the keynote, the platform delivers dramatic improvements in inference performance.

Industry analysts estimate that the architecture provides 35 to 50 times higher performance than previous generation systems for certain workloads.

Those gains come from multiple innovations:

  • NVFP4 precision designed for inference workloads

  • Optimized tensor processing algorithms

  • Extensive kernel tuning via DGX Cloud

  • Tight hardware-software integration

The result is dramatically lower token cost.

“If you have the wrong architecture,” Huang said, “even if it’s free, it’s not cheap enough.”

Rubin and the Architecture of AI Factories

Nvidia’s next generation platform, Vera Rubin, extends that design philosophy.

Rubin is engineered specifically for agentic AI systems, which require massive memory bandwidth and extremely fast interconnects.

The architecture integrates:

  • NVLink72 GPU clusters

  • A new Vera CPU for orchestration workloads

  • AI-optimized storage systems

  • Co-packaged optical networking

  • Fully liquid-cooled rack infrastructure

Rubin systems will deliver roughly 3.6 exaflops of compute per rack-scale system.

The architecture is also designed to operate with 45-degree Celsius hot-water cooling, reducing the thermal burden on data center facilities.

According to Huang, the Rubin infrastructure could increase token output from 2 million tokens per second to roughly 700 million tokens per second within a gigawatt-scale AI facility.
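Taking the keynote’s figures at face value, the tokens-per-watt framing reduces to simple arithmetic. The back-of-the-envelope sketch below (an illustration, not an Nvidia calculation) converts the quoted gigawatt-scale throughput into tokens per joule and per kilowatt-hour:

```python
# Back-of-the-envelope tokens-per-watt math using the figures quoted above.
FACILITY_POWER_W = 1e9          # a gigawatt-scale AI factory
TOKENS_PER_SECOND = 700e6       # Rubin-era output cited in the keynote

tokens_per_joule = TOKENS_PER_SECOND / FACILITY_POWER_W   # 1 W for 1 s = 1 J
tokens_per_kwh = tokens_per_joule * 3.6e6                 # 3.6 MJ per kWh
tokens_per_day = TOKENS_PER_SECOND * 86_400

print(f"{tokens_per_joule:.2f} tokens per joule")
print(f"{tokens_per_kwh:,.0f} tokens per kWh")
print(f"{tokens_per_day:,.0f} tokens per day")
```

Every architectural change Huang described, from NVFP4 precision to co-packaged optics, shows up in this framing as a higher tokens-per-joule figure inside the same power envelope.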

Disaggregated Inference and the Groq Integration

Another notable development in the keynote was Nvidia’s collaboration with Groq.

Groq processors use a deterministic dataflow architecture optimized for ultra-low-latency inference workloads.

Unlike GPUs, which dynamically schedule tasks, Groq processors execute statically compiled computation pipelines.

That design enables extremely fast token generation.

Historically, Groq chips lacked the memory capacity required to host large models independently.

Nvidia’s solution is disaggregated inference, in which the stages of the inference pipeline are split across different processors and each stage runs on the hardware best suited to it.

Under this approach, the two systems operate together through Nvidia’s Dynamo inference orchestration software.

The hybrid architecture could deliver up to 35 times higher performance for certain inference workloads, Huang said.
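The keynote did not spell out how the work is divided, but disaggregated inference commonly means separating the memory-heavy prefill (prompt processing) phase from the latency-sensitive decode (token generation) phase and running each on different hardware. The sketch below illustrates that routing pattern in generic terms; the class names and the division of labor are assumptions for illustration, not the Dynamo API:

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    prompt: str
    max_new_tokens: int

class PrefillBackend:
    """Stand-in for a large-memory GPU pool that processes the full prompt."""
    def prefill(self, request: InferenceRequest) -> dict:
        # Returns an opaque KV-cache handle the decode stage can consume.
        return {"kv_cache": f"kv({len(request.prompt)} chars)", "prompt": request.prompt}

class DecodeBackend:
    """Stand-in for a low-latency accelerator that generates tokens one by one."""
    def decode(self, kv_state: dict, max_new_tokens: int) -> list[str]:
        return [f"token_{i}" for i in range(max_new_tokens)]

class Orchestrator:
    """Toy stand-in for an inference orchestrator that pairs the two stages."""
    def __init__(self, prefill: PrefillBackend, decode: DecodeBackend):
        self.prefill_pool = prefill
        self.decode_pool = decode

    def serve(self, request: InferenceRequest) -> list[str]:
        kv_state = self.prefill_pool.prefill(request)  # memory-bound stage
        return self.decode_pool.decode(kv_state, request.max_new_tokens)  # latency-bound stage

if __name__ == "__main__":
    orchestrator = Orchestrator(PrefillBackend(), DecodeBackend())
    print(orchestrator.serve(InferenceRequest("Summarize the GTC keynote.", 8)))
```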

Co-Packaged Optics Enters the AI Data Center

Networking also emerged as a central theme.

Huang announced that Nvidia’s Spectrum-X networking platform now includes the industry’s first production co-packaged optical switch, developed in collaboration with TSMC.

Co-packaged optics integrates optical transceivers directly into networking silicon, allowing electrical signals to convert to optical signals within the switch package itself.

The result is dramatically higher bandwidth and lower power consumption compared with conventional pluggable optical modules.

As AI clusters scale into tens of thousands of GPUs, networking bandwidth is becoming as important as compute performance.

Future Nvidia architectures will combine both copper and optical connectivity.

“Are we going to scale with copper?” Huang asked.

“Yes.”

“Are we going to scale with optics?”

“Yes.”

DSX: Designing the AI Factory

Perhaps the most consequential infrastructure announcement was Nvidia’s introduction of the Vera Rubin DSX AI Factory reference architecture and the Omniverse DSX digital-twin blueprint.

Together, the two systems represent Nvidia’s attempt to standardize how gigawatt-scale AI infrastructure is designed and deployed.

The DSX reference architecture provides a guide for building fully integrated AI factories spanning compute, networking, storage, power, and cooling systems.

Meanwhile, the Omniverse DSX blueprint allows operators to create physically accurate digital twins of AI factories before construction begins.

Using Nvidia’s Omniverse simulation environment, developers can model a planned facility in detail.

These simulations allow operators to evaluate design decisions and operational policies before deploying physical infrastructure.

In an environment where AI campuses may cost tens of billions of dollars, Nvidia believes such digital twins will become essential tools.

A broad ecosystem of companies is already integrating with the platform, including Cadence, Dassault Systèmes, Schneider Electric, Siemens, Vertiv, Trane Technologies, and Switch.

Energy companies including GE Vernova, Siemens Energy, Hitachi Energy, and Emerald AI are also working with Nvidia to integrate grid-level modeling into the system.

The platform includes software components designed to optimize operations.

  • DSX Max-Q maximizes computing output within fixed power budgets.
  • DSX Flex allows AI factories to dynamically adjust power consumption in response to grid conditions.
  • DSX Exchange connects IT systems with facility and energy management platforms.
  • DSX Sim enables high-fidelity digital-twin simulations of full AI factory deployments.

Taken together, the DSX architecture reflects Nvidia’s view that AI infrastructure must be co-designed with energy systems, rather than treated as a traditional IT workload.
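Nvidia did not publish the control logic behind these components. As a loose illustration of the Max-Q idea (maximize token output within a fixed power budget), the toy allocator below greedily fills a power cap with the most token-efficient workloads first; all names and numbers are hypothetical:

```python
# Toy illustration of the Max-Q idea: fill a fixed power budget with the
# most token-efficient workloads first. Hypothetical numbers throughout.

workloads = [
    # (name, power draw in MW, output in millions of tokens per second)
    ("chat-serving",      40, 55.0),
    ("code-agents",       60, 48.0),
    ("batch-summaries",   80, 90.0),
    ("video-generation", 120, 60.0),
]

POWER_BUDGET_MW = 200

# Rank by tokens per megawatt, i.e. the tokens-per-watt metric at facility scale.
ranked = sorted(workloads, key=lambda w: w[2] / w[1], reverse=True)

scheduled, used_mw, total_mtok = [], 0, 0.0
for name, power_mw, mtok_per_s in ranked:
    if used_mw + power_mw <= POWER_BUDGET_MW:
        scheduled.append(name)
        used_mw += power_mw
        total_mtok += mtok_per_s

print(f"Scheduled: {scheduled}")
print(f"Power used: {used_mw} MW of {POWER_BUDGET_MW} MW")
print(f"Output: {total_mtok:.0f}M tokens/s within the power envelope")
```

Read through the same lens, DSX Flex is the inverse move, presumably throttling the least token-efficient work first when grid conditions require the facility to reduce its draw.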

OpenClaw, Nemo, and Nemotron: The Software Stack for Agentic AI

On the software side, Huang highlighted the rapid emergence of OpenClaw, an open framework for orchestrating AI agents.

“OpenClaw opened the next frontier of AI to everyone and became the fastest-growing open source project in history,” said Jensen Huang.

The system connects language models, enterprise tools, APIs, and data systems into coordinated workflows.

Huang described OpenClaw as an operating system for agentic computing: a layer that manages how AI systems plan, execute, and coordinate work.

Agents built on the platform can (see the generic sketch after this list):

  • Access databases and APIs

  • Plan multi-step workflows

  • Execute tasks

  • Coordinate with other agents
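OpenClaw’s actual interfaces were not shown on stage. The sketch below is a generic plan-and-execute agent loop of the kind the list above describes, with hypothetical tool names and no claim to match the OpenClaw API:

```python
from typing import Callable

# Hypothetical tools an agent might be granted: names and behavior are
# illustrative only, not part of any specific framework.
def query_database(sql: str) -> str:
    return f"rows for: {sql}"

def call_api(endpoint: str) -> str:
    return f"response from {endpoint}"

TOOLS: dict[str, Callable[[str], str]] = {
    "query_database": query_database,
    "call_api": call_api,
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in for a model-generated plan: an ordered list of (tool, argument) steps."""
    return [
        ("query_database", "SELECT status FROM shipments WHERE late = true"),
        ("call_api", "/notify/logistics-team"),
    ]

def run_agent(goal: str) -> list[str]:
    """Plan a multi-step workflow, execute each step with its tool, collect results."""
    results = []
    for tool_name, argument in plan(goal):
        tool = TOOLS[tool_name]
        results.append(tool(argument))
    return results

if __name__ == "__main__":
    for step_output in run_agent("Find late shipments and alert the team"):
        print(step_output)
```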

Huang compared the technology’s potential significance to foundational internet platforms.

“OpenClaw is as important as Linux,” he said. “It is as important as HTML. It is as important as Kubernetes.”

But OpenClaw is only one layer of what Huang is assembling.

If OpenClaw is the operating system, Nvidia’s Nemo platform and Nemotron models form the intelligence layer that runs on top of it.

Nvidia introduced NemoClaw, an enterprise implementation designed to bring security, governance, and policy control to agent systems. It addresses a key concern Huang raised: agents that can access sensitive data, execute code, and communicate externally cannot operate without guardrails.

At the same time, Nvidia is expanding its Nemotron family of open models, positioning them as a foundational layer for agentic systems across industries.

To accelerate that effort, the company announced the NVIDIA Nemotron Coalition, a collaboration with leading AI-native companies including Mistral AI, Perplexity, LangChain, Cursor, and others.

The coalition will co-develop open, frontier-scale foundation models, trained on Nvidia’s DGX Cloud infrastructure, that can be specialized for different industries, regions, and use cases.

Those models will underpin the next generation of Nemotron systems, which Huang positioned as customizable, domain-specific intelligence layers for enterprise AI.

“Open models are the lifeblood of innovation,” Huang said.

The strategy reflects a broader shift.

Rather than building a single dominant model, Nvidia is attempting to seed an ecosystem of open, extensible foundation models that enterprises can adapt to their own data, workflows, and regulatory environments.

In Huang’s architecture, the pieces fit together:

  • Nemotron provides the base intelligence.

  • OpenClaw orchestrates agent behavior.

  • NemoClaw secures and governs deployment.

Together, they form what Huang is effectively positioning as a full software stack for agentic AI.

And just as CUDA defined the programming model for GPU computing, Nvidia is now attempting to define the software architecture for the next phase of AI.

From Agents to Physical AI: Robotics, Autonomy, and Space Data Centers

If the first half of Huang’s keynote was about the economics of inference, the latter portion extended that logic into the physical world.

Agentic AI systems, he argued, are not confined to software.

They are the foundation of what Nvidia calls physical AI: systems that perceive, reason, and act in real-world environments.

“We have digital agents,” Huang said. “Now we have physically embodied agents. We call them robots.”

Nvidia’s approach to robotics reflects the same full-stack philosophy that defines its data center strategy. The company describes three core systems required to enable physical AI:

  • A training system for developing models.

  • A simulation system for generating synthetic data.

  • A runtime system embedded within robots themselves.

At the center of that approach is simulation.

Because real-world data is inherently limited and difficult to collect at scale, Huang emphasized the importance of generating synthetic training data using physics-based simulation and AI-generated environments.

“Compute is data,” Huang said, in a line that underscores Nvidia’s belief that simulation will become a primary driver of model development in robotics.

The company’s Isaac platform, along with newer tools such as Cosmos world models and Groot robotics models, is designed to create and train robots in simulated environments before deployment in the real world.

Autonomous Vehicles Reach Their “ChatGPT Moment”

Huang also suggested that autonomous driving has reached a turning point.

“The ChatGPT moment of self-driving cars has arrived,” he said.

Nvidia announced a new wave of automotive partnerships, including deployments with global manufacturers such as BYD, Hyundai, Nissan, and Geely, joining existing partners like Mercedes-Benz and Toyota.

In total, the company says its platform will support autonomous systems across tens of millions of vehicles annually.

A key shift is the integration of reasoning models into vehicle systems.

Instead of simply reacting to sensor inputs, next-generation autonomous systems are capable of explaining their decisions, planning actions, and adapting to complex environments.

In demonstrations during the keynote, vehicles narrated their own behavior, describing lane changes, obstacle avoidance, and route decisions in real time.

That capability reflects the broader convergence Huang described between language models and physical systems.

Autonomous vehicles are no longer just perception systems. They are reasoning systems operating in the physical world.

Infrastructure Extends to the Edge — and Beyond

Huang also pointed to a broader expansion of AI infrastructure beyond traditional data centers.

Telecommunications networks, for example, are evolving into distributed AI platforms.

Base stations — once designed purely for signal transmission — will increasingly run AI workloads, performing tasks such as traffic optimization, beamforming, and energy management.

“The base station is going to become an AI infrastructure platform,” Huang said.

That shift extends the AI factory model to the network edge.

But Huang pushed the idea even further.

Nvidia is now exploring data center infrastructure in space.

The company announced early work on a system called Vera Rubin Space, designed to operate in orbital environments where cooling must rely on radiation rather than convection.

While still experimental, the effort reflects Huang’s broader point: as AI becomes a continuous, global workload, infrastructure will expand wherever compute can be deployed efficiently.

Bottom line: From hyperscale campuses to edge networks and potentially into orbit, the boundaries of the data center are expanding.

The Next Phase of AI Is Physical

Taken together, the robotics, autonomous vehicle, and edge infrastructure announcements extend Huang’s core thesis.

AI is no longer just software. It is becoming a distributed, physical system embedded in machines, networks, and environments.

And just as in the data center, the governing constraints remain the same: compute efficiency, energy, throughput.

Whether in a factory, a vehicle, or a satellite, the same principle applies. AI systems must convert power into intelligence – as efficiently as possible.

The Infrastructure Era of AI

By the end of the keynote, Jensen Huang had effectively shifted the center of gravity for the industry.

For years, artificial intelligence has been defined by advances in models. But in Huang’s telling, the decisive challenge of the next decade will be building the systems required to run those models at global scale.

Those systems must deliver continuous inference, operate within fixed power envelopes, and integrate compute, networking, and energy infrastructure at unprecedented scale.

Nvidia’s estimate of a $1 trillion AI infrastructure market reflects the magnitude of that shift.

What emerges is not simply a larger data center industry, but a different kind of industrial system altogether: a global platform for generating intelligence as a continuous output.

If that forecast proves accurate, the industry moving to the center of the AI economy will not just be software or semiconductors, but the full stack of power, networking, and computing required to sustain it.

In that world, the defining metric is no longer capacity or uptime. It is productivity.

How much intelligence can be produced from a given amount of energy. How many tokens a data center can generate from every watt of electricity it consumes.

Because in the AI factory era, electricity is no longer just an input to computing. It is the raw material of intelligence.
