Deep Data Center: Neoclouds as the ‘Picks and Shovels’ of the AI Gold Rush

In 1849, the discovery of gold in California ignited a frenzy, drawing prospectors from around the world in pursuit of quick fortune. While few struck it rich digging and sifting dirt, a different class of entrepreneurs quietly prospered: those who supplied the miners with the tools of the trade. From picks and shovels to tents and provisions, these providers became indispensable to the gold rush, profiting handsomely regardless of who found gold.

Today, a new gold rush is underway, in pursuit of artificial intelligence. And just as in 1849, the real fortunes may lie not in the gold itself but in the infrastructure and equipment that enable its extraction. This is where neocloud players and chipmakers are positioned: as the fundamental enablers of the AI revolution.

Neoclouds: The Essential Tools and Implements of AI Innovation

The AI boom has sparked a frenzy of innovation, investment, and competition. From generative AI applications like ChatGPT to autonomous systems and personalized recommendations, AI is rapidly transforming industries. Yet behind every groundbreaking AI model lies an unsung hero: the infrastructure powering it. Enter neocloud providers, the specialized cloud platforms delivering the GPU horsepower that fuels AI's meteoric rise. Let's examine how neoclouds serve as the "picks and shovels" of the AI gold rush, supplying the essential backbone of AI innovation.

Neoclouds are emerging as indispensable players in the AI ecosystem, offering tailored solutions for compute-intensive workloads such as training large language models (LLMs) and performing high-speed inference. Unlike traditional hyperscalers (e.g., AWS, Azure, Google Cloud), which cater to a broad range of use cases, neoclouds focus exclusively on optimizing infrastructure for AI and machine learning applications. This specialization allows them to deliver superior performance at a lower cost, making them the go-to choice for startups, enterprises, and research institutions alike.

The analogy to historical gold rushes is striking: just as miners relied on suppliers of picks and shovels to extract value from the earth, today’s AI pioneers depend on neocloud providers and chipmakers to access the computational resources needed to unlock insights and drive innovation. Neoclouds don’t compete in creating AI applications themselves; instead, they profit by enabling others to do so. This business model positions them as foundational players in the AI economy—profiting regardless of which companies or applications ultimately dominate the market.

Why Neoclouds Are Surging: Four Key Advantages

As AI development accelerates, the demand for compute is outpacing even the hyperscale cloud’s capacity to deliver. Training a foundation model like GPT-4 can require tens of thousands of GPUs running continuously for weeks—something traditional providers weren’t architected to support at scale. Enter the neoclouds: leaner, purpose-built platforms designed to meet the needs of modern AI workloads with greater precision and speed.
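The scale claim above is easy to make concrete with back-of-envelope arithmetic. The cluster size and duration below are illustrative assumptions, not published figures for GPT-4 or any specific model:

```python
# Illustrative back-of-envelope: total GPU-hours for a large training run.
# All inputs are assumptions chosen for the arithmetic, not real specs.

def training_gpu_hours(num_gpus: int, days: float) -> float:
    """Total GPU-hours consumed by a cluster running continuously."""
    return num_gpus * days * 24

# Assume 20,000 GPUs running for 90 days (hypothetical numbers).
hours = training_gpu_hours(num_gpus=20_000, days=90)
print(f"{hours:,.0f} GPU-hours")  # 43,200,000 GPU-hours
```

Tens of millions of GPU-hours in one run is the kind of sustained, tightly coupled demand that general-purpose clouds were never architected to schedule as a single workload.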

Here’s how they’re closing the gap:

1. Specialized Hardware

Neocloud providers are laser-focused on providing access to the newest and most powerful GPUs—often before hyperscalers can make them widely available. NVIDIA’s H100 and A100 accelerators, crucial for training and inference, are the cornerstone of these platforms. Many neoclouds go a step further, offering liquid-cooled racks, ultra-low-latency interconnects, and AI-specific storage tiers designed to keep pace with multi-petabyte datasets. For cutting-edge AI labs and fast-moving startups, this means the difference between weeks and months in development timelines.

2. Bare-Metal Performance

By eliminating the virtualization layers common in general-purpose clouds, neoclouds give users direct access to raw compute power. This bare-metal approach reduces latency and avoids the “noisy neighbor” problem, enabling highly deterministic performance—crucial when fine-tuning large language models or orchestrating tightly coupled GPU workloads. For teams pushing the edge of performance, every clock cycle matters, and neoclouds are delivering those cycles unfiltered.

3. Scalability on Demand

AI R&D is rarely linear. One month you’re iterating on small models, and the next you’re scaling to train a 70-billion-parameter transformer. Neocloud infrastructure is designed to expand and contract with those demands—supporting everything from a few nodes to full-scale superclusters. Unlike traditional clouds, which often impose capacity planning constraints or quotas, neoclouds thrive on elasticity, provisioning capacity dynamically and often within hours rather than weeks.
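The elasticity described above can be sketched as a simple capacity rule: scale the node count to queued GPU demand, within a floor and ceiling. The function name, thresholds, and defaults below are hypothetical, not any provider's actual API:

```python
# Minimal sketch of demand-based capacity planning, assuming a
# hypothetical scheduler that exposes queued GPU demand.

def target_nodes(queued_gpus: int, gpus_per_node: int = 8,
                 min_nodes: int = 2, max_nodes: int = 512) -> int:
    """Scale node count to queued GPU demand, within a floor and ceiling."""
    needed = -(-queued_gpus // gpus_per_node)  # ceiling division
    return max(min_nodes, min(needed, max_nodes))

print(target_nodes(70))      # 9 nodes cover 70 queued GPUs
print(target_nodes(0))       # floor of 2 nodes keeps a warm pool
print(target_nodes(10_000))  # capped at the 512-node ceiling
```

The interesting design choice is the ceiling: a neocloud's pitch is that the ceiling is high and reached in hours, whereas traditional clouds impose quotas well below it.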

4. Cost Efficiency with Purpose-Built Pricing

Where hyperscalers often price GPU instances at a premium—factoring in legacy overhead and multi-tenant complexity—neoclouds keep things lean. Many operate with thinner margins and lower operational complexity, translating to significantly lower costs per training hour. Providers like Lambda, CoreWeave, and Voltage Park offer transparent, workload-specific pricing that aligns with actual usage. High utilization rates and tailored provisioning models keep costs in check, making neoclouds especially appealing for startups and research groups running on grant cycles or VC runway.

These advantages make neoclouds invaluable not only for startups with limited budgets but also for established enterprises seeking to accelerate their AI initiatives.

Resilience at Scale: Why Neoclouds May Outlast the AI Hype Cycle

Investing in neoclouds offers a unique opportunity to participate in the AI boom without betting on specific applications or platforms. The rapid pace of innovation means that today’s leading AI models could be eclipsed by new breakthroughs tomorrow. However, regardless of which technologies prevail, the need for robust infrastructure will remain constant.

This dynamic mirrors historical gold rushes, where equipment suppliers thrived even as individual miners faced uncertainty. By providing essential tools for AI development, neocloud providers are positioned to benefit from sustained demand across diverse industries—from healthcare and finance to entertainment and logistics.

As the AI gold rush continues, neoclouds are poised to play an increasingly central role in shaping its trajectory. Their ability to deliver cost-effective, high-performance infrastructure makes them critical enablers of innovation. At the same time, their business model—focused on empowering others rather than competing directly—ensures they remain indispensable partners in the AI ecosystem.

Looking forward, neoclouds face challenges such as supply chain constraints for GPUs and competition from hyperscalers attempting to close the performance gap. However, their agility and specialization give them a distinct edge in navigating these hurdles. In many ways, they represent the future of cloud computing: leaner, faster, and more focused on solving specific problems.

As investors and enterprises seek ways to capitalize on AI’s transformative potential, neoclouds offer a compelling proposition—one that promises steady growth amid the chaos of rapid technological change.

Economic Disruption: How Neoclouds Are Redefining Cost and Performance

The emergence of neoclouds is causing a significant economic disruption in the AI infrastructure landscape. Unlike traditional hyperscalers that offer a broad range of services, neocloud providers concentrate on delivering optimized price-performance specifically for AI workloads. This specialization translates into several key advantages: higher GPU utilization rates, bare-metal access, and the application of deep, specialist expertise. These elements combine to create a compelling economic proposition for AI developers and enterprises.

The numbers speak for themselves. Neoclouds are achieving significant cost reductions, with reports from Uptime Institute indicating as much as 66% savings on GPU instances when compared to major hyperscalers. This substantial difference stems from the ability to maximize the use of expensive GPU resources and minimize overhead. For organizations running large-scale AI training or inference tasks, this can lead to considerable savings in operational expenses.
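Applying that reported savings figure to a concrete run shows why it matters. The 66% discount is the Uptime Institute comparison cited above; the hourly rate and cluster size are illustrative assumptions:

```python
# What a 66% GPU-instance discount means for one training run.
# Hourly rate and cluster size are assumed values, not quoted prices.

HYPERSCALER_RATE = 4.00  # assumed $/GPU-hour on a hyperscaler
SAVINGS = 0.66           # reported neocloud discount vs. hyperscalers

def run_cost(num_gpus: int, hours: float, rate: float) -> float:
    """Total compute cost for a cluster at a flat hourly GPU rate."""
    return num_gpus * hours * rate

gpus, hours = 1_024, 24 * 14  # two weeks on 1,024 GPUs (hypothetical)
hyper = run_cost(gpus, hours, HYPERSCALER_RATE)
neo = run_cost(gpus, hours, HYPERSCALER_RATE * (1 - SAVINGS))
print(f"hyperscaler: ${hyper:,.0f}  neocloud: ${neo:,.0f}")
```

On these assumed numbers, a single two-week run moves from roughly $1.4M to under $0.5M, which is the difference between one experiment and three on the same budget.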

The efficiencies introduced by neoclouds are reshaping the overall economics of AI development. As models grow in complexity and require more compute power, the cost of training and deploying them has become a major barrier. By lowering these costs, neoclouds make it feasible for a wider array of organizations – from startups to established corporations – to engage in AI initiatives. This democratization of AI resources has the potential to accelerate innovation across diverse sectors, allowing more companies to harness the power of AI without breaking the bank.

Advancing Coopetition Between Neocloud Providers and Hyperscalers

The relationship between neocloud providers and traditional hyperscalers is increasingly defined by a complex blend of competition and collaboration. As the demand for AI infrastructure surges, both camps are vying for dominance in the lucrative GPU cloud market, yet their interactions are far from zero-sum, resulting in a competitive, yet symbiotic, market landscape.

Neoclouds have carved out a niche by specializing in GPU-accelerated infrastructure tailored for AI and machine learning workloads. Their agility, focus, and deep understanding of AI developers' needs allow them to offer cost-effective, high-performance solutions that challenge the broader, premium-priced offerings of hyperscalers. While hyperscalers benefit from vast economies of scale and integrated ecosystems, their diversified business models and high-margin cloud pricing strategies often result in higher prices for AI-specific resources.

Despite this competition, the two groups are increasingly intertwined. Neoclouds often position themselves not as direct competitors, but as complementary partners within enterprise multi-cloud strategies. For example, an organization might use a neocloud to train a large language model, then deploy it on a hyperscaler’s platform for inference and integration with other services. This approach allows enterprises to optimize for both performance and cost, leveraging the strengths of each provider.

Investment, Partnership, and Multi-Cloud Integration

The interplay between neoclouds and hyperscalers is further complicated by growing investment and partnership activity. Hyperscalers are not only competing with neoclouds but also investing in them and, in some cases, becoming their customers. A prominent example is Microsoft’s $10 billion commitment to CoreWeave to secure access to specialized GPU infrastructure through 2029. Such deals highlight the recognition by hyperscalers of the unique value neoclouds bring to the AI infrastructure ecosystem.

For enterprises, this dynamic is accelerating the adoption of multi-cloud strategies. By integrating neoclouds into their cloud portfolios, organizations can avoid vendor lock-in, optimize for specific workloads, and ensure access to scarce GPU resources. However, this also introduces new complexities, as enterprises must now manage interoperability and data movement across increasingly fragmented cloud environments.

Looking forward, the evolving relationship between neoclouds and hyperscalers increases the prospects (and pricing) of further M&A activity across key players. As neoclouds grow in scale and strategic importance, it is likely that some will be acquired by hyperscalers seeking to bolster their AI infrastructure capabilities and maintain competitive advantage. Such consolidation will reshape the market’s structure, potentially accelerating innovation through deeper integration, but also raising questions about pricing power and the pace of future disruption.

High Stakes Require High Investment

The capital intensity of neoclouds is staggering. Building and maintaining these specialized infrastructure platforms requires massive investments in GPUs, networking equipment, and data center facilities. To finance this expansion, neoclouds have tapped into both equity and debt markets, with some pioneering the use of GPU assets as collateral for loans. This approach allows them to leverage their hardware investments more efficiently, but also exposes them to the risk of depreciating GPU values and limited liquidity.

The market also faces significant supply chain vulnerabilities and rapid equipment obsolescence, increasing the risk of relying too heavily on any one provider. The rapid pace of innovation in chip design presents both opportunities and challenges. While new generations of GPUs promise increased performance and efficiency, they also render older hardware obsolete on extremely short timelines.

This creates constant pressure to upgrade infrastructure, potentially straining finances and exacerbating supply chain vulnerabilities. Geopolitical factors and manufacturing bottlenecks can further disrupt the supply of GPUs, as recent tariffs affecting NVIDIA show, impacting neocloud providers' ability to meet growing demand.

The Chipmaker Landscape: Navigating the Silicon Battleground of the Neocloud Era

The neocloud boom is redefining the data center ecosystem, driven by billions in venture capital and a red-hot GPU-backed debt market that’s rewriting the rules of infrastructure financing. But while the capital is flowing fast, the underlying hardware story is anything but straightforward. Supply chain constraints, hardware churn, and the specter of over-leverage hang over the sector as neocloud builders sprint to scale. At the heart of this high-stakes race sit the chipmakers — the true power brokers of the AI infrastructure gold rush.

NVIDIA holds the pole position, having effectively set the standard for AI compute with its high-performance GPUs and the proprietary CUDA software stack that developers now treat as foundational. The company’s dominance isn’t just about raw silicon; it’s about controlling the ecosystem. But as demand for AI infrastructure skyrockets, so do concerns about supply bottlenecks, pricing leverage, and the systemic risks of depending too heavily on a single vendor.

Enter AMD and Intel, both aggressively positioning themselves as viable alternatives. AMD’s Instinct accelerators have made meaningful headway, especially among hyperscalers and research labs looking for more open, programmable environments. Its embrace of open-source software and its tight integration across CPU and GPU workloads gives it an edge in environments where flexibility and long-term value matter.

Intel, meanwhile, is betting on a diversified portfolio and a vertically integrated approach. With CPUs, discrete GPUs, and dedicated AI accelerators (via Habana Labs), Intel is aiming to meet the market wherever the workload lands — from training massive models to powering real-time inference at the edge. Its growing software investments signal a deeper push to win developer mindshare, not just silicon sockets.

What’s increasingly clear is that chipmakers now play a strategic role that transcends component supply. Their influence touches everything from AI model optimization to deployment timelines and total cost of ownership. In a capital-intensive market where seconds of inference time and watts per rack can make or break a business model, silicon choices aren’t just technical — they’re existential.
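A rough sketch shows what "seconds of inference time and watts per rack" translate to in dollars per million tokens served. Every input below is an assumed value chosen only to make the arithmetic concrete:

```python
# Rough inference TCO sketch: hardware amortization plus energy cost
# per million tokens. All inputs are illustrative assumptions.

def cost_per_million_tokens(gpu_hourly_cost: float, power_kw: float,
                            electricity_per_kwh: float,
                            tokens_per_second: float) -> float:
    """Combined hardware and energy cost to serve one million tokens."""
    hourly = gpu_hourly_cost + power_kw * electricity_per_kwh
    tokens_per_hour = tokens_per_second * 3600
    return hourly / tokens_per_hour * 1_000_000

# Assumed: $2.50/GPU-hr amortized, 0.7 kW draw, $0.08/kWh, 2,500 tok/s.
print(f"${cost_per_million_tokens(2.50, 0.7, 0.08, 2500):.3f} per 1M tokens")
```

Note how sensitive the result is to throughput: doubling tokens per second halves the cost, which is why silicon choice dominates the unit economics of serving.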

For neocloud operators, the path forward demands architectural agility. Betting solely on NVIDIA may win short-term performance, but long-term resilience will require multi-vendor strategies that hedge against market shocks, broaden workload compatibility, and enhance buyer leverage. The most successful neocloud platforms will be those that understand chipmakers not just as suppliers, but as strategic partners — and occasionally, as competitive threats.

As the AI era matures, the chipmaker battleground will increasingly shape who wins the neocloud race — and who gets left behind in the silicon stampede.

Which Provides the Winning Competitive Edge: Specialization or Scale?

The rise of neoclouds has been driven by their ability to deliver specialized, high-performance infrastructure tailored for AI workloads. However, the question remains: are these advantages sustainable in the long term, or do they merely represent a transitional phase before hyperscalers catch up? For neoclouds to maintain a competitive edge, there are many potential scenarios to consider.

The Bull Case: Persistent Barriers to Entry

The argument for neoclouds’ long-term viability rests on several key factors:

Technical Complexity: Building and managing AI-optimized infrastructure requires deep expertise in GPU architecture, networking, and software. Neoclouds have cultivated this expertise over time, creating a barrier to entry that is difficult for hyperscalers to replicate quickly.

Specialization: Neoclouds focus solely on AI workloads, allowing them to optimize their infrastructure and services for the specific needs of AI developers. This specialization translates into superior performance and cost-efficiency compared to the more generalized offerings of hyperscalers.

Agility: Neoclouds tend to be smaller and more agile than hyperscalers, enabling them to adapt quickly to changing market conditions and emerging technologies. This agility is particularly valuable in the rapidly evolving field of AI.

The Bear Case: Margin Compression and Scale Advantages

Despite many advantages, neoclouds face significant challenges from hyperscalers:

Margin Compression: As hyperscalers invest more heavily in AI infrastructure and refine their offerings, they may be able to erode the price advantage currently enjoyed by neoclouds. Hyperscalers’ scale economies and ability to cross-subsidize AI services with other cloud offerings could put significant pressure on neocloud margins.

Scale Advantages: Hyperscalers possess massive economies of scale, allowing them to procure hardware at lower prices and invest more heavily in R&D. This scale advantage could enable them to leapfrog neoclouds in terms of performance and innovation.

Ecosystem Integration: Hyperscalers offer tightly integrated ecosystems of cloud services, making it easier for customers to build and deploy AI applications. Neoclouds may struggle to match this level of integration, particularly for enterprises that rely on a wide range of cloud services.

Scenarios for the Future

The future of the neocloud market is uncertain, but several scenarios are possible:

Coexistence: Neoclouds and hyperscalers coexist, with each catering to different segments of the market. Neoclouds focus on specialized AI workloads and customers who prioritize performance and cost-efficiency, while hyperscalers cater to enterprises seeking a broader range of cloud services and ecosystem integration.

Market Consolidation: Hyperscalers acquire leading neoclouds to bolster their AI infrastructure capabilities and gain access to specialized expertise. This scenario could lead to greater integration and innovation, but also raise concerns about pricing power and market competition.

Disruption: Neoclouds continue to innovate and disrupt the market, challenging the dominance of hyperscalers and attracting a growing share of AI workloads. This scenario would require neoclouds to overcome challenges related to scale, ecosystem integration, and capital access.

Ultimately, the long-term success of neoclouds will depend on their ability to differentiate themselves from hyperscalers, innovate continuously, and adapt to the rapidly evolving needs of the AI community.

Making AI More Accessible: Who’s Using Neoclouds—and Why It Matters

The neocloud market represents a dynamic and essential force driving innovation in the age of AI. By specializing in high-performance, cost-effective infrastructure, neocloud providers are not only enabling the AI revolution, but also reshaping the economics of cloud computing.

Whether as "picks and shovels" suppliers to the AI gold rush, or as competitive partners alongside hyperscalers, neoclouds are proving their enduring value in a rapidly evolving technological landscape. And as they navigate the challenges of scale, financing, and chipmaker dependencies, neoclouds are poised to continue pushing the boundaries of what's possible in AI, fostering a more accessible and innovative future for all.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Intel bets on Terafab to help it reassert itself in the AI chip race

Tesla, SpaceX, and xAI say that Terafab will be the largest chip manufacturing facility ever, outputting 1TW a year of compute power and “combining logic, memory and advanced packaging under one roof.” Intel’s ability to “design, fabricate, and package ultra-high-performance chips at scale” will help accelerate those 1TW/year ambitions, the

Read More »

Cisco joins Anthropic’s multivendor effort to secure AI software

In addition to model usage credits, Anthropic donated $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation “to enable the maintainers of open-source software to respond to this changing landscape.” “Partners will, to the extent they’re able, share information and best

Read More »

Cloud-first vs. sovereign-first: Navigating the trade-off

Encryption is often suggested as a way to address data sovereignty because the customer holds the key to protect data in motion, in use, and at rest. However, Buest noted, most regulators have not explicitly approved the use of encryption or other security measures or deemed them sufficient for compliance.

Read More »

United States and Australia meet for Mining, Minerals and Metals Investment Ministerial

We, the Australian Minister for Resources and Northern Australia, the Hon Madeleine King MP, and Secretaries and senior representatives from the United States, including Secretary of Interior Doug Burgum, Administrator of the U.S. Environmental Protection Agency Lee Zeldin, Chairman of the U.S. Export Import Bank John Jovanovic, and Assistant Secretary of Energy Audrey Robertson,  held our inaugural Mining, Minerals, and Metals Investment Ministerial in Tokyo on 14 March 2026, to advance cooperation under the landmark bilateral agreement, the United States–Australia Framework for Securing Supply in the Mining and Processing of Critical Minerals and Rare Earths (the Framework). Under the Framework, Australia and the United States are delivering concrete outcomes to strengthen, secure, and diversify critical minerals and rare earth supply chains. Within six months of agreement of the Framework, we have each taken measures to provide at least USD $1 billion in financing to key critical minerals projects. By mobilising government and private sector capital, these investments support the development of our shared industrial base and strengthen longer term supply for defence, manufacturing, and energy supply chains. To build resilience, enhance stability, and bolster economic security in support of our shared critical minerals interests, Australia and the United States today announce the establishment of the Critical Minerals Supply Security Response Group and commit to deeper cooperation between our key agencies. In line with the Framework, the Critical Minerals Supply Security Response Group, led by senior representatives from the United States Department of Energy and the Australian Department of Industry, Science and Resources, will cooperate on priority minerals and supply chain vulnerabilities and coordinate efforts to accelerate the delivery of processed minerals under the Framework. 
Australia and the United States also commit to leveraging shared policy and interagency regulatory tools and, where appropriate, investments to secure critical minerals supply, including through cooperation between Australia’s

Read More »

Energy Security for Indo-Pacific Endurance, a Global Growth Center of the 21st Century

We, the ministers and representatives of Australia, Bangladesh, Brunei, Japan, Malaysia, New Zealand, Philippines, Republic of Korea, Singapore, Timor-Leste, United States, and Vietnam met in Tokyo, Japan, on March 14–15, 2026, to hold the historic Indo-Pacific Energy Security Ministerial and Business Forum. The forum was co-hosted by the Chair and Vice Chair of the U.S. National Energy Dominance Council, Secretary of the Interior Doug Burgum, Secretary of Energy Chris Wright, and Japanese Minister of Economy, Trade and Industry Akazawa Ryosei. We affirm our shared determination to work collectively to ensure stable and secure energy supply in the Indo-Pacific region. To this end, we focused on three key themes: reliable energy for Indo-Pacific growth and security; securing energy supply chains, infrastructure and maritime routes; and enabling trade and investment. To support these goals, Ministers affirm the value of: The necessity of reliable, affordable, secure and dispatchable energy from all sources depending on each country’s situation, in meeting the region’s surging energy demand.  Promoting quality as a key procurement mechanism to mitigate risk of operational liabilities. Protecting against rising cyber threats to the security of the energy grid, critical infrastructure, vehicles, and devices. Investment in comprehensive energy infrastructure that encompasses the entire energy supply chain from upstream development facilities to downstream equipment to support an affordable, reliable, and secure energy supply including baseload electricity.  Continuing to supply affordable and reliable energy sources in the Indo-Pacific region, including through emergency response measures, to benefit both producers and consumer countries. While maintaining strong relations with current partners, expanding and diversifying energy suppliers and fuel types in order to strengthen energy security. 
Promoting transparent, long-term energy contracts that reduce market volatility. As the global economy expands, so too does demand for energy driven by AI and electrification, we, as countries committed to a free and

Read More »

Energy Department Issues Funding Opportunity to Strengthen American Critical Minerals and Materials Supply Chain

WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Critical Minerals and Energy Innovation (CMEI) and Hydrocarbons and Geothermal Energy Office (HGEO) today announced a funding opportunity of up to $69 million for technologies or processes that advance the domestic production and refining of critical materials. Projects selected through this Notice of Funding Opportunity (NOFO) will address the greatest technical obstacles to a stronger critical materials supply chain. “This funding will help establish a more secure and affordable supply of the critical minerals and materials that are foundational to American energy dominance, national security, and industrial competitiveness,” said Assistant Secretary of Energy (EERE) Audrey Robertson. DOE is seeking projects that bridge the gap between bench-scale innovations and commercially viable technologies. Selected project teams will form industry-led partnerships and conduct research and development with support from the U.S. national laboratories. The NOFO, which is part of DOE’s Critical Minerals and Materials Accelerator Program and jointly funded by CMEI’s Advanced Materials and Manufacturing Technologies Office and HGEO’s Office of Geothermal, has three primary topic areas: Production and material efficiency for critical materials including rare earth elements Processes to refine and alloy gallium, gallium nitride, germanium, and silicon carbide Cost-competitive direct lithium extraction, separation, and processing CMEI will host an informational webinar on April 16, 2026, to discuss the NOFO and application requirements. Letters of intent are due on April 21, 2026, by 5 p.m. ET. Deadlines for full applications will be staggered based on topic area, starting in May 2026. For more details on sub-topics and deadlines, visit the NOFO landing page. The Critical Minerals and Materials Accelerator is one of several programs developed through DOE’s Critical Materials Collaborative. 
This NOFO is part of $1 billion in critical materials funding announced by DOE in August 2025, and follows the Manufacturing Deployment Office’s announcement

Read More »

Latin America returns to the energy security conversation at CERAWeek

With geopolitical risk central to conversations about energy, and with long-cycle supply once again in focus, Latin America’s mix of hydrocarbons and export potential drew renewed attention at CERAWeek by S&P Global in Houston. Argentina, resource story to export platform Among the regional stories, Argentina stood out as Vaca Muerta was no longer discussed simply as a large unconventional resource, but whether the country could turn resource quality into sustained export capacity.  Country officials talked about scale: more operators, more services, more infrastructure, and a larger industrial base around the unconventional play. Daniel González, Vice Minister of Energy and Mining for Argentina, put it plainly: “The time has come to expand the Vaca Muerta ecosystem.” What is at stake now is not whether the basin works, but whether the country can build enough above-ground capacity and regulatory consistency to keep development moving. Horacio Marín, chairman and chief executive officer of YPF, offered an expansive version of that argument. He said Argentina’s energy exports could reach $50 billion/year by 2031, backed by roughly $130 billion in cumulative investment in oil, LNG, and transportation infrastructure. He said Argentine crude output could reach 1 million b/d by end-2026. He said Argentina wants to be seen less as a recurrent frontier story and more as a future supplier with scale. “The time to invest in Vaca Muerta is now,” Marín said. The LNG piece is starting to take shape. Eni, YPF, and XRG signed a joint development agreement in February to move Argentina LNG forward, with a first phase planned at 12 million tonnes/year. Southern Energy—backed by PAE, YPF, Pampa Energía, Harbour Energy, and Golar LNG—holds a long-term agreement with SEFE for 2 million tonnes/year over 8 years. The movement by global standards is early-stage and relatively modest, but it adds to Argentina’s export

Read More »

Market Focus: LNG supply shocks expose limited market flexibility

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } In this Market Focus episode of the Oil & Gas Journal ReEnterprised podcast, Conglin Xu, managing editor, economics, takes a look into the LNG market shock caused by the effective closure of the Strait of Hormuz and the sudden loss of Qatari LNG supply as the Iran war continues. Xu speaks with Edward O’Toole, director of global gas analysis, RBAC Inc., to examine how these disruptions are intensifying global supply constraints at a time when European inventories were already under pressure following a colder-than-average winter and weaker storage levels. Drawing on RBAC’s G2M2 global gas market model, O’Toole outlines disruption scenarios analyzed in the firm’s recent report and explains how current events align with their findings. 
With global LNG production already operating near maximum utilization, the market response is being driven by higher prices and reduced consumption. Europe faces sharper price pressure due to storage refill needs, while Asian markets are expected to see greater demand reductions as consumers switch fuels. O’Toole underscores the importance of scenario-based modeling and supply diversification as geopolitical risk exposes structural vulnerabilities in the LNG market—offering insights for stakeholders navigating an increasingly uncertain global gas market.

Read More »

Libya’s NOC, Chevron sign MoU for technical study for offshore Block NC146

The National Oil Corp. of Libya (NOC) signed a memorandum of understanding (MoU) with Chevron Corp. to conduct a comprehensive technical study of offshore Block NC146.
The block is an unexplored area with “encouraging geological indicators that could lead to significant discoveries, helping to strengthen national reserves,” the NOC quoted Chairman Masoud Suleman as saying, adding that the partnership is “a message of confidence in the Libyan investment environment and evidence of the return of major companies to work and explore promising opportunities in our country.”

According to the NOC, Libya produces 1.4 million b/d of oil and aims to increase production to 2 million b/d in the coming 3-5 years, and then to 3 million b/d, following years of instability that impacted the country’s output.

Chevron is working to add to its diverse exploration and production portfolio in the Mediterranean and Africa and continues to assess potential future opportunities in the region. The operator entered Libya earlier this year after being designated a winning bidder for Contract Area 106 in the Sirte basin in the 2025 Libyan Bid Round. That followed the January 2026 signing of a

Read More »

Aria Networks raises $125M and debuts its approach for AI-optimized networks

That embedded telemetry feeds adaptive tuning of Dynamic Load Balancing parameters, Data Center Quantized Congestion Notification (DCQCN), and failover logic without waiting for a threshold breach or a manual intervention.

The platform architecture is layered. At the lowest levels, agents react in microseconds to link-level events such as transceiver flaps, rerouting leaf-spine traffic in milliseconds. At higher layers, agents make more strategic decisions about flow placement across the cluster. At the cloud layer, a large language model-based agent surfaces correlated insights to operators in natural language, allowing them to ask questions about specific jobs or alert conditions and receive context-aware responses.

Karam argued that simply bolting an LLM onto an existing architecture does not deliver the same result. “If you ask it to do anything, it could hallucinate and bring down the network,” he said. “It doesn’t have any of the context or the data that’s required for this approach to be made safe.” Aria also exposes an MCP server, allowing external systems such as job schedulers and LLM routers to query network state directly and integrate it into their own decision-making.

MFU and token efficiency as the target metrics

Traditional networking is often evaluated in terms of bandwidth and latency. Aria is centering its platform around two metrics: Model FLOPS Utilization (MFU) and token efficiency. MFU is defined as the ratio of achieved FLOPS per accelerator to the theoretical peak. In practice, Karam said, MFU for training workloads typically runs between 33% and 45%, and inference often comes in below 30%. “The network has a major impact on the MFU, and therefore the token efficiency, because the network touches every aspect, every other component in your cluster,” Karam said.
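The MFU metric described above is simple to express in code. A minimal sketch, where the function name and the sample figures are illustrative rather than taken from Aria’s platform:

```python
def model_flops_utilization(achieved_flops: float, peak_flops: float) -> float:
    """MFU: achieved FLOPS per accelerator divided by its theoretical peak."""
    if peak_flops <= 0:
        raise ValueError("peak_flops must be positive")
    return achieved_flops / peak_flops

# A hypothetical accelerator with a 1,000-TFLOPS peak sustaining 400 TFLOPS:
mfu = model_flops_utilization(400e12, 1000e12)
print(f"MFU: {mfu:.0%}")  # 40%, inside the 33-45% training range cited above
```

The same ratio applied to a sub-30% inference workload shows why Aria treats the network, rather than raw bandwidth, as the lever: any stall the fabric introduces lowers achieved FLOPS and drags MFU down directly.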

Read More »

New v2 UALink specification aims to catch up to NVLink

But given there are no products currently available using UALink 1.0, UALink 2.0 might be viewed as a premature launch.

Need to play catch up

David Harold, senior analyst with Jon Peddie Research, was guarded in his reaction. “While 2.0 is a significant step forward from 1.0, we need to bear in mind that even 1.0 solutions aren’t shipping yet – they aren’t due until later this year. So, Nvidia is way ahead of the open alternatives on connectivity, indeed ahead of the proprietary or Ethernet based solutions too,” he said.

What this means, he added, is that non-Nvidia alternatives are currently lagging in the market. “They need to play catch up on several fronts, not just networking. … I can’t think of a single shipping product that meaningfully has advantages over a Nvidia solution,” he said. “Ultimately UALink remains desirable since it will enable heterogeneous, multi-vendor environments but it’s quite a way behind NVLink today.”

There are plenty of signs that organizations will find it hard to break free of the Nvidia dominance, however. A couple of months ago, RISC-V pioneer SiFive signed a deal with Nvidia to incorporate Nvidia NVLink Fusion into its data center products, a departure for RISC-V companies. According to Harold, other companies could be joining it. “Custom ASIC company MediaTek is an NVLink partner, and they told me last week that they are planning to integrate it directly into next-generation custom silicon for AI applications,” he said. “This will enable a wider range of companies to use NVLink as their high-speed interconnect.”

Other options

And, Harold noted, Nvidia is already looking at other options. “Nvidia is now shifting to look at the copper limit for networking speed, with an interest in using optical connectivity instead,” said Harold.

Read More »

Nvidia’s SchedMD acquisition puts open-source AI scheduling under scrutiny

Is the concern valid?

Dr. Danish Faruqui, CEO of Fab Economics, a US-based AI hardware and datacenter advisory, said the risk was real. “The skepticism that Nvidia may prioritize its own hardware in future software updates, potentially delaying or under-optimizing support for rivals, is a feasible outcome,” he said.

As the primary developer, Nvidia now controls Slurm’s official development roadmap and code review process, Faruqui said, “which could influence how quickly competing chips are integrated on new development or continuous improvement elements.” Owning the control plane alongside GPUs and networking infrastructure such as InfiniBand, he added, allows Nvidia to create a tightly vertically integrated stack that can lead to what he described as “shallow moats, where advanced features are only available or performant on Nvidia hardware.”

One concrete test of that, industry observers say, will be how quickly Nvidia integrates support for AMD’s next-generation chips into Slurm’s codebase compared with how quickly it integrates its own forthcoming hardware and networking technologies, such as InfiniBand.

Does the Bright Computing precedent hold?

Analysts point to Nvidia’s 2022 acquisition of Bright Computing as a reference point, saying the software became optimized for Nvidia chips in ways that disadvantaged users of competing hardware. Nvidia disputed that characterization, saying Bright Computing supports “nearly any CPU or GPU-accelerated cluster.”

Rawat said the comparison was instructive but imperfect. “Nvidia’s acquisition of Bright Computing highlights its preference for vertical integration, embedding Bright tightly into DGX and AI Factory stacks rather than maintaining a neutral, multi-vendor orchestration role,” he said. “This reflects a broader strategic pattern — Nvidia seeks to control the full-stack AI infrastructure experience.”

Read More »

Two New England states say no to new data centers

It’s getting harder and harder for governments to ignore the impact that data centers are having on their communities, consuming vast amounts of water and driving up electricity prices, experts say.

According to a Pew Research Center analysis, data centers consumed 183 terawatt-hours of electricity in 2024, more than 4% of total U.S. electricity use. That demand is projected to more than double to 426 terawatt-hours by 2030. The impact is significant. In 2023, data centers consumed about 26% of Virginia’s electricity supply, although Virginia is notable for having an extremely dense collection of data centers.

Alan Howard, senior analyst for infrastructure at Omdia, says he is not surprised at all. “The amount of national press coverage regarding what is arguably a limited number of data center ‘horror’ stories has many jurisdictions and states spooked over the potential impacts data center projects might have,” he said. It’s an evolution that’s been coming for some time whereby local legislators have embraced the idea that they don’t want to learn the hard way as others already have, he argues.

“All that said, it seems unlikely that there will be broad bans on data center development that would cripple the industry. There’s lots of places to go in the U.S. and developers have warmed up to siting projects in places amenable to their needs, although not ideally convenient,” said Howard.

Read More »

Nscale Expands AI Factory Strategy With Power, Platform, and Scale

Nscale has moved quickly from startup to serious contender in the race to build infrastructure for the AI era. Founded in 2024, the company has positioned itself as a vertically integrated “neocloud” operator, combining data center development, GPU fleet ownership, and a software stack designed to deliver large-scale AI compute. That model has helped it attract backing from investors including Nvidia, and in early March 2026 the company raised another $2 billion at a reported $14.6 billion valuation. Reuters has described Nscale’s approach as owning and operating its own data centers, GPUs, and software stack to support major customers including Microsoft and OpenAI.

What makes Nscale especially relevant now is that it is no longer content to operate as a cloud intermediary or capacity provider. Over the past year, the company has increasingly framed itself as an AI hyperscaler and AI factory builder, seeking to combine land, power, data center shells, GPU procurement, customer offtake, and software services into a single integrated platform. Its acquisition of American Intelligence & Power Corporation, or AIPCorp, is the clearest signal yet of that shift, bringing energy infrastructure directly into the center of Nscale’s business model.

The AIPCorp transaction is significant because it gives Nscale more than additional development capacity. The company said the deal includes the Monarch Compute Campus in Mason County, West Virginia, a site of up to 2,250 acres with a state-certified AI microgrid and a power runway it says can scale beyond 8 gigawatts. Nscale also said the acquisition establishes a new division, Nscale Energy & Power, headquartered in Houston, extending its platform further into power development.

That positioning reflects a broader shift in the AI infrastructure market. The central bottleneck is no longer simply access to GPUs. It is the ability to assemble power, cooling, land, permits, data center

Read More »

Google Research touts memory-compression breakthrough for AI processing

The last time the market witnessed a shakeup like this was China’s DeepSeek, but doubts emerged quickly about its efficacy. Developers found DeepSeek’s efficiency gains required deep architectural decisions that had to be built in from the start. TurboQuant requires no retraining or fine-tuning. You just drop it straight into existing inference pipelines, at least in theory. If it works in production systems with no retrofitting, then data center operators will get tremendous performance gains on existing hardware, without having to throw more hardware at the performance problem.

However, analysts urge caution before jumping to conclusions. “This is a research breakthrough, not a shipping product,” said Alex Cordovil, research director for physical infrastructure at The Dell’Oro Group. “There’s often a meaningful gap between a published paper and real-world inference workloads.” Dell’Oro also notes that efficiency gains in AI compute tend to get consumed by more demand, a dynamic known as the Jevons paradox: “Any freed-up capacity would likely be absorbed by frontier models expanding their capabilities rather than reducing their hardware footprint.”

Jim Handy, president of Objective Analysis, agrees on that second part. “Hyperscalers won’t cut their spending – they’ll just spend the same amount and get more bang for their buck,” he said. “Data centers aren’t looking to reach a certain performance level and subsequently stop spending on AI. They’re looking to out-spend each other to gain market dominance. This won’t change that.”

Google plans to present a paper outlining TurboQuant at the ICLR conference in Rio de Janeiro, running from April 23 through April 27.
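TurboQuant’s specific algorithm isn’t described here, but the general idea behind this class of memory compression, storing low-bit integer codes plus a scale factor instead of full-precision floats, can be sketched in a few lines. This is a generic symmetric int8 round-trip, not Google’s method; the values and function names are illustrative only:

```python
def quantize_int8(xs):
    """Generic symmetric per-tensor int8 quantization (NOT TurboQuant itself):
    map each float to a signed byte in [-127, 127] plus one shared scale."""
    scale = max(abs(v) for v in xs) / 127.0 or 1.0  # guard against all-zero input
    codes = [max(-127, min(127, round(v / scale))) for v in xs]
    return codes, scale

def dequantize_int8(codes, scale):
    """Reconstruct approximate floats from the codes and the shared scale."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.031, 0.9999]
codes, scale = quantize_int8(weights)
restored = dequantize_int8(codes, scale)
# Each fp32 value now needs 1 byte instead of 4 (a ~4x memory reduction),
# at the cost of a small per-element reconstruction error:
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Real systems quantize per-channel or per-block rather than per-tensor, and, as the analysts quoted above stress, the hard part is doing this on production inference workloads without hurting model quality or requiring retraining.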

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.

What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »