Deep Data Center: Neoclouds as the ‘Picks and Shovels’ of the AI Gold Rush

In 1849, the discovery of gold in California ignited a frenzy, drawing prospectors from around the world in pursuit of quick fortune. While few struck it rich digging and sifting dirt, a different class of entrepreneurs quietly prospered: those who supplied the miners with the tools of the trade. From picks and shovels to tents and provisions, these providers became indispensable to the gold rush, profiting handsomely regardless of who found gold.

Today, a new gold rush is underway, in pursuit of artificial intelligence. And just like the days of yore, the real fortunes may lie not in the gold itself, but in the infrastructure and equipment that enable its extraction. This is where neocloud players and chipmakers come in, positioning themselves as the fundamental enablers of the AI revolution.

Neoclouds: The Essential Tools and Implements of AI Innovation

The AI boom has sparked a frenzy of innovation, investment, and competition. From generative AI applications like ChatGPT to autonomous systems and personalized recommendations, AI is rapidly transforming industries. Yet, behind every groundbreaking AI model lies an unsung hero: the infrastructure powering it. Enter neocloud providers—the specialized cloud platforms delivering the GPU horsepower that fuels AI’s meteoric rise. Let’s examine how neoclouds represent the “picks and shovels” of the AI gold rush, supplying the essential backbone of AI innovation.

Neoclouds are emerging as indispensable players in the AI ecosystem, offering tailored solutions for compute-intensive workloads such as training large language models (LLMs) and performing high-speed inference. Unlike traditional hyperscalers (e.g., AWS, Azure, Google Cloud), which cater to a broad range of use cases, neoclouds focus exclusively on optimizing infrastructure for AI and machine learning applications. This specialization allows them to deliver superior performance at a lower cost, making them the go-to choice for startups, enterprises, and research institutions alike.

The analogy to historical gold rushes is striking: just as miners relied on suppliers of picks and shovels to extract value from the earth, today’s AI pioneers depend on neocloud providers and chipmakers to access the computational resources needed to unlock insights and drive innovation. Neoclouds don’t compete in creating AI applications themselves; instead, they profit by enabling others to do so. This business model positions them as foundational players in the AI economy—profiting regardless of which companies or applications ultimately dominate the market.

Why Neoclouds Are Surging: Four Key Advantages

As AI development accelerates, the demand for compute is outpacing even the hyperscale cloud’s capacity to deliver. Training a foundation model like GPT-4 can require tens of thousands of GPUs running continuously for weeks—something traditional providers weren’t architected to support at scale. Enter the neoclouds: leaner, purpose-built platforms designed to meet the needs of modern AI workloads with greater precision and speed.

Here’s how they’re closing the gap:

1. Specialized Hardware

Neocloud providers are laser-focused on offering access to the newest and most powerful GPUs—often before hyperscalers can make them widely available. NVIDIA’s H100 and A100 accelerators, crucial for training and inference, are the cornerstone of these platforms. Many neoclouds go a step further, adding liquid-cooled racks, ultra-low-latency interconnects, and AI-specific storage tiers designed to keep pace with multi-petabyte datasets. For cutting-edge AI labs and fast-moving startups, this means the difference between weeks and months in development timelines.

2. Bare-Metal Performance

By eliminating the virtualization layers common in general-purpose clouds, neoclouds give users direct access to raw compute power. This bare-metal approach reduces latency and avoids the “noisy neighbor” problem, enabling highly deterministic performance—crucial when fine-tuning large language models or orchestrating tightly coupled GPU workloads. For teams pushing the edge of performance, every clock cycle matters, and neoclouds are delivering those cycles unfiltered.
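To make the “noisy neighbor” point concrete, here is a tiny, illustrative benchmark sketch in Python: it times the same matrix multiply repeatedly and reports run-to-run jitter, the variance that shared, virtualized instances tend to show more of than bare metal. The workload size and sample count are assumptions for illustration, not a provider benchmark.

```python
# Illustrative jitter measurement: repeat an identical compute-bound task and
# report how much run times vary. Wider spread suggests contention from
# co-tenants or virtualization overhead. Not a formal benchmark.
import time
import statistics
import numpy as np

def timed_matmul(n: int = 2048) -> float:
    """Time one n x n float32 matrix multiply."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    start = time.perf_counter()
    a @ b
    return time.perf_counter() - start

samples = [timed_matmul() for _ in range(20)]
mean = statistics.mean(samples)
jitter = statistics.stdev(samples) / mean

print(f"mean {mean * 1e3:.1f} ms, relative jitter {jitter:.1%}")
```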

3. Scalability on Demand

AI R&D is rarely linear. One month you’re iterating on small models, and the next you’re scaling to train a 70-billion-parameter transformer. Neocloud infrastructure is designed to expand and contract with those demands—supporting everything from a few nodes to full-scale superclusters. Unlike traditional clouds, which often impose capacity planning constraints or quotas, neoclouds thrive on elasticity, provisioning capacity dynamically and often within hours rather than weeks.
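As a rough illustration of API-driven elasticity, the Python sketch below provisions and later releases a GPU cluster. The endpoint, payload fields, and provider are hypothetical; each neocloud exposes its own interface, so treat this as a shape of the workflow rather than any vendor’s actual API.

```python
import requests

# Hypothetical neocloud provisioning API. The base URL, routes, and JSON
# fields are illustrative assumptions, not a real provider's interface.
API_BASE = "https://api.example-neocloud.com/v1"
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def provision_gpu_cluster(gpu_type: str, node_count: int) -> str:
    """Request a GPU cluster and return its cluster ID."""
    resp = requests.post(
        f"{API_BASE}/clusters",
        headers=HEADERS,
        json={
            "gpu_type": gpu_type,   # e.g. "H100"
            "nodes": node_count,    # a few nodes or a full supercluster
            "interconnect": "infiniband",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["cluster_id"]

def release_cluster(cluster_id: str) -> None:
    """Tear the cluster down when the training run finishes."""
    resp = requests.delete(f"{API_BASE}/clusters/{cluster_id}",
                           headers=HEADERS, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    cluster = provision_gpu_cluster("H100", node_count=16)
    print(f"Provisioned cluster {cluster}")
    # ... run training, then scale back down ...
    release_cluster(cluster)
```

The point of the pattern is that capacity is a transient, programmable resource: scale up for the 70-billion-parameter run, then release it, rather than holding quota you rarely use.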

4. Cost Efficiency with Purpose-Built Pricing

Where hyperscalers often price GPU instances at a premium—factoring in legacy overhead and multi-tenant complexity—neoclouds keep things lean. Many operate with thinner margins and lower operational complexity, translating to significantly lower costs per training hour. Providers like CoreWeave, Lambda, and Voltage Park offer transparent, workload-specific pricing that aligns with actual usage. High utilization rates and tailored provisioning models keep costs in check, making neoclouds especially appealing for startups and research groups running on grant cycles or VC runway.

These advantages make neoclouds invaluable not only for startups with limited budgets but also for established enterprises seeking to accelerate their AI initiatives.

Resilience at Scale: Why Neoclouds May Outlast the AI Hype Cycle

Investing in neoclouds offers a unique opportunity to participate in the AI boom without betting on specific applications or platforms. The rapid pace of innovation means that today’s leading AI models could be eclipsed by new breakthroughs tomorrow. However, regardless of which technologies prevail, the need for robust infrastructure will remain constant.

This dynamic mirrors historical gold rushes, where equipment suppliers thrived even as individual miners faced uncertainty. By providing essential tools for AI development, neocloud providers are positioned to benefit from sustained demand across diverse industries—from healthcare and finance to entertainment and logistics.

As the AI gold rush continues, neoclouds are poised to play an increasingly central role in shaping its trajectory. Their ability to deliver cost-effective, high-performance infrastructure makes them critical enablers of innovation. At the same time, their business model—focused on empowering others rather than competing directly—ensures they remain indispensable partners in the AI ecosystem.

Looking forward, neoclouds face challenges such as supply chain constraints for GPUs and competition from hyperscalers attempting to close the performance gap. However, their agility and specialization give them a distinct edge in navigating these hurdles. In many ways, they represent the future of cloud computing: leaner, faster, and more focused on solving specific problems.

As investors and enterprises seek ways to capitalize on AI’s transformative potential, neoclouds offer a compelling proposition—one that promises steady growth amid the chaos of rapid technological change.

Economic Disruption: How Neoclouds Are Redefining Cost and Performance

The emergence of neoclouds is causing a significant economic disruption in the AI infrastructure landscape. Unlike traditional hyperscalers that offer a broad range of services, neocloud providers concentrate on delivering optimized price-performance specifically for AI workloads. This specialization translates into several key advantages: higher GPU utilization rates, bare-metal access, and the application of deep, specialist expertise. These elements combine to create a compelling economic proposition for AI developers and enterprises.

The numbers speak for themselves. Neoclouds are achieving significant cost reductions, with reports from Uptime Institute indicating as much as 66% savings on GPU instances when compared to major hyperscalers. This substantial difference stems from the ability to maximize the use of expensive GPU resources and minimize overhead. For organizations running large-scale AI training or inference tasks, this can lead to considerable savings in operational expenses.
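To see what a 66% saving means at training scale, here is a back-of-the-envelope calculation in Python. The hourly rates are illustrative assumptions, not quoted prices; only the 66% gap is taken from the Uptime Institute figure above.

```python
# Back-of-the-envelope training-cost comparison. Hourly rates below are
# illustrative assumptions for an H100-class GPU, not quoted prices.
HYPERSCALER_RATE = 6.00  # $/GPU-hour (assumed)
NEOCLOUD_RATE = 2.04     # $/GPU-hour (assumed, ~66% below the hyperscaler)

gpus = 512               # cluster size for a mid-sized training run
hours = 24 * 14          # two weeks of continuous training

hyperscaler_cost = HYPERSCALER_RATE * gpus * hours
neocloud_cost = NEOCLOUD_RATE * gpus * hours
savings = 1 - neocloud_cost / hyperscaler_cost

print(f"Hyperscaler: ${hyperscaler_cost:,.0f}")   # ~$1.03M
print(f"Neocloud:    ${neocloud_cost:,.0f}")      # ~$0.35M
print(f"Savings:     {savings:.0%}")              # 66%
```

Under these assumed rates, a single two-week, 512-GPU run costs roughly $1 million on a hyperscaler versus about $350,000 on a neocloud, which is the scale of difference that moves budget decisions.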

The efficiencies introduced by neoclouds are reshaping the overall economics of AI development. As models grow in complexity and require more compute power, the cost of training and deploying them has become a major barrier. By lowering these costs, neoclouds make it feasible for a wider array of organizations – from startups to established corporations – to engage in AI initiatives. This democratization of AI resources has the potential to accelerate innovation across diverse sectors, allowing more companies to harness the power of AI without breaking the bank.

Advancing Coopetition Between Neocloud Providers and Hyperscalers

The relationship between neocloud providers and traditional hyperscalers is increasingly defined by a complex blend of competition and collaboration. As the demand for AI infrastructure surges, both camps are vying for dominance in the lucrative GPU cloud market, yet their interactions are far from zero-sum, resulting in a competitive, yet symbiotic, market landscape.

Neoclouds have carved out a niche by specializing in GPU-accelerated infrastructure tailored for AI and machine learning workloads. Their agility, focus, and deep understanding of AI developers’ needs allow them to offer cost-effective, high-performance solutions that challenge the broader, premium-priced offerings of hyperscalers. While hyperscalers benefit from vast economies of scale and integrated ecosystems, their diversified business models and high-margin pricing strategies often result in higher prices for AI-specific resources.

Despite this competition, the two groups are increasingly intertwined. Neoclouds often position themselves not as direct competitors, but as complementary partners within enterprise multi-cloud strategies. For example, an organization might use a neocloud to train a large language model, then deploy it on a hyperscaler’s platform for inference and integration with other services. This approach allows enterprises to optimize for both performance and cost, leveraging the strengths of each provider.

Investment, Partnership, and Multi-Cloud Integration

The interplay between neoclouds and hyperscalers is further complicated by growing investment and partnership activity. Hyperscalers are not only competing with neoclouds but also investing in them and, in some cases, becoming their customers. A prominent example is Microsoft’s $10 billion commitment to CoreWeave to secure access to specialized GPU infrastructure through 2029. Such deals highlight the recognition by hyperscalers of the unique value neoclouds bring to the AI infrastructure ecosystem.

For enterprises, this dynamic is accelerating the adoption of multi-cloud strategies. By integrating neoclouds into their cloud portfolios, organizations can avoid vendor lock-in, optimize for specific workloads, and ensure access to scarce GPU resources. However, this also introduces new complexities, as enterprises must now manage interoperability and data movement across increasingly fragmented cloud environments.

Looking forward, the evolving relationship between neoclouds and hyperscalers increases the prospects (and pricing) of further M&A activity across key players. As neoclouds grow in scale and strategic importance, it is likely that some will be acquired by hyperscalers seeking to bolster their AI infrastructure capabilities and maintain competitive advantage. Such consolidation will reshape the market’s structure, potentially accelerating innovation through deeper integration, but also raising questions about pricing power and the pace of future disruption.

High Stakes Require High Investment

The capital intensity of neoclouds is staggering. Building and maintaining these specialized infrastructure platforms requires massive investments in GPUs, networking equipment, and data center facilities. To finance this expansion, neoclouds have tapped into both equity and debt markets, with some pioneering the use of GPU assets as collateral for loans. This approach allows them to leverage their hardware investments more efficiently, but also exposes them to the risk of depreciating GPU values and limited liquidity.
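A minimal sketch of that collateral risk, assuming straight-line depreciation over four years and a 70% loan-to-value advance; both figures are illustrative assumptions rather than market data.

```python
# Sketch of GPU-backed lending risk under straight-line depreciation.
# All figures are illustrative assumptions, not market data.
PURCHASE_PRICE = 30_000   # $ per GPU (assumed)
USEFUL_LIFE_YEARS = 4     # assumed depreciation schedule
LOAN_TO_VALUE = 0.70      # lender advances 70% of initial value (assumed)

fleet_size = 10_000

def fleet_value(age_years: float) -> float:
    """Straight-line book value of the GPU fleet at a given age."""
    remaining = max(0.0, 1 - age_years / USEFUL_LIFE_YEARS)
    return PURCHASE_PRICE * fleet_size * remaining

loan = LOAN_TO_VALUE * fleet_value(0)

for age in (0, 1, 2, 3):
    value = fleet_value(age)
    print(f"Year {age}: fleet value ${value / 1e6:,.0f}M, "
          f"loan ${loan / 1e6:,.0f}M, collateral cover {value / loan:.2f}x")
```

Under these assumptions the fleet stops covering the loan partway through year two, which is why lenders and operators watch GPU resale values and refresh cycles so closely; a new chip generation that shortens the useful life compounds the squeeze.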

The market also faces significant supply chain vulnerabilities and rapid equipment obsolescence, increasing the risk of relying too heavily on any one provider. The fast pace of innovation in chip design presents both opportunities and challenges. While new generations of GPUs promise increased performance and efficiency, they also render older hardware obsolete on extremely short timelines.

This creates constant pressure to upgrade infrastructure, potentially straining finances and exacerbating supply chain vulnerabilities. Geopolitical factors and manufacturing bottlenecks can further disrupt the supply of GPUs, as recent tariffs affecting NVIDIA have shown, impairing neocloud providers’ ability to meet growing demand.

The Chipmaker Landscape: Navigating the Silicon Battleground of the Neocloud Era

The neocloud boom is redefining the data center ecosystem, driven by billions in venture capital and a red-hot GPU-backed debt market that’s rewriting the rules of infrastructure financing. But while the capital is flowing fast, the underlying hardware story is anything but straightforward. Supply chain constraints, hardware churn, and the specter of over-leverage hang over the sector as neocloud builders sprint to scale. At the heart of this high-stakes race sit the chipmakers — the true power brokers of the AI infrastructure gold rush.

NVIDIA holds the pole position, having effectively set the standard for AI compute with its high-performance GPUs and the proprietary CUDA software stack that developers now treat as foundational. The company’s dominance isn’t just about raw silicon; it’s about controlling the ecosystem. But as demand for AI infrastructure skyrockets, so do concerns about supply bottlenecks, pricing leverage, and the systemic risks of depending too heavily on a single vendor.

Enter AMD and Intel, both aggressively positioning themselves as viable alternatives. AMD’s Instinct accelerators have made meaningful headway, especially among hyperscalers and research labs looking for more open, programmable environments. Its embrace of open-source software and its tight integration across CPU and GPU workloads gives it an edge in environments where flexibility and long-term value matter.

Intel, meanwhile, is betting on a diversified portfolio and a vertically integrated approach. With CPUs, discrete GPUs, and dedicated Gaudi AI accelerators (via its Habana Labs acquisition), Intel is aiming to meet the market wherever the workload lands — from training massive models to powering real-time inference at the edge. Its growing software investments signal a deeper push to win developer mindshare, not just silicon sockets.

What’s increasingly clear is that chipmakers now play a strategic role that transcends component supply. Their influence touches everything from AI model optimization to deployment timelines and total cost of ownership. In a capital-intensive market where seconds of inference time and watts per rack can make or break a business model, silicon choices aren’t just technical — they’re existential.

For neocloud operators, the path forward demands architectural agility. Betting solely on NVIDIA may win short-term performance, but long-term resilience will require multi-vendor strategies that hedge against market shocks, broaden workload compatibility, and enhance buyer leverage. The most successful neocloud platforms will be those that understand chipmakers not just as suppliers, but as strategic partners — and occasionally, as competitive threats.

As the AI era matures, the chipmaker battleground will increasingly shape who wins the neocloud race — and who gets left behind in the silicon stampede.

Which Provides the Winning Competitive Edge: Specialization or Scale?

The rise of neoclouds has been driven by their ability to deliver specialized, high-performance infrastructure tailored for AI workloads. However, the question remains: are these advantages sustainable in the long term, or do they merely represent a transitional phase before hyperscalers catch up? To judge whether neoclouds can maintain a competitive edge, several scenarios are worth considering.

The Bull Case: Persistent Barriers to Entry

The argument for neoclouds’ long-term viability rests on several key factors:

Technical Complexity: Building and managing AI-optimized infrastructure requires deep expertise in GPU architecture, networking, and software. Neoclouds have cultivated this expertise over time, creating a barrier to entry that is difficult for hyperscalers to replicate quickly.

Specialization: Neoclouds focus solely on AI workloads, allowing them to optimize their infrastructure and services for the specific needs of AI developers. This specialization translates into superior performance and cost-efficiency compared to the more generalized offerings of hyperscalers.

Agility: Neoclouds tend to be smaller and more agile than hyperscalers, enabling them to adapt quickly to changing market conditions and emerging technologies. This agility is particularly valuable in the rapidly evolving field of AI.

The Bear Case: Margin Compression and Scale Advantages

Despite these advantages, neoclouds face significant challenges from hyperscalers:

Margin Compression: As hyperscalers invest more heavily in AI infrastructure and refine their offerings, they may be able to erode the price advantage currently enjoyed by neoclouds. Hyperscalers’ scale economies and ability to cross-subsidize AI services with other cloud offerings could put significant pressure on neocloud margins.

Scale Advantages: Hyperscalers possess massive economies of scale, allowing them to procure hardware at lower prices and invest more heavily in R&D. This scale advantage could enable them to leapfrog neoclouds in terms of performance and innovation.

Ecosystem Integration: Hyperscalers offer tightly integrated ecosystems of cloud services, making it easier for customers to build and deploy AI applications. Neoclouds may struggle to match this level of integration, particularly for enterprises that rely on a wide range of cloud services.

Scenarios for the Future

The future of the neocloud market is uncertain, but several scenarios are possible:

Coexistence: Neoclouds and hyperscalers coexist, with each catering to different segments of the market. Neoclouds focus on specialized AI workloads and customers who prioritize performance and cost-efficiency, while hyperscalers cater to enterprises seeking a broader range of cloud services and ecosystem integration.

Market Consolidation: Hyperscalers acquire leading neoclouds to bolster their AI infrastructure capabilities and gain access to specialized expertise. This scenario could lead to greater integration and innovation, but also raise concerns about pricing power and market competition.

Disruption: Neoclouds continue to innovate and disrupt the market, challenging the dominance of hyperscalers and attracting a growing share of AI workloads. This scenario would require neoclouds to overcome challenges related to scale, ecosystem integration, and capital access.

Ultimately, the long-term success of neoclouds will depend on their ability to differentiate themselves from hyperscalers, innovate continuously, and adapt to the rapidly evolving needs of the AI community.

Making AI More Accessible: Who’s Using Neoclouds—and Why It Matters

The neocloud market represents a dynamic and essential force driving innovation in the age of AI. By specializing in high-performance, cost-effective infrastructure, neocloud providers are not only enabling the AI revolution, but also reshaping the economics of cloud computing.

Whether as “picks and shovels” suppliers to the AI gold rush, or as competitive partners alongside hyperscalers, neoclouds are proving their enduring value in a rapidly evolving technological landscape. And as they navigate the challenges of scale, financing, and chipmaker dependencies, the neoclouds are poised to continue pushing the boundaries of what’s possible in AI, fostering a more accessible and innovative future for all.
