Forget About Cloud Computing. On-Premises Is All the Rage Again

Ten years ago, everybody was fascinated by the cloud. It was the new thing, and companies that adopted it rapidly saw tremendous growth. Salesforce, for example, positioned itself as a pioneer of this technology and saw great wins.

The tide is turning, though. As much as cloud providers still proclaim that they’re the most cost-effective and efficient solution for businesses of all sizes, this claim increasingly clashes with day-to-day experience.

Cloud computing was touted as the solution for scalability, flexibility, and reduced operational burdens. Increasingly, though, companies are finding that, at scale, the costs and control limitations outweigh the benefits.

Attracted by free AWS credits, my CTO and I set up our entire company IT infrastructure in the cloud. However, we were shocked when we saw the costs balloon after just a few software tests. We decided to invest in a high-quality server and moved our whole infrastructure onto it. And we’re not looking back: this decision is already saving us hundreds of euros per month.

We’re not the only ones: Dropbox made this move back in 2016 and saved close to $75 million over the ensuing two years. The company behind Basecamp, 37signals, completed its transition in 2022 and expects to save $7 million over five years.

We’ll dive deeper into the how and why of this trend and the cost savings that are associated with it. You can expect some practical insights that will help you make or influence such a decision at your company, too.

Cloud costs have been exploding

According to a recent study by Harness, 21% of enterprise cloud infrastructure spend—which will be equivalent to $44.5 billion in 2025—is wasted on underutilized resources. According to the study author, cloud spend is one of the biggest cost drivers for many software enterprises, second only to salaries.

The premise of this study is that developers need a keener eye for costs. However, I disagree. Cost control can only get you so far—and many smart developers are already spending inordinate amounts of their time on cost control instead of building actual products.

Cloud costs have a tendency to balloon over time: Storage costs per GB of data might seem low, but when you’re dealing with terabytes of data—which even we as a three-person startup are already doing—costs add up very quickly. Add to this retrieval and egress fees, and you’re faced with a bill you cannot unsee.
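
To make concrete how quickly these costs add up, here is a minimal back-of-the-envelope sketch in Python. The data volumes, request count, and per-GB prices are illustrative assumptions, not quotes from any specific provider:

```python
# Back-of-the-envelope monthly object-storage bill. All numbers below are
# illustrative assumptions, not real prices from any specific cloud provider.
TB = 1_000  # GB per TB (decimal, the way most providers bill)

storage_gb = 5 * TB        # data kept in object storage
egress_gb = 1.5 * TB       # data downloaded per month (tests, syncs, backups)
million_requests = 20      # GET/PUT requests per month, in millions

price_storage_gb = 0.023       # assumed $ per GB-month stored
price_egress_gb = 0.09         # assumed $ per GB transferred out
price_million_requests = 0.40  # assumed $ per million requests

monthly_bill = (
    storage_gb * price_storage_gb
    + egress_gb * price_egress_gb
    + million_requests * price_million_requests
)
print(f"Estimated monthly bill: ${monthly_bill:,.2f}")
# roughly $115 storage + $135 egress + $8 requests = $258 per month for a 5 TB footprint
```

Even at these modest volumes, egress already outweighs storage in this sketch; scale it up to tens of terabytes and the bill grows with every byte you keep and every byte you move.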

Steep retrieval and egress fees serve one purpose: cloud providers want to incentivize you to keep as much data as possible on their platform, so they can make money off every operation. Downloading your own data out of the cloud gets expensive very quickly.

Variable costs based on CPU and GPU usage often spike during high-performance workloads. A report by the CNCF found that almost half of Kubernetes adopters had exceeded their budget as a result. (Kubernetes is open-source container orchestration software that is often used for cloud deployments.)

The pay-per-use model of the cloud has its advantages, but billing becomes unpredictable as a result. Costs can then explode during usage spikes. Cloud add-ons for security, monitoring, and data analytics also come at a premium, which often increases costs further.

As a result, many IT leaders have started migrating back to on-premises servers. A 2023 survey by Uptime found that 33% of respondents had repatriated at least some production applications in the past year.

Cloud providers have not restructured their billing in response to this trend. One could argue that doing so would seriously impact their profitability, especially in a largely consolidated market where competitive pressure by upstarts and outsiders is limited. As long as this is the case, the trend towards on-premises is expected to continue.

Cost efficiency and control

There is a reason that cloud providers tend to advertise so much to small firms and startups. The initial setup costs of a cloud infrastructure are low because of pay-as-you-go models and free credits.

The easy setup can be a trap, though, especially once you start scaling. (At my firm, we noticed our costs going out of control even before we scaled to a decent extent, simply because we handle large amounts of data.) Monthly costs for on-premises servers are fixed and predictable; costs for cloud services can quickly balloon beyond expectations.

As mentioned before, cloud providers also charge steep data egress fees, which can quickly add up when you’re considering a hybrid infrastructure.

Security costs can initially be higher on-premises. On the other hand, you have full control over everything you implement. Cloud providers cover infrastructure security, but you remain responsible for data security and configuration, which in the cloud often requires paid add-ons.

https://datawrapper.dwcdn.net/czWge/1/

A round-up can be found in the table above. On the whole, an on-premises infrastructure comes with higher setup costs and needs considerable know-how. This initial investment pays off quickly, though, because you tend to have very predictable monthly costs and full control over additions like security measures.

There are plenty of prominent examples of companies that have saved millions by moving back on-premises. Whether this is a good choice for you depends on several factors, though, which need to be assessed carefully.

Should you move back on-premises?

Whether you should make the shift back to server racks depends on several factors. The most important considerations in most cases are financial, operational, and strategic.

From a financial point of view, your company’s cash structure plays a big role. If you prefer lean capital expenditures and have no problem racking up high operational costs every month, then you should remain on the cloud. If you can make a higher capital expenditure up front in exchange for lower recurring costs, on-premises is the better fit.

At the end of the day, though, the total cost of ownership (TCO) is key. If your operational costs on the cloud are consistently lower than those of running servers yourself, then you should absolutely stay on the cloud.
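
One way to make the TCO comparison tangible is a simple break-even calculation. The euro amounts below are hypothetical placeholders for illustration; plug in your own figures:

```python
# Rough TCO break-even sketch: cumulative cloud opex vs. on-prem capex + opex.
# All amounts are hypothetical placeholders, not real quotes.
cloud_monthly = 1_200    # assumed average cloud bill (EUR/month)
onprem_capex = 15_000    # assumed server hardware + setup, paid up front (EUR)
onprem_monthly = 300     # assumed power, colocation, and maintenance (EUR/month)

def cumulative_costs(months: int) -> tuple[int, int]:
    """Return (cloud, on_prem) cumulative cost after a number of months."""
    cloud = cloud_monthly * months
    on_prem = onprem_capex + onprem_monthly * months
    return cloud, on_prem

# Find the month at which on-premises becomes the cheaper option.
for month in range(1, 61):
    cloud, on_prem = cumulative_costs(month)
    if on_prem < cloud:
        print(f"Break-even after {month} months: "
              f"cloud EUR {cloud:,} vs. on-prem EUR {on_prem:,}")
        break
```

With these assumed numbers, the server pays for itself after roughly a year and a half. If your real cloud bill is lower or your hardware quote higher, the break-even point moves out accordingly, and the cloud may never be overtaken at all.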

From an operational point of view, staying on the cloud can make sense if you often face spikes in usage. On-premises servers can only carry so much traffic; cloud servers scale almost seamlessly in proportion to demand. If expensive, specialized hardware is more accessible to you on the cloud, that is also a point in favor of staying. On the other hand, if you are worried about complying with specific regulations (GDPR, HIPAA, or CSRD, for example), then the shared-responsibility model of cloud services is likely not for you.

Strategically speaking, full control of your infrastructure can be an advantage in itself. It keeps you from getting locked in with a vendor and having to accept whatever they charge and whichever services they can offer. If you plan a geographic expansion or need to deploy new services rapidly, then the cloud can still be advantageous. In the long run, however, going on-premises might make sense even when you’re expanding geographically or in your scope of services, due to increased control and lower operational costs.

The decision to move back on-premises depends on several factors. Diagram generated with the help of Claude AI.

On the whole, if you value predictability, control, and compliance, you should consider running on-premises. If, on the other hand, you value flexibility, then staying on the cloud might be your better choice.

How to repatriate easily

If you are considering repatriating your services, here is a brief checklist to follow:

  • Assess Current Cloud Usage: Inventory applications and data volume.
  • Cost Analysis: Calculate current cloud costs vs. projected on-prem costs.
  • Select On-Prem Infrastructure: Servers, storage, and networking requirements.
  • Minimize Data Egress Costs: Use compression and schedule transfers during off-peak hours (see the sketch after this list).
  • Security Planning: Firewalls, encryption, and access controls for on-prem.
  • Test and Migrate: Pilot migration for non-critical workloads first.
  • Monitor and Optimize: Set up monitoring for resources and adjust.
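
Here is a minimal sketch of the compress-and-transfer-off-peak item from the checklist above. The directory, archive path, and off-peak window are hypothetical placeholders, and the actual transfer tool is left to you:

```python
# Minimal sketch: compress data before repatriating it and wait for an
# off-peak window, so fewer bytes cross the egress meter at busy times.
# Paths and the off-peak window are hypothetical placeholders.
import tarfile
import time
from datetime import datetime

SOURCE_DIR = "/data/exports"      # hypothetical directory to repatriate
ARCHIVE = "/tmp/exports.tar.gz"   # compressed archive to transfer
OFF_PEAK_HOURS = range(1, 5)      # assumed 01:00-04:59 local time

def compress(source_dir: str, archive_path: str) -> None:
    """Gzip-compress the directory into a single archive."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname="exports")

def wait_for_off_peak() -> None:
    """Block until the local hour falls inside the off-peak window."""
    while datetime.now().hour not in OFF_PEAK_HOURS:
        time.sleep(300)  # check again in five minutes

if __name__ == "__main__":
    compress(SOURCE_DIR, ARCHIVE)
    wait_for_off_peak()
    # Hand the archive to whatever transfer tool you use (rsync, rclone, etc.).
    print(f"{ARCHIVE} ready for transfer at {datetime.now():%H:%M}")
```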

Repatriation is not just for the enterprise companies that make the headlines. As the example of my firm shows, even small startups should weigh this decision. The earlier you make the migration, the less cash you’ll bleed.

The bottom line: Cloud is not dead, but the hype around it is dying

Cloud services aren’t going anywhere. They offer flexibility and scalability, which are unmatched for certain use cases. Startups and companies with unpredictable or rapidly growing workloads still benefit greatly from cloud solutions.

That being said, even early-stage companies can benefit from on-premises infrastructure, for example if the large data loads they’re handling would make the cloud bill balloon out of control. This was the case at my firm.

The cloud has often been marketed as a one-size-fits-all solution for everything from data storage to AI workloads. We can see that this is not the case; the reality is more granular. As companies scale, the costs, compliance challenges, and performance limitations of cloud computing become impossible to ignore.

The hype around cloud services is dying because experience is showing us that there are real limits and plenty of hidden costs. In addition, cloud providers often cannot adequately provide security solutions, compliance options, and user control unless you pay a hefty premium for all of it.

Most companies will likely adopt a hybrid approach in the long run: On-premises offers control and predictability; cloud servers can jump into the fray when demand from users spikes.

There’s no real one-size-fits-all solution. However, there are specific criteria that should help guide your decision. Like every hype, there are ebbs and flows. The fact that cloud services are no longer hyped does not mean that you need to go all-in on server racks now. It does, however, invite a deeper reflection on the advantages this trend offers for your company.
