
AI, Data Centers, and the Next Big Correction: Will Growth Outpace Market Reality?

AI is being readily embraced by organizations, governments, and individual enthusiasts for data aggregation, pattern recognition, data visualization, and co-creation of content. Given the headlines lately, AI is set to take over the world. And as an emerging, revolutionary technology with large potential impact and newfound user-friendliness, large tech companies and small startups alike have raced to capitalize on the potential growth. Hands down, this transformative technology has set off a wave of adoption, investment, and innovation around the world and across industries.

Naturally, when a technology or application accelerates this quickly, the more risk-averse grow cautious and start asking whether a bubble is forming. Even the more bullish investors have ridden through too much tumult in the past few decades for their bank accounts to withstand another cataclysmic loss. More investment is pouring in (including at the federal level), stock valuations are all over the charts and not necessarily tied to a ticker’s earnings, and the recent market fluctuations leave the entire ecosystem a little hesitant to buy too far into the hype.

The Nature of Bubbles and Some Potential Signals to Watch For

Economic bubbles occur when asset prices significantly exceed their intrinsic value, often fueled by speculative demand and irrational investment, leading to unsustainable market conditions. The concern reaches well beyond digital infrastructure: bubbles can have far-reaching impacts on the entire market, as distorted financial metrics encourage excessive lending and create systemic risk. The collapse of a bubble can trigger a chain reaction of financial distress, causing widespread economic instability and potentially leading to recessions, as seen in historical examples like the dot-com and housing bubbles.

Reasonable bubble indicators that have the market concerned include:

  • Overvaluation and Lack of Profit Generation: Tech giants are heavily invested in AI despite limited returns from the associated products. Likewise, many AI startups have achieved valuations far exceeding their earnings. This discrepancy between valuation and profitability is a classic sign of a bubble.
  • Hype vs. Reality: The AI hype cycle playing out in the news has driven significant investment, while society remains torn over the claims being made about AI’s potential and ethics. Media overstatements are eventually tempered by corrected expectations, but when hundreds of billions of dollars are at stake, that is no small adjustment.
  • Diminishing Returns: Some experts suggest that large language models (LLMs) may not be as scalable as previously thought, leading to diminishing returns on investment in these technologies.

The Dot-Com Bust Saw Precisely This Happen

The dot-com bubble emerged in the late 1990s, fueled by the rapid growth of the internet and the establishment of numerous tech startups. This period saw a surge in demand for internet-based stocks, leading to high valuations that often exceeded the companies’ intrinsic value. The NASDAQ Composite index rose dramatically, increasing by 582% from January 1995 to March 2000, only to fall by 75% from March 2000 to October 2002.
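
To make the scale of that round trip concrete, here is a quick back-of-the-envelope sketch in Python using the percentages quoted above; normalizing the January 1995 level to 1.0 is purely for illustration.

```python
# Back-of-the-envelope view of the NASDAQ round trip described above.
start = 1.0                  # index level in January 1995, normalized to 1.0
peak = start * (1 + 5.82)    # +582% by March 2000 -> 6.82x the 1995 level
trough = peak * (1 - 0.75)   # -75% by October 2002 -> about 1.71x the 1995 level

# Share of the run-up's point gain that the crash gave back
erased = (peak - trough) / (peak - start)
print(f"Peak: {peak:.2f}x  Trough: {trough:.2f}x  Gain erased: {erased:.0%}")
```

Even though the index still ended 2002 above its 1995 level, the crash gave back nearly ninety percent of the points gained during the run-up.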

The frenzy of buying internet-based stocks was overwhelming, with many companies lacking viable business models and focusing instead on metrics like website traffic. Venture capitalists and other investors poured money into these startups, often ignoring traditional financial metrics in favor of speculative growth potential. The media played a significant role in fueling this hype, encouraging investors to overlook caution and invest in risky tech stocks.

The bubble burst when capital began to dry up, leading to a market crash. By 2002, investor losses were estimated at around $5 trillion. Many tech companies that conducted IPOs during this era declared bankruptcy or were acquired by other companies. The collapse of the dot-com bubble resulted in massive layoffs in the technology sector and served as a cautionary tale about the dangers of speculative investing and overvaluation.

The aftermath of the dot-com bubble led to a more cautious approach to investing, with a renewed focus on fundamental analysis rather than speculative hype. Despite the devastating impact, it laid the groundwork for the modern tech industry, with companies like Amazon and Google surviving and thriving to become leaders in their fields.

Growth and Profitability

While AI as a technology has been around for decades, the advent of generative AI built on neural networks culminated in the release of ChatGPT: a user-friendly chatbot that could interpret a prompt and generate a response in milliseconds that was more than just coherent, but informative, insightful, and intuitive. The potential of AI was on display for all the world to see, and OpenAI’s user base grew to 1 million in five days and 100 million in two months, the fastest adoption of a platform the world had ever seen. It has since reached 400 million weekly active users.

The societal adoption makes sense, but what about the business applications, where there is real money to be made? Beyond the proverbial college kids writing term papers, AI’s value to an organization lies in its ability to aggregate vast amounts of disorganized data, analyze it, and make complex decisions from it. Key industries like healthcare, computer science, cybersecurity, logistics, manufacturing, and content creation are leading the shift and embracing the benefits of AI technology, with no end in sight to the innovation available.

The efficiency gains and reduced operational costs to an organization are limited only by a user’s imagination for what queries to put to the test. But speaking openly, as someone who grew up in the power distribution world peddling equipment whose core benefits were greater efficiency and lower OpEx for utilities and industrial customers, I can tell you this is not an easy value proposition to market on, even when it is as tangibly evident as it is with AI. Enterprise and B2B adoption is rolling out more slowly than the headlines would have us believe.

Simply stated, this technology is only profitable if paying customers and revenue growth follow. Serious startup capital is being spent on applications of this technology that the market may not be ready to support. That has the markings of a crash, but whether the crash amounts to a true bubble bursting will depend on the speed, reach, and broader impact of the decline.

Economic Considerations

Herd mentality plays a significant role in the adoption of AI technologies. This phenomenon involves individuals following the crowd and making decisions based on the actions of others, rather than their own beliefs or analysis. In the context of AI, herd behavior is amplified by the widespread adoption of AI tools and the fear of missing out (FOMO) on potential benefits.

AI algorithms, trained on extensive datasets, can perpetuate this mentality by replicating existing trends and strategies, making them more appealing to a broader audience. As a result, the rapid adoption of AI technologies can lead to inflated expectations and valuations, similar to what was observed during the dot-com bubble, where speculative demand drove prices far beyond their intrinsic value.

The prices of hardware necessary for AI development and deployment are being driven up by several factors, including scarcity and increased demand. The rapid growth of AI applications has led to a surge in demand for GPUs and TPUs necessary for training models. This increased demand, coupled with supply chain constraints and geopolitical tensions affecting semiconductor production, has resulted in higher prices for these critical components.

Additionally, the concentration of manufacturing in a few regions exacerbates these supply chain issues, further contributing to price increases. As AI continues to expand across industries, the strain on hardware resources is likely to persist, maintaining upward pressure on prices.

Right now, investors and data center operators alike are attempting to chart the viability of the many parties and the likely winners of the AI arms race. Charting those sorts of outcomes brings economic tools such as game theory to mind, since we have many players all vying for the same opportunities. The benefit of approaching this as a game is that we can inform our decisions by modeling interdependencies, favoring strategies that achieve the most desirable outcomes.

This mathematical framework is frequently used to understand interactions within an ecosystem, though real markets are far messier than the well-known Nash equilibrium, in which each participant strives to maximize their own outcome and equilibrium is reached only when no player can improve their position by unilaterally changing strategy, given the behaviors and actions of the other players. The Prisoner’s Dilemma is the well-known classic, but as applied here, other studied “games” are more relevant, especially those that result in a “winner takes all” outcome.

One of the challenges, however, is that new neocloud players keep joining mid-game, making this extremely difficult to chart mathematically. Nevertheless, it can be a useful framework for isolated scenario modeling of strategies, predictive analytics, and decision mapping to anticipate outcomes.

For example, many AI startup companies may be bidding for the same hyperscale AI projects. As with a Prisoner’s Dilemma, there may be a first-mover advantage, but this is really more like a game of Chicken. The first to pull out of the competition loses the crown but keeps their life; the one who stays in the match (if the other pulls out) earns both; and if both dig in through psychological tactics, either neither succeeds or the result is mutually assured destruction when neither gives in.
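
For readers who like to see that structure explicitly, below is a minimal sketch of a Chicken-style payoff matrix in Python. The payoff numbers and the best_responses helper are illustrative assumptions, not market data; they are chosen only to show why the stable outcomes are the asymmetric, winner-takes-all ones.

```python
# Illustrative payoff matrix for the game-of-Chicken framing above.
# Payoff values are hypothetical: both bidders staying in is mutually ruinous,
# while the lone bidder who stays in captures the contract.
payoffs = {
    ("Stay", "Stay"):   (-10, -10),  # neither gives in: mutually assured destruction
    ("Stay", "Yield"):  (5, 0),      # row bidder wins the contract
    ("Yield", "Stay"):  (0, 5),      # column bidder wins the contract
    ("Yield", "Yield"): (1, 1),      # both survive, neither dominates
}
actions = ("Stay", "Yield")

def best_responses(player, rival_action):
    """Actions maximizing this player's payoff, holding the rival's action fixed."""
    if player == 0:
        scores = {a: payoffs[(a, rival_action)][0] for a in actions}
    else:
        scores = {a: payoffs[(rival_action, a)][1] for a in actions}
    top = max(scores.values())
    return {a for a, s in scores.items() if s == top}

# Pure-strategy Nash equilibria: each action is a best response to the other.
equilibria = [
    (a, b) for a in actions for b in actions
    if a in best_responses(0, b) and b in best_responses(1, a)
]
print(equilibria)  # [('Stay', 'Yield'), ('Yield', 'Stay')]
```

The two stable outcomes are the asymmetric ones in which exactly one bidder stays in, which is the winner-takes-all dynamic described above; the mutually ruinous Stay/Stay cell is what each player is gambling the other will avoid.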

The resulting sentiment is that in this arms race, one year from now only a handful of companies will have survived.

Therefore, investment is slowing as investors dig deeper into the cost of the technology, the feasibility of finding customers, and the timeline to revenue. “Show me the money” is being heard across digital infrastructure, or rather: show me the path to monetization, the business case for your unique application of the technology and your prospective customers. With limited winners and an excess of losers, it is hard to see investors placing financial bets across the board; their bets will be far more strategically selected than in the dot-com days.

Ripples in the Ecosystem

Countering the bubble fear-mongers, it must be argued that the long-term outlook for AI and the underlying technology fostering this innovation is one of lasting impact. From the 40,000-foot view, I can’t imagine a fundamentally revolutionary technology causing a complete market collapse when businesses and individuals have already come to rely on various AI applications as essential tools for business.

Rather than a crash, a natural economic adjustment may be more likely, though it must be said that market fluctuations have swung more widely of late and may settle in as a norm that day traders have to account for in their strategies, while longer-term investors are willing to ride these waves out; that is, if they ever lock in on a winner they choose to financially back. Readjustments are just part of the game.

Looking at digital infrastructure as an asset category, we need to consider the full ecosystem and the market corrections we’ve already begun to see play out:

  • Competitive Market Growth: A ready example is the recent launch of DeepSeek, a Chinese competitor to ChatGPT that reportedly boasted lower costs and energy usage. U.S. tech indexes lost $1 trillion in value that day, though much of that was quickly recovered. Individual stocks will always contribute to some fluctuation, but the episode stoked concern about a looming burst, because a single announcement should never produce a swing of that size. In general, we need to stop letting short-term sentiment and fear move us to this extent and trust what we know to be true about technology adoption. The wake-up call was heard across the market nonetheless, and we should expect much more reticence toward large investments that present a high risk profile.
  • Lease Terms: The data center market has been a bit of a seller’s market for a few years now; those with land and power needed only say the word and they could lock in 15-year lease terms. That has been changing of late: as we’ve seen, some hyperscalers are pulling lease terms back to under 10 years, some to around 7-8 years. AI leases are even less secure, with many neocloud startups aiming for 5-7 year terms. That doesn’t offer an investor or a data center provider the same confidence as a longer-term commitment, and let’s not forget, these cash-constrained startups cannot afford to give that perception. As we learned from the real estate bubble, an inability to pay the rent could quite literally become the trigger for another burst.
  • Equipment Obsolescence: Another factor to consider is the high cost of investment in hardware. With growth, price per unit will ultimately come down. Then, as manufacturers release new models, previous renditions become obsolete, and suddenly entire generations of hardware may lose value. As long as a neocloud provider has established a decent customer base to generate revenue, or a hyperscaler has deep enough pockets to fund an equipment refresh, this is of little concern. But it is a bitter pill to swallow when it happens, and it is not always a blow that can be recovered from, since recovery hinges on the business model having already demonstrated success. Some have questioned whether there will be a second-hand market for GPUs. Given the up-front investment that goes into these purchases, it is hard to imagine there won’t be, but a viable use case has yet to emerge; it is simply too new to discern. Resale would likely fetch pennies on the dollar, but that is better than nothing. Perhaps repurposing equipment for smaller outfits that lease to single-use enterprises will provide a niche market where it finds new utility, even if not as lucrative as the initial use.
  • Equipment Failure: As is now being discussed openly, GPUs have a high failure rate due to component failures, memory issues, and driver problems. This unreliability can lead to costly downtime and data loss, undermining the efficiency and reliability of AI operations. As AI applications become more complex and widespread, the need for robust and reliable GPU infrastructure grows. The consequences of these failures ripple through the market, affecting not only deployment timelines and operational costs but also making companies more hesitant to adopt and scale their use of the technology. Moreover, the scarcity of GPUs, exacerbated by supply chain disruptions and export restrictions, further complicates the situation, pushing companies to explore alternative solutions like GPU-as-a-Service (GPUaaS) to mitigate these risks.
  • Stock Valuations: Nvidia, the leading supplier of GPUs essential for training AI models, has become one of the most valuable publicly listed companies, with a valuation exceeding $3 trillion. As the gold standard for GPUs, Nvidia’s stock performance significantly influences the broader market, particularly tech-heavy indices like the S&P 500. Given its substantial market capitalization, Nvidia’s stock makes up a considerable portion of major indexes, meaning that any large market adjustment could have far-reaching effects on the entire tech sector. This concentration of market influence in a few key stocks, including Nvidia, leaves investors vulnerable unless they are well diversified. The valuation of AI-related stocks, such as OpenAI potentially reaching a $300 billion valuation despite never having been profitable, raises questions about sustainability. The recent stock market surge has been largely driven by the “Magnificent Seven” companies (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla), which are heavily invested in AI and have collectively seen significant growth. These companies accounted for over half of the S&P 500’s total return in 2024, with annualized appreciation rates exceeding 20% over the past five years, and Nvidia leading with over 90% growth (the compounding sketch after this list shows what rates like these imply). The sustainability of such high valuations and growth rates is uncertain, and any correction could have profound implications for the entire market.
  • Colocation Markets: The Magnificent Seven include the hyperscalers, which naturally lead the majority of AI investment, but we must also consider the impacts on other operators. Over the past two years, many hyperscalers paused to reevaluate their facility designs, then turned to colocation providers for extended support. We have now seen that arrangement begin to crumble, with Microsoft cancelling leases over concerns of oversupply and reduced capacity needs for AI. Those contracted deployments will have caused a financial loss for the colocation providers who planned to construct them. This may have been our biggest market test yet, as it eerily echoes the dot-com triggers that began that burst. The market did react, and it’s unclear whether we’re out of the woods just yet. Aside from hyperscale AI deployments inside colocation data centers, neocloud companies present another viable AI tenant opportunity, but even they are largely bidding for the same hyperscale contracts. When the hyperscalers get nervous, the entire industry worries about its long-term viability.
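
As a footnote on those growth figures, the small compounding sketch below shows what such rates imply over a five-year hold. Treating the quoted percentages as annualized price-appreciation rates is an assumption on my part (particularly for the 90% figure), and the total_multiple helper is purely illustrative.

```python
# Compounding sketch (assumption: the quoted percentages are annualized rates
# of price appreciation, compounded over a five-year holding period).
def total_multiple(annualized_rate: float, years: int) -> float:
    """Total multiple on the starting price after compounding at a fixed annual rate."""
    return (1 + annualized_rate) ** years

for rate in (0.20, 0.90):
    print(f"{rate:.0%} annualized over 5 years -> {total_multiple(rate, 5):.1f}x the starting price")
```

A 2.5x move in five years is aggressive; a roughly 25x move is the kind of compounding that makes the sustainability question, and the concentration risk in the major indexes, so pointed.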