AI, Data Centers, and the Next Big Correction: Will Growth Outpace Market Reality?

AI is being readily embraced by organizations, governments, and individual enthusiasts for data aggregation, pattern recognition, data visualization, and co-creation of content. Given the headlines lately, AI is set to take over the world. And as an emerging, revolutionary technology with large potential impact and newfound user-friendliness, large tech companies and small startups alike have raced to capitalize on potential growth. Hands down, this transformative technology has set off a wave of adoption, investment, and innovation around the world and across industries.

Naturally, when a technology or application accelerates quickly, the more risk-averse grow cautious; when it accelerates this quickly, the worry is that a bubble is forming. Even the more bullish investors have ridden through too much tumult in the past few decades for their bank accounts to withstand another cataclysmic loss. More investment is pouring in (including at the federal level), stock valuations are all over the charts and not necessarily tied to a ticker's earnings, and the recent market fluctuations leave the entire ecosystem a little hesitant to buy too far into the hype.

The Nature of Bubbles and Some Potential Signals to Watch For

Economic bubbles occur when asset prices significantly exceed their intrinsic value, often fueled by speculative demand and irrational investment, leading to unsustainable market conditions. The concern reaches beyond digital infrastructure: bubbles can have far-reaching impacts on the entire market, as distorted financial metrics encourage excessive lending and create systemic risk. The collapse of a bubble can trigger a chain reaction of financial distress, causing widespread economic instability and potentially leading to recessions, as seen in historical examples like the dot-com and housing bubbles.

Reasonable bubble indicators that have the market concerned include:

  • Overvaluation and Lack of Profit Generation: Tech giants are heavily invested in AI despite limited returns from the associated products. Likewise, many AI startups have achieved valuations far exceeding their earnings. This discrepancy between valuation and profitability is a classic sign of a bubble.
  • Hype vs. Reality: The AI hype cycle dominating the news has driven significant investment, while society remains torn over claims about AI's potential and its ethics. Overstatements in the media must often be tempered by corrected expectations later, and when hundreds of billions of dollars are at stake, that is no small adjustment.
  • Diminishing Returns: Some experts suggest that large language models (LLMs) may not be as scalable as previously thought, leading to diminishing returns on investment in these technologies.

The Dot-Com Burst Saw Precisely This Happen

The dot-com bubble emerged in the late 1990s, fueled by the rapid growth of the internet and the establishment of numerous tech startups. This period saw a surge in demand for internet-based stocks, leading to high valuations that often exceeded the companies’ intrinsic value. The NASDAQ Composite index rose dramatically, increasing by 582% from January 1995 to March 2000, only to fall by 75% from March 2000 to October 2002.
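Because those two moves compound, a quick back-of-the-envelope check shows what they net out to. This is only a sketch, with the index normalized to 100 rather than actual NASDAQ closes:

```python
# Back-of-the-envelope compounding of the moves cited above, with the
# index normalized to 100 in January 1995 (illustrative, not actual closes).
start = 100.0
peak = start * (1 + 5.82)    # +582% by March 2000  -> 682.0
trough = peak * (1 - 0.75)   # -75% by October 2002 -> 170.5
print(peak, trough)          # 682.0 170.5
```

Even after the 75% collapse, the normalized index sat roughly 70% above its 1995 level; the losses fell hardest on the capital that arrived near the peak.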

The frenzy of buying internet-based stocks was overwhelming, with many companies lacking viable business models and focusing instead on metrics like website traffic. Venture capitalists and other investors poured money into these startups, often ignoring traditional financial metrics in favor of speculative growth potential. The media played a significant role in fueling this hype, encouraging investors to overlook caution and invest in risky tech stocks.

The bubble burst when capital began to dry up, leading to a market crash. By 2002, investor losses were estimated at around $5 trillion. Many tech companies that conducted IPOs during this era declared bankruptcy or were acquired by other companies. The collapse of the dot-com bubble resulted in massive layoffs in the technology sector and served as a cautionary tale about the dangers of speculative investing and overvaluation.

The aftermath of the dot-com bubble led to a more cautious approach to investing, with a renewed focus on fundamental analysis rather than speculative hype. Despite the devastating impact, the era laid the groundwork for the modern tech industry, with companies like Amazon and Google surviving and thriving to become leaders in their fields.

Growth and Profitability

While AI as a technology has been around for decades, the advent of generative AI built on neural networks culminated in the release of ChatGPT: a user-friendly chatbot that could interpret a prompt and generate a response in milliseconds that was more than just coherent, but informative, insightful, and intuitive. The potential of AI was on display for all the world to see, and OpenAI's user base grew to 1 million in five days and 100 million in two months, the fastest adoption of a platform the world has ever seen. The company recently reached 400 million weekly active users.

The societal adoption makes sense, but what about the business application, where there is real money to be made? Other than for the proverbial college kids writing term papers, AI's value to an organization lies in its ability to analyze vast amounts of disorganized data, aggregate it, and make complex decisions from it. Key industries like healthcare, computer science, cybersecurity, logistics, manufacturing, and content creation are leading the shift and embracing the benefits of AI, and there is no end in sight to the innovation available.

The efficiency gains and reduced operational costs to an organization are limited only by a user's imagination for what queries to put to the test. But speaking openly, as someone who grew up in the power distribution world, peddling equipment whose core product benefits were making utilities and industries more efficient and reducing OpEx, I can tell you this is not an easy value proposition to market on, even when the benefit is as tangibly evident as it is with AI. Enterprise and B2B adoption is rolling out more slowly than the headlines would have us believe.

Simply stated, this technology is only profitable if there are paying customers and revenue growth that follow. Serious startup capital is being spent on applications of this technology that the market may not be ready to support. This does have the markings of a crash, but whether that crash will be a true bubble will depend on the speed, reach, and broader impact of that decline.

Economic Considerations

Herd mentality plays a significant role in the adoption of AI technologies. This phenomenon involves individuals following the crowd and making decisions based on the actions of others, rather than their own beliefs or analysis. In the context of AI, herd behavior is amplified by the widespread adoption of AI tools and the fear of missing out (FOMO) on potential benefits.

AI algorithms, trained on extensive datasets, can perpetuate this mentality by replicating existing trends and strategies, making them more appealing to a broader audience. As a result, the rapid adoption of AI technologies can lead to inflated expectations and valuations, similar to what was observed during the dot-com bubble, where speculative demand drove prices far beyond their intrinsic value.

The prices of hardware necessary for AI development and deployment are being driven up by several factors, including scarcity and increased demand. The rapid growth of AI applications has led to a surge in demand for the graphics processing units (GPUs) and tensor processing units (TPUs) needed to train models. This increased demand, coupled with supply chain constraints and geopolitical tensions affecting semiconductor production, has resulted in higher prices for these critical components.

Additionally, the concentration of manufacturing in a few regions exacerbates these supply chain issues, further contributing to price increases. As AI continues to expand across industries, the strain on hardware resources is likely to persist, maintaining upward pressure on prices.

Right now, investors and data center operators alike are attempting to chart the viability of the many players and the likely winners of the AI arms race. Charting those sorts of outcomes brings economic tools such as game theory to mind: we have many players all vying for the same opportunities. The benefit of approaching this as a game is that we can complement our decisions by modeling interdependencies, identifying strategies that achieve the most desirable outcomes.

This mathematical framework is frequently used to understand interactions within an ecosystem, though real markets are much more complicated than the well-known Nash equilibrium, in which each participant strives to maximize their own outcome and equilibrium is reached when no player can improve their position unilaterally, each player's best choice being interdependent on the behaviors and actions of the others. The Prisoner's Dilemma is the well-known classic, but as applied here, other studied "games" are more applicable, especially those that result in a "winner takes all" outcome.

One of the challenges, however, is that new neocloud players keep joining amidst an ongoing game, making the situation extremely difficult to chart mathematically. Nevertheless, it can be a useful framework for isolated scenario modeling of strategies, predictive analytics, and decision mapping to anticipate outcomes.

For example, many AI startups may be bidding for the same hyperscale AI projects. As with a Prisoner's Dilemma, there may be a first-mover advantage, but this is actually more like a game of Chicken. The first to pull out of the competition loses the crown but keeps their life; the one who stays in (if the other pulls out) earns both; or the two defeat each other through psychological tactics, whereby either neither succeeds or, when neither gives in, the result is mutually assured destruction.
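To make the Chicken framing concrete, here is a minimal sketch in Python. The payoff numbers are purely illustrative assumptions, not market data; the code simply encodes the two-player game described above and checks which outcomes are stable (pure-strategy Nash equilibria):

```python
# A minimal sketch of the "game of Chicken" framing above, with
# illustrative payoffs (assumptions for demonstration, not market data).
# Each neocloud player chooses to STAY in the bidding war or SWERVE (exit).

STAY, SWERVE = "stay", "swerve"

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    (SWERVE, SWERVE): (0, 0),        # both exit: neither wins the contract
    (SWERVE, STAY):   (-1, 10),      # exiting player survives; rival takes all
    (STAY, SWERVE):   (10, -1),
    (STAY, STAY):     (-100, -100),  # neither yields: mutually assured destruction
}

def pure_nash_equilibria(payoffs):
    """Return the cells where neither player gains by unilaterally deviating."""
    choices = (STAY, SWERVE)
    equilibria = []
    for r in choices:
        for c in choices:
            row_pay, col_pay = payoffs[(r, c)]
            best_row_dev = max(payoffs[(alt, c)][0] for alt in choices)
            best_col_dev = max(payoffs[(r, alt)][1] for alt in choices)
            if row_pay >= best_row_dev and col_pay >= best_col_dev:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [('stay', 'swerve'), ('swerve', 'stay')]
```

The only stable outcomes are the asymmetric ones, in which one player yields and the other takes the prize: precisely the winner-takes-all dynamic described above, with mutual escalation ruinous for both.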

The resulting sentiment is that in this arms race, one year from now only a handful of companies will have survived.

Therefore, investment is slowing as investors dig deeper into the cost of the technology, the feasibility of finding customers, and the timeline to revenue. "Show me the money" is being heard across digital infrastructure; or rather, show me the path to monetization, the business case for your unique application of the technology and your prospective customer. With limited winners and an excess of losers, it is hard to see investors placing financial bets across the board; bets will be much more strategically selected than in the dot-com days.

Ripples in the Ecosystem

Countering the bubble fear-mongers, it must be argued that AI and the underlying technology fostering this innovation will have a lasting impact. From the 40,000-foot view, I can't imagine a fundamentally revolutionary technology causing a complete market burst when businesses and individuals have already come to rely on various AI applications as essential business tools.

Rather than a crash, a natural economic adjustment may be more likely, though market fluctuations have shown greater swings of late, and these may become established as a norm that day traders have to account for in their strategies while longer-term investors ride the waves out. That is, if they ever lock in on a winner they choose to financially back. Readjustments are just part of the game.

As an asset category, we need to look at the full ecosystem and consider the market corrections we've begun to see play out:

  • Competitive Market Growth: An easy example is the recent launch of DeepSeek, a Chinese competitor to ChatGPT that supposedly boasted lower costs and energy usage. The U.S. tech index lost $1 trillion in value that day, though much of it was quickly recovered. Individual stocks may contribute to some fluctuations, but there was real concern about a looming burst, because a single announcement should never have produced the swing that this one did. In general, we need to stop letting short-term sentiment and fear move us to this extent and trust what we know to be true about technology adoption. The wake-up call was heard across the market nonetheless, and we should expect much more reticence toward large investments that present a high risk profile.
  • Lease Terms: The data center market has been a bit of a seller's market for a few years now; those with land and power need simply say the word and they could lock in 15-year lease terms. That's changing of late: as we've seen, some hyperscalers are pulling lease terms back to under 10 years, some around 7-8 years. AI leases are even less secure, with many neocloud startups aiming for 5-7 year terms. This doesn't offer an investor or a data center provider the same confidence as a longer-term commitment, and let's not forget, these cash-constrained startups cannot afford to give that perception. As we learned from the real estate bubble, inability to pay the rent could quite literally become the trigger for another burst.
  • Equipment Obsolescence: Another factor to consider is the high cost of investment in hardware. Ultimately, with growth, price per unit will come down. Then, as manufacturers release new models, previous renditions become obsolete and entire generations of hardware may suddenly lose value. As long as a neocloud provider has established a decent customer base to generate revenue, or a hyperscaler has deep enough pockets to fund an equipment refresh, this is no concern. But it's a bitter pill to swallow when it happens, and it is not always a blow that can be recovered from, since recovery hinges on the model already having demonstrated success. Some question has arisen over whether there will be a second-hand market for GPUs. Given the up-front investment that goes into the purchase, it would be a struggle to imagine there won't be, but a viable use case has yet to emerge; it's simply too new to discern. Resale would likely fetch pennies on the dollar, but better than nothing. Perhaps repurposing hardware for smaller outfits that lease to single-use enterprises will provide a niche market where equipment finds new utility, even if not as lucrative as its initial use.
  • Equipment Failure: Now being discussed openly, GPUs have a high failure rate due to component failures, memory issues, and driver problems. This unreliability can lead to costly downtime and data loss, impacting the efficiency and reliability of AI operations. As AI applications become more complex and widespread, the need for robust and reliable GPU infrastructure grows. The consequences of these failures ripple through the market, affecting not only deployment timelines and operational costs but also making companies more hesitant to adopt and scale their use of the technology. Moreover, the scarcity of GPUs, exacerbated by supply chain disruptions and export restrictions, further complicates the situation, pushing companies to explore alternative solutions like GPU-as-a-Service (GPUaaS) to mitigate these risks.
  • Stock Valuations: Nvidia, the leading supplier of the GPUs essential for training AI models, has become one of the most valuable publicly listed companies, with a valuation exceeding $3 trillion. As the gold standard for GPUs, Nvidia's stock performance significantly influences the broader market, particularly tech-heavy indices like the S&P 500. Given its substantial market capitalization, Nvidia makes up a considerable portion of major indexes, meaning that any large market adjustment could have far-reaching effects on the entire tech sector. This concentration of market influence in a few key stocks, including Nvidia, leaves investors vulnerable unless they are well diversified. The valuation of AI-related companies, such as OpenAI potentially reaching a $300 billion valuation despite never having been profitable, raises questions about sustainability. The recent stock market surge has been largely driven by the "Magnificent Seven" (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla), which are heavily invested in AI and have collectively seen significant growth. These companies accounted for over half of the S&P 500's total return in 2024, with annualized appreciation rates exceeding 20% over the past five years, and Nvidia leading at over 90% (see the sketch after this list for what those rates compound to). The sustainability of such high valuations and growth rates is uncertain, and any correction could have profound implications for the entire market.
  • Colocation Markets: The Magnificent Seven include the hyperscalers, who naturally lead the majority of AI investment, but we must also consider impacts on other operators. Over the past two years, many hyperscalers paused to reevaluate their facility designs, then turned to colocation providers for extended support. We have now seen this arrangement begin to crumble, with Microsoft cancelling leases over concerns of oversupply and reduced capacity needs for AI. Those contracted deployments will have caused a financial loss for the colocation providers who planned to construct them. This may have been our biggest market test yet, as it eerily echoes the dot-com triggers that began that burst. The market did react, and it's unclear whether we're out of the woods just yet. Aside from hyperscale AI deployments inside colocation data centers, neocloud companies present another viable AI tenant opportunity, but even they are all bidding for the same hyperscale contracts. When the hyperscalers get nervous, the entire industry worries about long-term viability.
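As a quick sanity check on the growth rates cited in the stock-valuations item above, the sketch below compounds a constant annual rate over five years. The percentages are the ones already quoted; nothing else is assumed:

```python
# Cumulative multiple implied by a constant compound annual growth
# rate (CAGR): (1 + rate) ** years. Rates below are those cited above.

def total_multiple(cagr: float, years: int = 5) -> float:
    return (1 + cagr) ** years

print(f"20% annualized over 5 years: {total_multiple(0.20):.2f}x")  # ~2.49x
print(f"90% annualized over 5 years: {total_multiple(0.90):.2f}x")  # ~24.76x
```

Twenty percent annualized already means roughly 2.5x appreciation in five years; a 90% pace implies roughly 25x, which is the scale of run-up that makes any correction so consequential for the indexes these stocks dominate.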