
The AI infrastructure boom is often framed around massive hyperscale campuses racing to secure gigawatts of power. But an equally important shift is happening in parallel: AI infrastructure is also becoming more distributed, modular, and sovereign, extending compute far beyond traditional data center hubs.
A wave of recent announcements across developers, infrastructure investors, and regional operators shows the market pursuing a dual strategy. On one end, developers are accelerating delivery of hyperscale campuses measured in hundreds of megawatts, and increasingly gigawatts, often located where power availability and energy economics offer structural advantage, and in some cases pairing compute directly with dedicated generation. On the other, providers are building increasingly capable regional and edge facilities designed to bring AI compute closer to users, industrial operations, and national jurisdictions.
Taken together, these moves point toward a future in which AI infrastructure is no longer purely centralized, but built around interconnected hub-and-spoke architectures combining energy-advantaged hyperscale cores with rapidly deployable edge capacity.
Recent announcements across the sector illustrate how quickly this model is taking shape.
Sovereign AI Moves Beyond the Core
On Feb. 5, 2026, San Francisco-based Armada and European AI infrastructure builder Nscale signed a letter of intent to jointly deploy both large-scale and edge AI infrastructure worldwide.
The collaboration targets enterprise and public sector customers seeking sovereign, secure, geographically distributed AI environments. Nscale is building large AI supercomputer clusters globally, offering vertically integrated capabilities spanning power, data centers, compute, and software. Armada specializes in modular deployments through its Galleon data centers and Armada Edge Platform, delivering compute and storage into remote or infrastructure-poor environments.
The combined offering addresses a growing challenge: many governments and enterprises want AI capability deployed within their own jurisdictions, even where traditional hyperscale infrastructure does not yet exist.
“There is increasing demand from enterprises and governments for operational AI, and meeting that need requires infrastructure that is scalable, distributed, and ultimately sovereign,” said Josh Payne, Founder and CEO of Nscale. “By working with Armada, we will be able to offer customers a flexible foundation for deploying advanced AI workloads wherever they need to operate, without compromising performance, security, or control.”
The model envisions hyperscale campuses providing economic efficiency while Armada’s modular deployments, such as its megawatt-scale Leviathan Galleon units, extend sovereign compute into edge geographies.
“As AI adoption accelerates, organizations need infrastructure that can reach beyond centralized clusters, on Earth and even beyond,” said Dan Wright, Co-Founder and CEO of Armada. “One of Armada’s key differentiators is that we enable sovereign AI, with speed and scale. Partnering with Nscale allows us to extend our modular AI infrastructure into new global markets.”
Speed matters. Modular deployments can arrive far faster than greenfield builds, enabling sovereign AI capacity where full-scale campuses would take years.
Regional Markets Become Strategic Edge Nodes
Distributed AI infrastructure is also reshaping secondary and regional U.S. markets.
On Jan. 30, 2026, US Signal announced the acquisition of a data center in Aurora, Illinois, reinforcing its strategy to build a distributed, edge-focused national platform.
The facility strengthens US Signal’s Chicago-area footprint while enabling immediate investment to expand capacity to roughly 4 MW critical IT load (or 6 MW commercial capacity). The company will also deploy its OpenCloud private cloud and virtualization platform locally.
“This facility aligns perfectly with our long-term growth strategy,” said Daniel Watts, CEO of US Signal. “We’re investing immediately to expand capacity and will deliver OpenCloud at scale, giving customers in this market a local, enterprise-grade private cloud option backed by our national network and proven operational model.”
The Aurora facility supports:
• Private and hybrid cloud deployments
• AI inference and latency-sensitive workloads
• Enterprise and carrier colocation
The site connects directly into US Signal-owned fiber routes and benefits from continued investment in ultra-dense fiber expansion and in-line amplification sites, allowing low-latency connectivity from edge to core.
Infrastructure investor Igneo Infrastructure Partners has committed more than $200 million to accelerate US Signal’s expansion across data centers, fiber, and cloud infrastructure.
“This acquisition and the immediate investment that follows reinforces our confidence in the platform and its ability to deliver long-term value,” said Michael Ryder, Partner and Co-Head of North America at Igneo Infrastructure Partners.
The takeaway: regional markets are evolving into operational AI nodes rather than remaining secondary infrastructure markets.
Hyperscale Campuses Still Form the Backbone
While distributed deployments expand outward, centralized campuses continue to form the backbone of AI compute, particularly for model training and large-scale inference workloads that demand massive contiguous power delivery.
On Jan. 29, 2026, Manulife Investment Management-backed developer Serverfarm announced it secured a $3.0 billion credit facility, closed in December 2025, to accelerate hyperscale campus development across strategic North American markets.
Supported by a syndicate of 23 institutional lenders, the facility provides development and construction capital enabling Serverfarm to rapidly deliver AI-ready infrastructure for cloud providers and AI innovators requiring deployments at unprecedented scale and density.
The financing supports multiple hyperscale projects already advancing, including the company’s:
• Houston hyperscale campus offering more than 500 MW of development potential across 250 acres in Houston’s energy corridor, anchored by dual on-site substations designed to support phased expansion at scale.
• Atlanta expansion in Covington, Georgia, delivering a 498,960-square-foot facility with 60 MW of critical IT capacity for a single hyperscale tenant deployment.
• Toronto expansion adding 4 MW of capacity to support continued hyperscale demand growth in the Canadian market.
Serverfarm executives emphasized that speed of delivery is now as critical as scale itself, particularly as hyperscalers race to bring GPU-intensive AI workloads online.
“This $3.0 billion facility provides the capital foundation to accelerate our hyperscale campus development pipeline at a time when speed to market is a competitive differentiator,” said CEO Avner Papouchado. “Our proven basis-of-design enables accelerated delivery timelines, allowing cloud providers and AI innovators to deploy GPU-intensive workloads when timing matters most.”
Manulife Investment Management, which backs the Serverfarm platform, underscored continued institutional appetite to deploy capital into data center infrastructure as AI demand reshapes long-term infrastructure investment strategies.
“We see attractive opportunities to deploy capital to develop data center infrastructure globally and are excited to continue to support the Serverfarm platform in executing its growth plans,” said Recep Kendircioglu, Global Head of Infrastructure at Manulife Investment Management.
Serverfarm, which operates across key global markets including Houston, Northern Virginia, Chicago, Atlanta, Los Angeles, London, Amsterdam, Tel Aviv, Moses Lake, and Toronto, has increasingly positioned its campuses around AI-ready designs, including deployment of closed-loop water-cooling systems across locations to support high-density AI workloads while minimizing water waste.
The result is a portfolio built to support both hyperscale cloud expansion and the rapid emergence of AI training and inference clusters requiring sustained high-density operation.
Texas and Appalachia Show How Energy Is Becoming the Primary Constraint on AI Infrastructure Deployment
If distributed infrastructure is scaling outward, Texas shows how far hyperscale development itself is scaling upward.
On Jan. 16, 2026, New Era Energy & Digital announced a partnership with Primary Digital Infrastructure to co-develop Texas Critical Data Centers (TCDC), a hyperscale campus of more than one gigawatt in Ector County near Odessa, Texas.
The campus is engineered specifically for next-generation hyperscale and AI compute demand, combining grid-supplied electricity with behind-the-meter generation solutions and offering substantial expansion potential in the Permian Basin energy corridor, one of North America’s most energy-rich regions.
E. Will Gray II, CEO of New Era Energy & Digital, described the partnership as a validation of the company’s infrastructure-first strategy.
“The formation of our partnership with Primary Digital is a watershed moment… bringing the critical expertise required to execute a development of this scale,” Gray said. “We remain on track to sign a hyperscale anchor tenant… and we believe this development will deliver significant and durable value.”
As lead capital partner and co-sponsor, Primary Digital Infrastructure brings both hyperscale tenant relationships and capital markets expertise needed to structure and finance a development of this magnitude while de-risking execution.
Leveraging deep relationships with global cloud and AI companies, the firm is positioned to help secure an anchor hyperscale tenant for the project’s initial phase while simultaneously arranging the multi-billion-dollar financing required for delivery.
“The next wave of hyperscale and AI infrastructure is being built where power is abundant, flexible, and economically advantaged,” said Bill Stein, Executive Managing Director and Chief Investment Officer at Primary Digital Infrastructure. “With an experienced, well-capitalized partner like New Era, together we will deliver a strategically located, hyperscale-ready campus designed to meet the demands of investment-grade tenants seeking reliable solutions for advanced computing needs.”
Primary Digital’s Texas momentum also includes participation in Stargate-related development, including a previously announced $15 billion joint venture with Crusoe Energy Systems and Blue Owl Capital to develop a 1.2-gigawatt AI campus in Abilene supported by more than $11.6 billion in debt and equity financing.
Momentum behind TCDC accelerated further as New Era announced a binding agreement to acquire Sharon AI’s 50% ownership interest in the project for $70 million, consolidating full ownership under New Era as development shifts from planning toward execution.
The transaction structure, reported as combining cash, deferred equity, and a senior secured promissory note, was designed to minimize shareholder dilution while enabling faster project execution.
In parallel, New Era closed on the acquisition of an additional 203 contiguous acres, expanding the campus footprint to 438 acres and strengthening its ability to support a multi-phase, gigawatt-scale AI and HPC development.
“Full ownership allows us to align capital with development, accelerating the project’s execution… As the campus now moves from planning into execution, we believe a simplified ownership structure is the right next step,” Gray said.
Together, these moves reinforce Texas’ position as one of the primary proving grounds for energy-anchored hyperscale AI infrastructure.
Beyond Texas: Energy-Centric AI Campuses Scale Further
Texas is not alone in redefining how compute and energy infrastructure are being co-developed.
In late January, Fidelis New Energy and 8090 Industries launched American Intelligence & Power Corporation (AIP Corp), a platform purpose-built to develop and operate fully integrated AI compute campuses paired with dedicated power microgrids.
The platform is anchored by the Monarch Compute Campus in Mason County, West Virginia, structured around as much as 8 gigawatts of planned generation capacity, combining natural gas generation and battery storage to supply AI workloads through a fully behind-the-meter microgrid.
Authorized under West Virginia’s HB2014 framework as an islanded microgrid utility, the project is designed to deliver high-reliability power for AI workloads without increasing costs for existing utility customers while enabling accelerated infrastructure deployment.
“AIP Corp was deliberately designed as an infrastructure-first platform focused on execution, reliability, and scale,” said Daniel Shapiro, CEO of AIP Corp.
Rayyan Islam, Co-Founder of 8090 Industries, framed the shift succinctly: “AI leadership is not a software problem. It is an energy and infrastructure problem.”
Backed by LuminArx Capital Management and other investors, AIP aims to replicate this integrated power-and-compute model nationally.
If Texas demonstrates how hyperscale AI development is accelerating today, projects like Monarch suggest how deeply power and compute may be integrated in the next phase of infrastructure expansion.
The New AI Infrastructure Model: Hub and Spoke
Viewed together, these developments reveal how AI infrastructure deployment is evolving beyond pure hyperscale expansion.
Hyperscale campuses provide training-scale compute and economic efficiency. Regional and edge deployments deliver latency, compliance, and sovereign control. Fiber and cloud platforms stitch them together.
This is not hyperscale versus edge. It is the emergence of a layered infrastructure stack in which centralized power and distributed intelligence increasingly operate together.
Where the Industry Is Heading Next
The AI infrastructure race is no longer simply about who can build the biggest campus or secure the largest power contracts. Increasingly, competitive advantage belongs to operators who can align power, capital, and deployment timelines, placing capacity where energy, regulation, and customer demand intersect.
The pattern is becoming clear across markets. Modular edge deployments extend AI capabilities closer to users and industrial operations, while energy-advantaged hyperscale campuses continue to scale the core infrastructure required to train and operate increasingly complex models.
In that sense, AI infrastructure is beginning to look less like a technology land grab and more, once again, like traditional infrastructure development: capital-intensive, power-constrained, and execution-driven.
And as 2026 unfolds, the companies best positioned to succeed may simply be those able to deliver capacity where it is actually possible to build.