AI Infrastructure Scales Out and Up: Edge Expansion Meets the Gigawatt Campus Era

The AI infrastructure boom is often framed around massive hyperscale campuses racing to secure gigawatts of power. But an equally important shift is happening in parallel: AI infrastructure is also becoming more distributed, modular, and sovereign, extending compute far beyond traditional data center hubs.

A wave of recent announcements across developers, infrastructure investors, and regional operators shows the market pursuing a dual strategy. On one end, developers are accelerating delivery of hyperscale campuses measured in hundreds of megawatts, and increasingly gigawatts, often located where power availability and energy economics offer structural advantage, and in some cases pairing compute directly with dedicated generation. On the other, providers are building increasingly capable regional and edge facilities designed to bring AI compute closer to users, industrial operations, and national jurisdictions.

Taken together, these moves point toward a future in which AI infrastructure is no longer purely centralized, but built around interconnected hub-and-spoke architectures combining energy-advantaged hyperscale cores with rapidly deployable edge capacity.

Recent developments across hyperscale developers, edge specialists, infrastructure investors, and regional operators illustrate how quickly this model is taking shape.

Sovereign AI Moves Beyond the Core

On Feb. 5, 2026, San Francisco-based Armada and European AI infrastructure builder Nscale signed a letter of intent to jointly deploy both large-scale and edge AI infrastructure worldwide.

The collaboration targets enterprise and public sector customers seeking sovereign, secure, geographically distributed AI environments. Nscale is building large AI supercomputer clusters globally, offering vertically integrated capabilities spanning power, data centers, compute, and software. Armada specializes in modular deployments through its Galleon data centers and Armada Edge Platform, delivering compute and storage into remote or infrastructure-poor environments.

The combined offering addresses a growing challenge: many governments and enterprises want AI capability deployed within their own jurisdictions, even where traditional hyperscale infrastructure does not yet exist.

“There is increasing demand from enterprises and governments for operational AI, and meeting that need requires infrastructure that is scalable, distributed, and ultimately sovereign,” said Josh Payne, Founder and CEO of Nscale. “By working with Armada, we will be able to offer customers a flexible foundation for deploying advanced AI workloads wherever they need to operate, without compromising performance, security, or control.”

The model envisions hyperscale campuses providing economic efficiency while Armada’s modular deployments, such as its megawatt-scale Leviathan Galleon units, extend sovereign compute into edge geographies.

“As AI adoption accelerates, organizations need infrastructure that can reach beyond centralized clusters, on Earth and even beyond,” said Dan Wright, Co-Founder and CEO of Armada. “One of Armada’s key differentiators is that we enable sovereign AI, with speed and scale. Partnering with Nscale allows us to extend our modular AI infrastructure into new global markets.”

Speed matters. Modular deployments can arrive far faster than greenfield builds, enabling sovereign AI capacity where full-scale campuses would take years.

Regional Markets Become Strategic Edge Nodes

Distributed AI infrastructure is also reshaping secondary and regional U.S. markets.

On Jan. 30, 2026, US Signal announced the acquisition of a data center in Aurora, Illinois, reinforcing its strategy to build a distributed, edge-focused national platform.

The facility strengthens US Signal’s Chicago-area footprint while enabling immediate investment to expand capacity to roughly 4 MW critical IT load (or 6 MW commercial capacity). The company will also deploy its OpenCloud private cloud and virtualization platform locally.
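For a sense of what roughly 4 MW of critical IT load translates to on the floor, a back-of-envelope sketch helps. The per-rack densities below are illustrative industry figures, not US Signal's actual design numbers:

```python
# Back-of-envelope sketch (illustrative densities, not US Signal's design
# figures): how many racks a given critical IT load can support.

def racks_supported(critical_it_load_mw: float, kw_per_rack: float) -> int:
    """Racks supportable by a critical IT load at a given per-rack density."""
    return int(critical_it_load_mw * 1000 // kw_per_rack)

# ~4 MW critical IT load (per the announcement) at illustrative densities:
for density_kw in (10, 20, 40):  # enterprise colo racks vs. dense AI inference racks
    print(f"{density_kw} kW/rack -> ~{racks_supported(4.0, density_kw)} racks")
```

The spread is the point: the same 4 MW supports hundreds of enterprise colocation racks but only on the order of a hundred dense AI inference racks, which is why power, not floor space, drives these expansions.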

“This facility aligns perfectly with our long-term growth strategy,” said Daniel Watts, CEO of US Signal. “We’re investing immediately to expand capacity and will deliver OpenCloud at scale, giving customers in this market a local, enterprise-grade private cloud option backed by our national network and proven operational model.”

The Aurora facility supports:

• Private and hybrid cloud deployments
• AI inference and latency-sensitive workloads
• Enterprise and carrier colocation

The site connects directly into US Signal-owned fiber routes and benefits from continued investment in ultra-dense fiber expansion and in-line amplification sites, allowing low-latency connectivity from edge to core.
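The latency case for edge sites like Aurora comes down to fiber physics. A minimal sketch, using the common rule of thumb that light propagates through glass at roughly 200,000 km/s (route distances here are hypothetical, not US Signal's actual fiber paths):

```python
# Hedged sketch: round-trip propagation delay over fiber, assuming the
# ~200,000 km/s rule of thumb for light in glass (refractive index ~1.5).
# Route distances are illustrative only.

FIBER_KM_PER_MS = 200.0  # ~200,000 km/s -> 200 km per millisecond, one way

def round_trip_ms(route_km: float) -> float:
    """Round-trip propagation delay in milliseconds over a fiber route."""
    return 2 * route_km / FIBER_KM_PER_MS

print(round_trip_ms(60))    # short metro hop: well under a millisecond
print(round_trip_ms(1200))  # long-haul hop to a distant hyperscale core
```

A metro-scale hop stays under a millisecond of round-trip propagation delay, while a long-haul hop to a remote hyperscale core adds an order of magnitude more before any queuing or processing, which is the structural argument for placing inference capacity regionally.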

Infrastructure investor Igneo Infrastructure Partners has committed more than $200 million to accelerate US Signal’s expansion across data centers, fiber, and cloud infrastructure.

“This acquisition and the immediate investment that follows reinforces our confidence in the platform and its ability to deliver long-term value,” said Michael Ryder, Partner and Co-Head of North America at Igneo Infrastructure Partners.

The takeaway is that regional markets are increasingly functioning as operational AI nodes rather than as secondary infrastructure markets.

Hyperscale Campuses Still Form the Backbone

While distributed deployments expand outward, centralized campuses continue to form the backbone of AI compute, particularly for model training and large-scale inference workloads that demand massive contiguous power delivery.

On Jan. 29, 2026, Manulife Investment Management-backed developer Serverfarm announced it secured a $3.0 billion credit facility, closed in December 2025, to accelerate hyperscale campus development across strategic North American markets.

Supported by a syndicate of 23 institutional lenders, the facility provides development and construction capital enabling Serverfarm to rapidly deliver AI-ready infrastructure for cloud providers and AI innovators requiring deployments at unprecedented scale and density.

The financing supports multiple hyperscale projects already advancing, including the company’s:

• Houston hyperscale campus offering more than 500 MW of development potential across 250 acres in Houston’s energy corridor, supported by dual on-site substations designed to support phased expansion at scale.
• Atlanta expansion in Covington, Georgia, delivering a 498,960-square-foot facility with 60 MW of critical IT capacity for a single hyperscale tenant deployment.
• Toronto expansion adding 4 MW of capacity to support continued hyperscale demand growth in the Canadian market.

Serverfarm executives emphasized that speed of delivery is now as critical as scale itself, particularly as hyperscalers race to bring GPU-intensive AI workloads online.

“This $3.0 billion facility provides the capital foundation to accelerate our hyperscale campus development pipeline at a time when speed to market is a competitive differentiator,” said CEO Avner Papouchado. “Our proven basis-of-design enables accelerated delivery timelines, allowing cloud providers and AI innovators to deploy GPU-intensive workloads when timing matters most.”

Manulife Investment Management, which backs the Serverfarm platform, underscored continued institutional appetite to deploy capital into data center infrastructure as AI demand reshapes long-term infrastructure investment strategies.

“We see attractive opportunities to deploy capital to develop data center infrastructure globally and are excited to continue to support the Serverfarm platform in executing its growth plans,” said Recep Kendircioglu, Global Head of Infrastructure at Manulife Investment Management.

Serverfarm, which operates across key global markets including Houston, Northern Virginia, Chicago, Atlanta, Los Angeles, London, Amsterdam, Tel Aviv, Moses Lake, and Toronto, has increasingly positioned its campuses around AI-ready designs, including deployment of closed-loop water-cooling systems across locations to support high-density AI workloads while minimizing water waste.

The result is a portfolio built to support both hyperscale cloud expansion and the rapid emergence of AI training and inference clusters requiring sustained high-density operation.

Texas, and Now Appalachia, Show How Energy Is Becoming the Primary Constraint on AI Infrastructure Deployment

If distributed infrastructure is scaling outward, Texas shows how far hyperscale development itself is scaling upward.

On Jan. 16, 2026, New Era Energy & Digital announced a partnership with Primary Digital Infrastructure to co-develop Texas Critical Data Centers (TCDC), an approximately one-gigawatt-plus hyperscale campus in Ector County near Odessa, Texas.

The campus is engineered specifically for next-generation hyperscale and AI compute demand, combining grid-supplied electricity with behind-the-meter generation solutions and offering substantial expansion potential in the Permian Basin energy corridor, one of North America’s most energy-rich regions.

E. Will Gray II, CEO of New Era Energy & Digital, described the partnership as a validation of the company’s infrastructure-first strategy.

“The formation of our partnership with Primary Digital is a watershed moment… bringing the critical expertise required to execute a development of this scale,” Gray said. “We remain on track to sign a hyperscale anchor tenant… and we believe this development will deliver significant and durable value.”

As lead capital partner and co-sponsor, Primary Digital Infrastructure brings both hyperscale tenant relationships and capital markets expertise needed to structure and finance a development of this magnitude while de-risking execution.

Leveraging deep relationships with global cloud and AI companies, the firm is positioned to help secure an anchor hyperscale tenant for the project’s initial phase while simultaneously arranging the multi-billion-dollar financing required for delivery.

“The next wave of hyperscale and AI infrastructure is being built where power is abundant, flexible, and economically advantaged,” said Bill Stein, Executive Managing Director and Chief Investment Officer at Primary Digital Infrastructure. “With an experienced, well-capitalized partner like New Era, together we will deliver a strategically located, hyperscale-ready campus designed to meet the demands of investment-grade tenants seeking reliable solutions for advanced computing needs.”

Primary Digital’s Texas momentum also includes participation in Stargate-related development, including a previously announced $15 billion joint venture with Crusoe Energy Systems and Blue Owl Capital to develop a 1.2-gigawatt AI campus in Abilene supported by more than $11.6 billion in debt and equity financing.

Momentum behind TCDC accelerated further as New Era announced a binding agreement to acquire Sharon AI’s 50% ownership interest in the project for $70 million, consolidating full ownership under New Era as development shifts from planning toward execution.

The transaction structure, reported as combining cash, deferred equity, and a senior secured promissory note, was designed to minimize shareholder dilution while enabling faster project execution.

In parallel, New Era closed on the acquisition of an additional 203 contiguous acres, expanding the campus footprint to 438 acres and strengthening its ability to support a multi-phase, gigawatt-scale AI and HPC development.

“Full ownership allows us to align capital with development, accelerating the project’s execution… As the campus now moves from planning into execution, we believe a simplified ownership structure is the right next step,” Gray said.

Together, these moves reinforce Texas’ position as one of the primary proving grounds for energy-anchored hyperscale AI infrastructure.

Beyond Texas: Energy-Centric AI Campuses Scale Further

Texas is not alone in redefining how compute and energy infrastructure are being co-developed.

In late January, Fidelis New Energy and 8090 Industries launched American Intelligence & Power Corporation (AIP Corp), a platform purpose-built to develop and operate fully integrated AI compute campuses paired with dedicated power microgrids.

The platform is anchored by the Monarch Compute Campus in Mason County, West Virginia, structured around as much as 8 gigawatts of planned generation capacity, combining natural gas generation and battery storage to supply AI workloads through a fully behind-the-meter microgrid.
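The gas-plus-battery pairing follows a recognizable sizing logic: turbines supply firm baseload while batteries bridge short dispatch gaps. A hedged arithmetic sketch (the numbers are illustrative; Monarch's actual design mix is not disclosed here):

```python
# Illustrative microgrid ride-through arithmetic (hypothetical figures, not
# the Monarch campus's actual design): battery energy needed to carry an
# IT load while gas generation re-dispatches.

def ride_through_mwh(it_load_mw: float, minutes: float) -> float:
    """Battery energy (MWh) required to carry a load for a ride-through window."""
    return it_load_mw * minutes / 60.0

# e.g., bridging a hypothetical 1,000 MW AI load for a 15-minute window:
print(ride_through_mwh(1000, 15))  # -> 250.0 MWh
```

Even a short ride-through window at gigawatt-class load implies hundreds of megawatt-hours of storage, which is why battery capacity is a first-order design parameter in behind-the-meter campuses rather than an afterthought.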

Authorized under West Virginia’s HB2014 framework as an islanded microgrid utility, the project is designed to deliver high-reliability power for AI workloads without increasing costs for existing utility customers while enabling accelerated infrastructure deployment.

“AIP Corp was deliberately designed as an infrastructure-first platform focused on execution, reliability, and scale,” said Daniel Shapiro, CEO of AIP Corp.

Rayyan Islam, Co-Founder of 8090 Industries, framed the shift succinctly: “AI leadership is not a software problem. It is an energy and infrastructure problem.”

Backed by LuminArx Capital Management and other investors, AIP aims to replicate this integrated power-and-compute model nationally.

If Texas demonstrates how hyperscale AI development is accelerating today, projects like Monarch suggest how deeply power and compute may be integrated in the next phase of infrastructure expansion.

The New AI Infrastructure Model: Hub and Spoke

Viewed together, these developments reveal how AI infrastructure deployment is evolving beyond pure hyperscale expansion.

Hyperscale campuses provide training-scale compute and economic efficiency. Regional and edge deployments deliver latency, compliance, and sovereign control. Fiber and cloud platforms stitch them together.

This is not hyperscale versus edge. It is the emergence of a layered infrastructure stack in which centralized power and distributed intelligence increasingly operate together.
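The placement logic of that layered stack can be sketched in a few lines: training lands on the energy-advantaged hub, while inference prefers the nearest edge node that satisfies its latency and sovereignty constraints. All site names and figures below are hypothetical:

```python
# Illustrative sketch of the hub-and-spoke placement logic described above.
# Site names, jurisdictions, and latencies are hypothetical.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    kind: str          # "hub" or "edge"
    jurisdiction: str
    latency_ms: float  # round-trip latency from the workload's users

def place(workload: str, jurisdiction: str, max_latency_ms: float,
          sites: list[Site]) -> Site:
    """Training always lands on a hub; inference prefers the lowest-latency
    edge node in the required jurisdiction, falling back to the hub."""
    hub = next(s for s in sites if s.kind == "hub")
    if workload == "training":
        return hub
    edges = [s for s in sites if s.kind == "edge"
             and s.jurisdiction == jurisdiction
             and s.latency_ms <= max_latency_ms]
    return min(edges, key=lambda s: s.latency_ms) if edges else hub

sites = [
    Site("gigawatt-hub", "hub", "US", 40.0),
    Site("edge-aurora", "edge", "US", 4.0),
    Site("edge-frankfurt", "edge", "DE", 6.0),
]
print(place("training", "US", 10.0, sites).name)   # -> gigawatt-hub
print(place("inference", "DE", 10.0, sites).name)  # -> edge-frankfurt
```

The fallback branch captures the Armada/Nscale pitch in miniature: where no compliant edge node exists yet, workloads route back to the core until modular capacity arrives in-jurisdiction.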

Where the Industry Is Heading Next

The AI infrastructure race is no longer simply about who can build the biggest campus or secure the largest power contracts. Increasingly, competitive advantage belongs to operators who can align power, capital, and deployment timelines, placing capacity where energy, regulation, and customer demand intersect.

The pattern is becoming clear across markets. Modular edge deployments extend AI capabilities closer to users and industrial operations, while energy-advantaged hyperscale campuses continue to scale the core infrastructure required to train and operate increasingly complex models.

In that sense, AI infrastructure is beginning to look less like a technology land grab and more like traditional infrastructure development again: capital-intensive, power-constrained, and execution-driven.

And as 2026 unfolds, the companies best positioned to succeed may simply be those able to deliver capacity where it is actually possible to build.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Quantum Elements cuts quantum error rates using AI-powered digital twin

“That’s pretty clever, actually,” Sutor says. “It’s a little microwave pulse. That fixes some of the errors.” The Quantum Elements paper specifically addressed quantum error correction in IBM’s 127-qubit superconducting processor. But these techniques might also be able to be generalized to other types of quantum computers, Sutor says. And

Read More »

How AWS is reinventing the telco revenue model

Consider what that means for the mobile operator and its relationship with its customers. Instead of selling a generic 5G pipe with a static SLA, a telco can now sell a dynamic, guaranteed slice for a specific use case—say, a remote robotic surgery setup or a high-density, low-latency industrial IoT

Read More »

What’s the biggest barrier to AI success?

AI’s challenge starts with definition. We hear all the time about how AI raises productivity, and many have experienced that themselves. But what, exactly, does “productivity” mean? To the average person, it means they can do things with less effort, which they like, so it generates a lot of favorable

Read More »

Trump Administration Keeps Coal Plant Open to Ensure Affordable, Reliable and Secure Power in the Northwest

Emergency order addresses critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access. WASHINGTON—U.S. Secretary of Energy Chris Wright today issued an emergency order to ensure Americans in the Northwestern region of the United States have access to affordable, reliable and secure electricity. The order directs TransAlta to keep Unit 2 of the Centralia Generating Station in Centralia, Washington available to operate. Unit 2 of the coal plant was scheduled to shut down at the end of 2025. The reliable supply of power from the Centralia plant is essential to maintaining grid stability across the Northwest, and this order ensures that the region avoids unnecessary blackout risks and costs. “The last administration’s energy subtraction policies had the United States on track to likely experience significantly more blackouts in the coming years — thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump administration will continue taking action to keep America’s coal plants running so we can stop the price spikes and ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” Thanks to President Trump’s leadership, coal plants across the country are reversing plans to shut down. On December 16, 2025, Secretary Wright issued an emergency order directing TransAlta to keep Unit 2 (729.9 MW) available to operate.According to DOE’s Resource Adequacy Report, blackouts were on track to potentially increase 100 times by 2030 if the U.S. continued to take reliable power offline as it did during the Biden administration. This order is in effect beginning on March 17, 2026, through June 14, 2026. ### 

Read More »

Brent retreats from highs after Trump signals Iran war nearing end

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } Oil futures eased from recent highs Tuesday as markets reacted to comments from US President Donald Trump suggesting the war with Iran may be nearing its conclusion, easing concerns about prolonged disruptions to Middle East crude supplies. Brent crude had climbed above $100/bbl amid escalating tensions in the region and fears that the war could prolong disruptions to shipments through the Strait of Hormuz—one of the world’s most critical energy chokepoints and a transit route for roughly one-fifth of global oil supply. Prices pulled back after Pres. Trump said the war was “almost done,” prompting traders to reassess the risk premium that had built into crude markets during the latest escalation. 
The earlier gains were driven by the fact that the war had disrupted tanker traffic in the Strait of Hormuz, raising concerns about wider supply disruptions from major Gulf oil producers. While the latest remarks helped calm markets, analysts note that geopolitical risks remain elevated and price volatility is likely to persist as traders monitor developments in the region. Any renewed escalation could quickly send crude prices higher again.

Read More »

Southwest Arkansas lithium project moves toward FID with 10-year offtake deal

Smackover Lithium, a joint venture between Standard Lithium Ltd. and Equinor, through subsidiaries of Equinor ASA, signed the first commercial offtake agreement for the South West Arkansas Project (SWA Project) with commodities group Trafigura Trading LLC. Under the terms of a binding take-or-pay offtake agreement, the JV will supply Trafigura with 8,000 metric tonnes/year (tpy) of battery-quality lithium carbonate (Li2CO3) over a 10-year period, beginning at the start of commercial production. Smackover Lithium is expected to achieve final investment decision (FID) for the project, which aims to use direct lithium extraction technology to produce lithium from brine resources in the Smackover formation in southern Arkansas, in 2026, with first production anticipated in 2028. The project encompasses about 30,000 acres of brine leases in the region, with the initial phase of project development focused on production from the 20,854-acre Reynolds Brine Unit.   Front-end engineering design was completed in support of a definitive feasibility study with a principal recommendation that the project is ready to progress to FID.  While pricing terms of the Trafigura deal were kept confidential, Standard Lithium said they are “structured to support the anticipated financing for the project.” The JV is seeking to finalize customer offtake agreements for roughly 80% of the 22,500 tonnes of annual nameplate lithium carbonate capacity for the initial phase of the project. This agreement represents over 40% of the targeted offtake commitments. Formed in 2024, Smackover Lithium is developing multiple DLE projects in Southwest Arkansas and East Texas. Standard Lithium is operator of the projecs with 55% interest. Equinor holds the remaining 45% interest.

Read More »

Equinor makes oil and gas discoveries in the North Sea

Equinor Energy AS discovered oil in the Troll area and gas and condensate in the Sleipner area of the North Sea. Byrding C discovery well 35/11-32 S in production license (PL) 090 HS was made 5 km northwest of Fram field in Troll. The well was drilled by the COSL Innovator rig in 373 m of water to 3,517 m TVD subsea. It was terminated in the Heather formation from the Middle Jurassic. The primary exploration target was to prove petroleum in reservoir rocks from the Late Jurassic deep marine equivalent to the Sognefjord formation. The secondary target was to prove petroleum and investigate the presence of potential reservoir rocks in two prospective intervals from the Middle Jurassic in deep marine equivalents to the Fensfjord formation. The well encountered a 22-m oil column in sandstone layers in the Sognefjord formation with a total thickness of 82 m, of which 70 m was sandstone with moderate to good reservoir properties. The oil-water contact was encountered. The secondary exploration target in the Fensfjord formation did not prove reservoir rocks or hydrocarbons. The well was not formation-tested, but data and samples were collected. The well has been permanently plugged. Preliminary estimates indicate the size of the discovery is 4.4–8.2 MMboe. Oil discovered in Byrding C will be produced using existing or future infrastructure in the area. The Frida Kahlo discovery was drilled from the Sleipner B platform in production license PL 046 northwest of Sleipner Vest and is estimated to contain 5–9 MMboe of gas and condensate. The well will be brought on stream as early as April. The four most recent exploration wells in the Sleipner area, drilled over a 3-month period, include Lofn, Langemann, Sissel, and Frida Kahlo. All have all proven gas and condensate in the Hugin formation, with combined estimated

Read More »

IEA launches record strategic oil release as Middle East war disrupts supply

The International Energy Agency (IEA) on Mar. 11 approved the largest emergency oil stock release in its history, making 400 million bbl available from member-country reserves in response to market disruptions tied to the war in the Middle East. The coordinated action, agreed unanimously by the IEA’s 32 member countries, is intended to ease supply pressure and temper price volatility as crude markets react to disrupted flows through the Strait of Hormuz. “The conflict in the Middle East is having significant impacts on global oil and gas markets, with major implications for energy security, energy affordability and the global economy for oil,” IEA executive director Fatih Birol said. The release more than doubles the previous IEA record set in 2022, when member countries collectively made 182.7 million bbl available following Russia’s invasion of Ukraine. Under the IEA system, member countries are required to maintain emergency oil stocks equal to at least 90 days of net imports, giving the agency a mechanism to respond when severe disruptions threaten global supply. The move comes after crude prices surged amid concerns that the US-Iran war could lead to prolonged disruption of exports from the Gulf. Despite the planned stock release, traders remain uncertain about whether reserve barrels alone will be enough to offset losses if the disruption persists. IEA said the emergency barrels will be supplied to the market from government-controlled and obligated industry stocks held across member countries. The action marks the sixth coordinated stock release in the agency’s history and underscores the seriousness of the current supply shock. Earlier the day, Japanese Prime Minister Sanae Takaichi said that Japan might start using its strategic oil reserves as early as next week, citing Japan’s unusually high dependence on Middle Eastern crude oil.

Read More »

Infographic: Strait of Hormuz energy trade 2025

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } Coordinated attacks Feb. 28 by the US and Israel on Iran and the since-escalated conflict have nearly halted shipping traffic through the Strait of Hormuz, which typically carries about 20% of the world’s crude oil and natural gas. OGJ Statistics Editor Laura Bell-Hammer compiled data to showcase 2025 energy trade through the critical transit chokepoint.   <!–> –> <!–> ]–> <!–> ]–>

Read More »

Available’s $5B Project Qestrel aims to roll out 1,000 AI-ready edge data centers by year’s end

Available is partnering with wireless infrastructure company Crown Castle, which owns, operates, and leases more than 40,000 cell towers and roughly 90,000 miles of fiber. “Our strategy is to industrialize and modularize deployment by building on telecom co-location and pre-existing physical infrastructure rather than greenfield hyperscale construction,” said Medina. Some initial sites are live (the company declined to say how many, due to “final contractual and commissioning milestones”) and 30 cities are expected to come online by early July. Available is prioritizing dense urban corridors, and early adoption has begun in “major Northeast corridors with a path to nationwide rollout,” Medina explained. The company’s infrastructure will be used by Strata Expanse, which specializes in 60 to 90 day AI data center deployments, and incorporated into Strata’s new full-stack, end-to-end Amphix AI Infrastructure Platform. The neocloud architecture will run up to 48 GPUs per site, bringing AI inferencing to the edge. Many sites will be pre-integrated with IBM’s watsonx; others will be AI-agnostic, allowing enterprises to run their preferred models. According to Available, Project Qestrel will provide:

Read More »

Cisco extends its Secure AI Factory with Nvidia

“Customers can now control and manage this environment and operate it like it was a traditional data center fabric,” Wollenweber said. “The ability to bring it under the same Nexus umbrella is actually a huge selling point for AI customers, because their IT infrastructure folks, their operational people that are running the network, already understand how to use these Nexus tools, and so they can now add AI workloads and kind of accelerated computing technologies like GPUs, but in that same Nexus umbrella,” Wollenweber said.  “As Al becomes operational and distributed, complexity becomes the enemy of scale. Fragmented architectures force customers to manage integration, policy enforcement, observability, and security across silos, increasing cost and slowing innovation,” said Wollenweber. “Architecting silicon, networking, compute, security, and Al software into a cohesive system gives organizations a unified operating model, stronger performance guarantees, and embedded trust.” Those are the driving ideas around Cisco Secure AI Factory with Nvidia, Wollenweber said. Introduced a year ago, Secure AI Factory with Nvidia integrates Cisco’s Hypershield and AI Defense packages to help protect the development, deployment, and use of AI models and applications. Hypershield uses AI to dynamically refine security policies based on application identity and behavior. It automates policy creation, optimization, and enforcement across workloads. AI Defense discovers the various models being used in a customer’s AI development and uses four features to help customers enforce AI protection: AI access, AI cloud visibility, AI model and application validation, and AI runtime protection. Cisco integrates Hybrid Mesh Firewall technology On the security side, Cisco said it will embed its Hybrid Mesh Firewall technology to allow for security policy enforcement on Nvidia BlueField data processing units (DPU) that are embedded in Nvidia GPU servers connected to Cisco Nexus One fabrics. 
Cisco Hybrid Mesh Firewall offers a distributed security fabric

Read More »

Middle East war fosters concerns about physical data center security

The most common issue that Guidepost discusses with its clients is insider threats, which can come from anyone who is rightfully permitted into your data center. Data centers have very strict rules regarding the movement of visitors, but employees have more or less free rein of the place. “Insider threat could be someone simply putting a USB stick in a server or having access to a data device that they’re not supposed to,” he said. “A threat actor could potentially cause harm within the facility, whether that’s mechanical, electrical, plumbing spaces or the data halls themselves is our number one preventative item that we’re trying to thwart.”

When it comes to external threats, Guidepost guards against vehicle-borne IEDs and vehicle ramming, even if it’s accidental. That’s why data centers have high, anti-climb perimeter fences, multi-layered gates, and vehicle barriers to keep unwanted vehicles away from the facility. “It’s a lot of what we call Crime Prevention Through Environmental Design,” said Bekisz. “It’s a theory that we utilize in our industry for ensuring that we are detecting and thwarting individuals before they are willing to commit some type of offensive action or some type of unwanted behavior.” That includes simple things like getting the lighting right, or reducing the visibility of the data center with shrubs, trees, and berms, used in concert with physical preventative devices.

Drones are a growing problem, even if they are not being used in kamikaze attacks. Bekisz said the only thing you can do is put in drone detection, so you have some type of device monitoring the air around your facility, and then call for support from local emergency services.

Read More »

Palantir partners with Nvidia to streamline AI data center deployment

This collaboration, built on the Palantir AI OS reference architecture, grants enterprises full control over their data, AI models, and applications while supporting the use of open-source AI models and related data acceleration tools. It is particularly critical for customers with existing GPU infrastructure, latency-sensitive workflows, data sovereignty requirements, and high geographic distribution. “From our first deployment with the United States government and in every deployment since, our software has had to meet the moment in the most complex and sensitive environments where customers must maintain control,” said Akshay Krishnaswamy, Palantir’s chief architect, in a statement. “Together with Nvidia — and building on many customers’ existing investments — we are proud to deliver a fully integrated AI operating system that is optimized for Nvidia accelerated compute infrastructure and enables customers to realize the promise of on-premises, edge, and sovereign cloud deployments,” he added. Sovereign AI is an emerging market that represents a country’s efforts to develop and maintain control of its own AI, using its own data, and keeping that data within its borders.

Read More »

Who’s in the data-center space race?

But not everyone is that optimistic. According to Gartner, space-based data centers won’t be useful for decades, so companies should focus on expanding capacity down here on Earth. “I honestly think the idea with the current landscape of putting data centers in space is ridiculous,” OpenAI CEO Sam Altman told The Indian Express in February. Current satellite computing can’t easily scale to data centers, agrees Holger Mueller, an analyst at Constellation Research. “Weight is still the restriction,” he says. “It’s the equivalent of you buying a tablet or small laptop to travel across Latin America versus putting in a data center in the Amazon. Different power requirements, investment, totally different setup.” Then there are issues like damaged solar panels from meteorite storms and satellite debris, he adds. “You would have to pay for operational redundancy, which is further investment.” “Data centers will be built where they are affordable,” he says. “I don’t see space happening soon. Remember the Microsoft submerged one? Crickets…” But he agrees that solar power is nice, though the sun is only visible from one side of the planet at any given time. And space is cold, he says.

Cooling down in outer space

In fact, space is very cold. Close to absolute zero cold. But vacuum is also a great insulator, and there’s no air to move the heat around. “You can’t convect heat away,” says Richard Bonner, CTO at Accelsius, a liquid cooling company. Bonner has worked on NASA research projects about the challenge of cooling in space and is very familiar with the problem. A small proportion of the heat might be turned back into useful electricity, but that’s not really a solution, he says, because computer chips don’t get quite that hot. Instead, heat is radiated. When an object warms up, it generates
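The radiative-cooling constraint Bonner describes can be roughed out with the Stefan–Boltzmann law: with no air, a radiator sheds heat only in proportion to the fourth power of its temperature. The sketch below is illustrative only; the radiator temperature, emissivity, and IT load are assumed values for the sake of the example, not figures from the article.

```python
# Back-of-the-envelope radiator sizing for an orbital data center.
# In vacuum, net heat rejection per unit area follows the Stefan-Boltzmann law:
#   P/A = emissivity * sigma * (T_radiator^4 - T_sink^4)

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiative_flux(t_radiator_k: float, emissivity: float = 0.9,
                   t_sink_k: float = 3.0) -> float:
    """Net heat rejected per square meter of radiator, in W/m^2."""
    return emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)

def radiator_area(it_load_w: float, t_radiator_k: float) -> float:
    """Radiator area needed to reject a given IT load, in m^2."""
    return it_load_w / radiative_flux(t_radiator_k)

# Assumed example: reject 1 MW of IT load with a 330 K (~57 C) radiator.
flux = radiative_flux(330.0)       # roughly 600 W per square meter
area = radiator_area(1e6, 330.0)   # on the order of 1,700 m^2 of radiator
print(f"{flux:.0f} W/m^2 -> {area:.0f} m^2 for 1 MW")
```

Under these assumptions a single megawatt demands radiator panels covering well over a thousand square meters, which is the practical force behind Bonner’s point that you cannot simply convect the heat away.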

Read More »

Community Opposition Emerges as New Gatekeeper for AI Data Center Expansion

The rapid global buildout of AI infrastructure is colliding with a new constraint that hyperscalers cannot solve with capital or GPUs: local opposition. In the first months of 2026, community resistance has already begun reshaping the development pipeline. A February analysis by Sightline Climate estimates that 30–50 percent of the data center capacity expected to come online in 2026 may not be delivered on schedule, reflecting a growing set of constraints that now include power availability, permitting challenges, and increasingly organized local opposition.

The financial stakes are already substantial. Recent reporting indicates that tens of billions of dollars in planned data center development have been delayed or halted amid community pushback, including an estimated $98 billion worth of projects delayed or blocked in a single quarter of 2025, according to research cited by Data Center Watch.

What had been framed throughout 2024 and 2025 as an inevitable expansion of hyperscale campuses, gigawatt-scale power agreements, and AI “factory” clusters is now encountering a different kind of gatekeeper: the communities expected to host the infrastructure.

The shift is already visible in project outcomes. Across the United States, multiple projects were canceled, blocked, or fundamentally reshaped in the opening months of 2026 due to organized local opposition. Reporting from The Guardian found that 26 data center projects were canceled in December and January, compared with just one cancellation in October, suggesting that community resistance campaigns are increasingly capable of stopping projects before construction begins. At the same time, local governments are responding to community pressure with moratoriums, zoning restrictions, and permitting delays that can stall projects long enough to jeopardize financing or push developers to seek more favorable jurisdictions.
While opposition to data center development is not new, the scale, coordination, and success rate of these efforts suggest a structural shift in how

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has become a regular at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
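The LLM-as-judge idea mentioned above can be sketched in a few lines: several models each render a verdict on a candidate answer, and a majority vote decides. This is a model-agnostic illustration, not any provider’s API; the judge functions here are stand-in callables, and in practice each would wrap a call to a different model.

```python
# Minimal sketch of the "LLM as a judge" pattern: multiple cheap judge
# models score a candidate answer, and the majority verdict wins.
from collections import Counter
from typing import Callable, List

# A judge maps (question, answer) to a verdict string such as "pass"/"fail".
Judge = Callable[[str, str], str]

def majority_verdict(question: str, answer: str, judges: List[Judge]) -> str:
    """Collect a verdict from every judge and return the most common one."""
    votes = Counter(judge(question, answer) for judge in judges)
    return votes.most_common(1)[0][0]

# Stand-in judges with different (hypothetical) strictness heuristics;
# real judges would be prompted LLM calls, not string checks.
lenient = lambda q, a: "pass" if a else "fail"
strict = lambda q, a: "pass" if len(a) > 20 else "fail"
moderate = lambda q, a: "pass" if "because" in a else "fail"

verdict = majority_verdict(
    "Why use three judges?",
    "Voting smooths out any single model's bias because errors rarely align.",
    [lenient, strict, moderate],
)
print(verdict)  # "pass"
```

Using an odd number of judges avoids ties, and mixing models from different providers reduces the chance that all judges share the same blind spot, which is the rationale behind running three or more as they get cheaper.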

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), all of which had already released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »