Podcast: Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers

In the latest episode of the Data Center Frontier Show podcast, DCF Editor-in-Chief Matt Vincent sits down with Phill Lawson-Shanks, Chief Innovation Officer at Aligned Data Centers, for a wide-ranging discussion that touches on some of the most pressing trends and challenges shaping the future of the data center industry.

From the role of nuclear energy and natural gas in addressing the sector’s growing power demands to the rapid expansion of Aligned’s operations in Latin America (LATAM), Lawson-Shanks provides deep insight into where the industry is headed.

Scaling Sustainability: Tracking Embodied Carbon and Scope 3 Emissions

A key focus of the conversation is sustainability, where Aligned continues to push boundaries in carbon tracking and energy efficiency. Lawson-Shanks highlights the company’s commitment to monitoring embodied carbon—an effort that began four years ago and has since positioned Aligned as an industry leader.

“We co-authored and helped found the Climate Accord with iMasons—taking sustainability to a whole new level,” he notes, emphasizing how Aligned is now extending its carbon traceability standards to ODATA’s facilities in LATAM. By implementing lifecycle assessments (LCAs) and tracking Scope 3 emissions, Aligned aims to provide clients with a detailed breakdown of their environmental impact.

“The North American market is still behind in lifecycle assessments and environmental product declarations. Where gaps exist, we look for adjacencies and highlight them—helping move the industry forward,” Lawson-Shanks explains.

The Nuclear Moment: A Game-Changer for Data Center Power

One of the most compelling segments of the discussion revolves around the growing interest in nuclear energy—particularly small modular reactors (SMRs) and microreactors—as a viable long-term power solution for data centers. Lawson-Shanks describes the recent industry buzz surrounding Oklo’s announcement of a 12-gigawatt deployment with Switch as a significant milestone, calling the move “inevitable.”

“There are dozens of nuclear plants operating in the U.S. today, but people just don’t pay much attention to them,” he says. “Companies like Oklo are designing advanced modular reactors that are walk-away safe, reuse spent fuel, and eliminate the risks associated with traditional light-water reactors. This is the path forward.”

However, he acknowledges that the widespread adoption of nuclear will take time, given the regulatory hurdles of the Nuclear Regulatory Commission (NRC) and the challenges of getting sites certified. Still, he remains optimistic: “We need this, and as an industry, we’re pre-buying energy because we see the challenges ahead.”

Bridging the Energy Gap with Natural Gas and Hydrogen

While nuclear is a long-term solution, data centers need reliable power sources today. Lawson-Shanks sees natural gas as a practical interim solution, provided emissions can be mitigated. He also points to hydrogen as an emerging technology with potential, though challenges remain.

“Hydrogen is really an energy transportation methodology rather than an energy source,” he explains. “It’s highly corrosive, and the infrastructure isn’t fully in place yet, but it’s something we’re closely monitoring.”

He predicts that natural gas reciprocating engines will serve as a bridge solution until nuclear modules become widely available. “Once we reach steady-state nuclear power, those gas engines could replace diesel generators, which we all want to phase out,” he says.

Explosive Growth in LATAM and the Evolution of Aligned’s Global Strategy

The conversation also covers Aligned’s expansion into Latin America following its acquisition of ODATA. Lawson-Shanks describes the region as a booming market, particularly in Brazil, where Aligned has access to renewable energy through its investment in wind farms.

“LATAM is an enormous growth market, and our waterless cooling system is ideal for places like Santiago, where water scarcity makes evaporative cooling unfeasible,” he explains.

Aligned is integrating its advanced cooling technologies—such as Delta³ and DeltaFlow—into ODATA’s new facilities, ensuring that sustainability remains a core component of their LATAM operations.

Innovating Beyond Cooling: The Future of Heat Reuse

Another forward-looking topic is Aligned’s interest in heat reuse, an area where Lawson-Shanks sees significant potential for innovation. Through its partnership with QScale in Canada, Aligned is exploring methods to capture and repurpose waste heat from data centers for other applications.

“Their heat reuse strategy is really interesting, and we’re looking at how we can implement similar solutions in North America,” he says, hinting at future developments to come.

Looking Ahead: A Future Shaped by Innovation and Sustainability

As the conversation wraps up, it’s clear that Lawson-Shanks sees the data center industry at an inflection point. The combination of sustainability commitments, new energy technologies, and rapid global expansion is forcing companies to rethink traditional models and embrace innovation at an unprecedented scale.

“We’ve always fought against the idea that data centers have to be built the same way they were in the 1970s,” he says. “We’re constantly redesigning, rethinking how we procure energy, and pushing the industry forward.”

With Aligned continuing to lead the charge in sustainability, energy innovation, and international expansion, the insights shared in this episode offer a compelling look at the challenges and opportunities ahead for the data center industry.

Here’s a timeline of the podcast’s key moments:

  • After introductions, exciting news about Lawson-Shanks and Aligned joining the 2025 DCF Trends Summit Editorial Advisory Board is shared. 0:02
  • Lawson-Shanks discusses the industry’s sustainability focus. The impact of ChatGPT on market dynamics is highlighted. 2:21
  • The rapid growth of cloud deployment is highlighted. Challenges in supply chain management due to factory shutdowns are discussed. 3:26
  • The emergence of agentic AI systems is brought up. The importance of proximity to cloud instances for effective data processing is emphasized. 4:49
  • A potential edge boom in 2025 is speculated upon. The construction of facilities for AI inference aligned with cloud interests is questioned. 7:06
  • Lawson-Shanks explains how a significant land grab for data center space has occurred. He describes how existing data centers are unable to accommodate new high-density paths. 8:20
  • Lawson-Shanks explains how Aligned’s design architecture includes adaptive data center features. High-density cooling solutions are being implemented with both liquid and air. 8:54
  • The demand for technology is increasing exponentially, and more space and technology will be required to meet future needs. 11:44
  • An overview of Open Compute Project (OCP) architecture is provided. The architecture includes discrete components for flexibility. 13:22
  • The importance of adhering to OCP standards is emphasized. Such adherence ensures safety and efficiency in data center operations. 14:09
  • A discussion about the critical role of data centers in industrial revolutions is presented. Data centers are described as essential infrastructure for modern technology. 17:14
  • Discussion now centers on the future of nuclear energy. The potential for small modular reactors is highlighted. 18:42
  • The importance of addressing public fears about radiation is emphasized. The benefits of advanced reactor designs are noted. 19:55
  • Concerns about energy transmission infrastructure are raised. The discussion notes that building new transmission lines can take decades. 21:16
  • Natural gas is discussed as a near-green energy source. Mitigation strategies for emissions are mentioned. 22:54
  • Hydrogen’s role as an energy transportation method is explored. The challenges of biofuel supply and infrastructure are highlighted. 23:36
  • Innovative approaches to data center design and energy procurement are emphasized. The importance of adapting to new methodologies is noted. 25:15
  • The need for tracking embodied carbon is highlighted. The discussion reveals how this initiative has been ongoing for four years and has led to significant developments. 27:28
  • The expansion of Aligned’s carbon tracking to ODATA in Latin America is discussed. This includes providing clients with lifecycle assessments and environmental product declarations. 27:53
  • The growth of the market in Latin America, especially in Brazil, is emphasized. The presence of green energy sources, such as wind farms, is noted as a positive factor. 29:24

DCF Show Podcast Quotes from Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers

On Market Demand and the Evolution of Data Centers

  • “Existing data centers in major metros are largely full, and many weren’t designed for the high-density workloads we’re seeing today.”
  • “The industry went through a phase where we were just stamping out the same boxes—buying land, building infrastructure. Now, there are real engineering challenges again, and that’s exciting.”
  • “We’re in an unprecedented time. The use of technology isn’t slowing down—it’s accelerating, and we need more space, more innovation, more infrastructure to support it.”
  • “AI isn’t just a trend—it’s a fundamental shift, and data centers have to evolve to support that scale.”

On Aligned’s Adaptive Data Center Architecture

  • “We designed our adaptive data center architecture so that we can integrate both air and liquid cooling seamlessly.”
  • “Our Delta Cube arrays allow us to do high-density cooling with just air. But for the foreseeable future, we will need both air and liquid cooling.”
  • “Liquid to the chip removes most of the heat—maybe 70-80%—but there are still DIMMs, storage arrays, and network components that require airflow.”
  • “We’re building infrastructure that has to last 20 to 30 years. That means designing for today’s workloads while being adaptable for future technologies.”

On Engineering Challenges and Innovation

  • “We’re designing for everything from 50 megawatts with air to 360 megawatts with liquid cooling, all in a redundant fashion.”
  • “We’re rethinking everything—electrical infrastructure, cooling, heat rejection, and even heat reuse. There are exciting possibilities ahead.”
  • “The reality is, these racks are huge now. They come pre-populated, they’re heavy, and they need to be moved safely. Many older data centers just weren’t designed with that in mind.”

On the Open Compute Project (OCP) and Industry Standards

  • “OCP started with Facebook—now Meta—disaggregating servers into their core components. It’s changed the way hyperscalers build infrastructure.”
  • “We worked closely with OCP to help define and ratify a data center standard for hyperscalers. That means clients know our facilities conform to those specifications from the start.”
  • “Something as simple as ensuring the right door heights, corridor angles, and loading capabilities makes a huge difference when deploying large-scale infrastructure.”

On Industry Leadership and Open Innovation

  • “All boats rise. We lead, but we don’t want to be exclusive—we want to pull the industry forward with us.”
  • “I started at Compaq, and they had a philosophy: Identify gaps in the market, solve them, patent the solution, and then release it to the industry. We take the same approach—innovation that benefits everyone.”
  • “Data centers are the engine of the fourth and now the fifth industrial revolution. They are critical infrastructure for everything we do.”

On the Growing Role of Nuclear Energy in Data Centers

  • “I think it’s inevitable. Absolutely inevitable.”
  • “There are dozens of nuclear plants across the U.S., but people just don’t pay that much attention to them.”
  • “I personally love the advanced modular reactors—Oklo in particular. They reuse spent fuel, they’re walk-away safe, and there’s no pressurization risk.”
  • “You could hug one of these things for a year and receive less radiation than I got flying across the country last night.”
  • “The biggest challenge isn’t generation—it’s transmission. It takes about 12 years to build out the infrastructure to actually pass electrons.”
  • “Some of the high-tension lines going into Virginia now were approved 25 years ago. That’s how long these things take.”
  • “Nuclear is classified as green energy, and all our energy is 100% renewable. This is the future for the whole industry.”

On Natural Gas and the Transition to Cleaner Power

  • “Natural gas isn’t green, but you can mitigate its impact. It’s what we have available to bridge the shortfall until nuclear modules are online.”
  • “We’re looking at natural gas reciprocating engines as a stopgap until we get steady-state, utility-grade nuclear power.”
  • “Eventually, I see those gas engines replacing the diesel generators we have today—because we all want to get away from that.”
  • “Hydrogen is interesting, but it’s really more of an energy transport method than a true energy source. There are still major challenges with its infrastructure and supply.”

On Data Center Innovation and Industry Change

  • “The traditional way of building data centers was designed in the 1970s during the golden age of the mainframe. And for years, everyone just kept doing the same thing.”
  • “At Aligned, we tore up the rulebook. We constantly rethink how we build, design, and procure energy.”
  • “We lead, but we also push the industry forward—we don’t just follow the predefined supply chain that’s existed for decades.”

On Aligned’s Acquisition of ODATA and Expansion in LATAM

  • “We acquired them over a year ago, but they’re very much an Aligned company. We let that amazing team run their business as they need to, while helping them leverage the core competencies that got us to where we are today.”
  • “Their new buildings will be designed around our methodologies—using Delta³ and DeltaFlow where appropriate.”
  • “LATAM is an enormous growth market. Brazil, in particular, is seeing extraordinary expansion, with strong green energy sources from wind farms.”
  • “Chile is still growing, and our waterless cooling system is ideal for Santiago, where water is too scarce to use for evaporative heat rejection.”

On Embodied Carbon and Sustainability Leadership

  • “Four years ago, we saw the need to start tracking embodied carbon. We’ve been doing that ever since, and it’s driven a lot of industry progress.”
  • “We co-authored and helped found the Climate Accord with iMasons—taking sustainability to a whole new level.”
  • “We’re now extending our carbon traceability standards to ODATA in LATAM, tracking second and third life for key components and providing clients with true Scope 3 carbon assessments.”
  • “North America is still behind the rest of the world in creating lifecycle assessments and environmental product declarations. Where gaps exist, we look for adjacencies and highlight them—helping move the industry forward.”

On QScale and Heat Reuse Innovation

  • “Our relationship with QScale in Canada is exciting. They’re focused on high-performance compute and are adopting our design methodologies.”
  • “Their heat reuse strategy is really interesting. We’re exploring ways to capture and repurpose waste heat in North America as well.”
Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

F5 tackles AI security with new platform extensions

F5 AI Guardrails deploys as a proxy between users and AI models. Wormke describes it as being inserted as a proxy layer at the “front door” of AI interaction, between AI applications, users and agents. It intercepts prompts before they reach the model and analyzes outputs before they return to

Read More »

AWS European cloud service launch raises questions over sovereignty

There are examples of similar scenarios in recent years. The International Criminal Court’s chief prosecutor was reportedly shut out of Microsoft applications following the imposition of US sanctions, for example. Other instances include Adobe cutting off Venezuelan customers in compliance with US sanctions against that country in 2019, while Microsoft

Read More »

IP Fabric 7.9 boosts visibility across hybrid environments

Multicloud and hybrid network viability has also been extended to include IPv6 path analysis, helping teams reason about connectivity in dual-stack and hybrid environments. This capability addresses a practical challenge for enterprises deploying IPv6 alongside existing IPv4 infrastructure. Network teams can now validate that applications can reach IPv6 endpoints and

Read More »

Petronas Names New COO

Petroliam Nasional Bhd. appointed Mohd Jukris Abdul Wahab as its new chief operating officer, enhancing its senior leadership structure amid an ongoing legal dispute over gas assets in Sarawak.  The appointment will be effective Feb. 1, and Jukris will concurrently hold his position as chief executive officer of Petronas’ upstream business, the state-owned oil and gas company said in a statement on Monday.  The COO would support group Chief Executive Officer Tengku Muhammad Taufik in matters involving federal and state governments, The Edge reported, citing an internal note.  Petronas filed a motion with Malaysia’s apex court last week to decide on the company’s operations in Sarawak, Malaysia’s biggest state. Since 2024, it has been locked in a dispute over gas distribution rights there with state-owned Petroleum Sarawak Bhd. The company has also been struggling with declining profit amid an oil price slump, and last year announced it was cutting around 10% of its workforce. The appointment of the new COO is in line with Petronas’ “transformation ambitions amid a dynamic global energy environment,” it said in Monday’s statement. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Batistas Poised for Venezuelan Oil Revival

The billionaire Batista brothers are eyeing a billion-barrel Venezuelan oil project that stands to benefit from US President Donald Trump’s planned revival of the South American nation’s energy sector. The Batistas, who control the world’s biggest meatpacker, are discreetly positioned on the outskirts of Venezuela’s oil sector via the stake one of their business associates holds in the Petrolera Roraima project, according to people familiar with the situation.  Prior to the ouster of strongman Nicolás Maduro earlier this month, a commercial representative of the Batistas obtained a stake in a cluster of oilfields formerly operated by ConocoPhillips. Fluxus, an oil company owned by the Batistas, could join that or other petroleum developments in the country once the business outlook clears up, said the people, who asked not to be named discussing non-public information. J&F SA, the Brazilian brothers’ holding company, said in response to questions that it doesn’t have any assets in Venezuela, and is closely monitoring events.  “Once a scenario of institutional stability and legal certainty is established, we will be ready to evaluate investments,” J&F said in an email.  The Batistas have taken a cautious approach to Venezuela since the US imposed sanctions because of extensive American investments that include chicken processor Pilgrim’s Pride Corp., people familiar with their business strategy said.  Although Trump has said the Venezuelan government “stole” oil riches claimed by American companies such as ConocoPhillips during a nationalization drive almost 20 years ago, he also has evinced no desire to reverse those asset seizures. That indicates the Batistas are in pole position to help expand the country’s oil production while US and European drillers await stronger financial and security guarantees. Since Maduro’s fall, Joesley Batista has emerged as a key figure in the post-Maduro transition. Last week, he flew from Washington to Caracas for

Read More »

Reliance Posts Refining Gains despite Sourcing Challenges

Reliance Industries Ltd saw revenue from its oil-to-chemicals segment for the quarter ended December 2025 (third quarter of financial year 2026) increase 8.4 percent from Q3 FY 2025 to $18 billion. That was helped by a two percent increase in refining throughput with 20.6 million metric tons of crude processed in the three-month period, despite challenges in procuring oil, according to an online statement by the diversified Indian conglomerate. “Agile crude sourcing helped sustain throughput despite procurement challenges”, Reliance said. “Partial resumption of Red Sea route also benefitted operations”, it added. Reliance operates what it says is the world’s biggest single-site refinery in Jamnagar, India. The facility has a declared processing capacity of 1.4 million barrels a day. The Q3 FY2026 statement said refinery utilization was maximized “to capture high margins”. Reliance reported 18.2 million metric tons in production meant for sale, up 1.7 percent year-on-year. Reliance’s fuel retailing network under the Reliance BP Mobility Ltd brand, a joint venture with BP PLC, expanded by 14 percent year-over-year to 2,125 outlets, driving volume growth of over 20 percent, according to the statement. A “sharp increase in transportation fuel cracks and higher sulfur realization” drove a 14.6 percent year-on-year increase to $1.18 billion in petrochemicals EBITDA. The improvement in transport fuel cracks was aided by “continued disruptions in Russian supply and unplanned outages in other regions”, Reliance said. “US/EU sanctions on Russian refiners further tightened fuel markets”. On the other hand, Reliance saw “weakness in downstream chemical margins and higher feedstock freight rates”. However, it added, “Favorable ethane cracking economics and domestic market placements continued to support profitability”. At the backdrop, both global and domestic demand for oil products grew year-on-year in Q3 FY2026, partially offset by a price decline, the statement noted. 
“Crude oil benchmarks declined y-o-y on expectations of

Read More »

Why Is the USA Natural Gas Price Rising Today?

Why is the U.S. natural gas price rising today? That was the question Rigzone asked Ole R. Hvalbye, a commodities analyst at Skandinaviska Enskilda Banken AB (SEB), in an exclusive interview on Monday. Responding to the question, Hvalbye highlighted to Rigzone that Henry Hub was trading around $3.5 per million British thermal units (MMBtu) today, “up from [around] … $3.1 per MMBtu before the weekend”, and noted that “the drivers look fairly straightforward and well known rather than structural”. “Short-term forecasts turned colder across parts of the U.S., lifting heating demand expectations and supporting front-end prices,” Hvalbye told Rigzone. “Feedgas flows remain elevated and firm, reinforcing near-term demand for U.S. gas and tightening the spot balance marginally,” he added. “After the recent sell-off, the market was relatively short, so colder weather and steady LNG demand triggered short-covering rather than fresh long positioning,” he continued. Hvalbye went on to state that, “on the supply side, there’s no disruption story”. “U.S. production remains strong, storage is still comfortable, and nothing suggests a sudden structural tightening from my data – i.e., a reason why the move looks tactical rather than fundamental,” he pointed out. Hvalbye highlighted to Rigzone that today’s price increase “isn’t a clean breakout”, adding that prices “are roughly back to where they were a week ago, so part of today’s move is simply retracing last week’s dip”. “In short: weather plus LNG demand plus positioning explain today’s strength. It’s a bounce, not a regime shift,” he added. In a separate exclusive interview with Rigzone on Monday, Art Hogan, Chief Market Strategist at B. Riley Wealth, said U.S. natural gas “is bouncing off a 13-week low of $3.10 last week after the weather outlook for late January shifted colder”. “The colder than normal outlook is expected to drive strong heating demand

Read More »

Var Energi Raises Estimates for New Barents Sea Oil Discovery

An appraisal well has confirmed Vår Energi ASA’ Zagato oil discovery in the Goliat area on Norway’s side of the North Sea, with preliminary estimated recoverable resources of 21-25 million barrels of oil equivalent (MMboe), the Norwegian Offshore Directorate (NOD) said. That is equivalent to 3.3-11.9 million standard cubic meters of oil equivalent (MMscmoe), up from the previous estimate of 2.8-10.1 MMscmoe before appraisal well 7122/8-3 A was drilled, the upstream regulator said in a press release. The latest target represents the 14th exploration well drilled in production license 229, awarded under the Barents Sea Project in 1997, the NOD noted. Var Energi said separately, “The latest well tested two intervals with each showing maximum flow rates of more than 4,000 barrels of oil per day, confirming reservoir quality”. “The production tests confirmed good quality reservoirs and oil quality similar to the Goliat field”, Vår Energi said. Goliat, discovered 2000, started producing 2016 and expanded with the startup of the Snadd and Goliat West accumulations in 2017 and 2021 respectively, according to field information on government website Norskpetroleum.no. Operator Vår Energi (65 percent) and partner Equinor ASA (35 percent) have now drilled five wells in the Goliat Ridge, Vår Energi noted. “Including the latest well, the Goliat Ridge is estimated to contain gross discovered recoverable resources of 35-138 MMboe, and with additional prospective resources taking the total gross potential to over 200 MMboe”, it said. “A tie-back to the nearby Goliat FPSO [floating production, storage and offloading vessel] is being planned, targeting first production in 2019. “Vår Energi was recently awarded an adjacent license to the Goliat field in the 2025 Awards in Predefined Areas, which offers additional prospectivity on trend with the Goliat Ridge discovery”. 
Norskpetroleum.no says plans for Goliat include a connection to the Equinor-operated gas liquefaction facility on Melkøya island.   “The recent discoveries reinforce Vår Energi’s position as a leading exploration company on the Norwegian continental shelf and continue to strengthen our ability to sustain high-value production of

Read More »

Where Will the WTI Oil Price Land in 2026 and 2027?

According to the U.S. Energy Information Administration’s (EIA) latest short term energy outlook (STEO) which was published on January 13, the West Texas Intermediate (WTI) spot price average will drop in 2026 and 2027. The EIA projected in this STEO that the WTI spot price will come in at $52.21 per barrel this year and $50.36 per barrel next year. The commodity averaged $65.40 per barrel in 2025, the EIA’s January STEO showed. A quarterly breakdown included in the outlook forecast that the WTI spot price will come in at $54.93 per barrel in the first quarter of 2026, $52.67 per barrel in the second quarter, $52.03 per barrel in the third quarter, $49.34 per barrel in the fourth quarter, $49.00 per barrel in the first quarter of 2027, $50.66 per barrel in the second quarter, $50.68 per barrel in the third quarter, and $51.00 per barrel in the fourth quarter of next year. In its previous STEO, which was released in December, the EIA projected that the WTI spot price would average $65.32 per barrel in 2025 and $51.42 per barrel in 2026. That STEO did not offer an average WTI spot price forecast for 2027. The EIA’s November STEO saw the WTI spot price averaging $65.15 per barrel in 2025 and $51.26 per barrel in 2026. A chart hosted on the EIA’s website, which was last updated on January 14 and displayed the annual average Cushing, OK, WTI spot price, on a free on board basis, from 1986 to 2025, showed that this commodity hit a peak in 2008, at $99.67 per barrel. The commodity saw its lowest price, between 1986 and 2025, in 1986, at $15.05 per barrel, the chart highlighted. The highest price the commodity has seen this decade came in 2022, at $94.90 per barrel,

Read More »

RISC-V chip designer SiFive integrates Nvidia NVLink Fusion to power AI data centers

RISC-V pioneer SiFive has signed a deal with Nvidia to incorporate Nvidia NVLink Fusion into its data center products. The agreement means that SiFive will be able to connect its RISC-V CPUs to Nvidia GPUs and accelerators over a high bandwidth interconnect that lets multiple GPUs share compute and memory resources, offering more options to operators of AI data centers. Historically, RISC-V technology has not had access to these types of high-level interconnects and pathways. In a statement, Patrick Little, president and CEO of SiFive, said, “AI infrastructure is no longer built from generic components, it is co-designed from the ground up. By integrating NVLink Fusion with SiFive’s high-performance compute subsystems, we’re enabling customers with an open and customizable CPU platform that pairs seamlessly with Nvidia’s AI Infrastructure to deliver exceptional efficiency at data center scale.”

Read More »

NVIDIA’s Rubin Redefines the AI Factory

The Architecture Shift: From “GPU Server” to “Rack-Scale Supercomputer” NVIDIA’s Rubin architecture is built around a single design thesis: “extreme co-design.” In practice, that means GPUs, CPUs, networking, security, software, power delivery, and cooling are architected together; treating the data center as the compute unit, not the individual server. That logic shows up most clearly in the NVL72 system. NVLink 6 serves as the scale-up spine, designed to let 72 GPUs communicate all-to-all with predictable latency, something NVIDIA argues is essential for mixture-of-experts routing and synchronization-heavy inference paths. NVIDIA is not vague about what this requires. Its technical materials describe the Rubin GPU as delivering 50 PFLOPS of NVFP4 inference and 35 PFLOPS of NVFP4 training, with 22 TB/s of HBM4 bandwidth and 3.6 TB/s of NVLink bandwidth per GPU. The point of that bandwidth is not headline-chasing. It is to prevent a rack from behaving like 72 loosely connected accelerators that stall on communication. NVIDIA wants the rack to function as a single engine because that is what it will take to drive down cost per token at scale. The New Idea NVIDIA Is Elevating: Inference Context Memory as Infrastructure If there is one genuinely new concept in the Rubin announcements, it is the elevation of context memory, and the admission that GPU memory alone will not carry the next wave of inference. NVIDIA describes a new tier called NVIDIA Inference Context Memory Storage, powered by BlueField-4, designed to persist and share inference state (such as KV caches) across requests and nodes for long-context and agentic workloads. NVIDIA says this AI-native context tier can boost tokens per second by up to 5× and improve power efficiency by up to 5× compared with traditional storage approaches. The implication is clear: the path to cheaper inference is not just faster GPUs.

Read More »

Power shortages, carbon capture, and AI automation: What’s ahead for data centers in 2026

“Despite a broader use of AI tools in enterprises and by consumers, that does not mean that AI compute, AI infrastructure in general, will be more evenly spread out,” said Daniel Bizo, research director at Uptime Institute, during the webinar. “The concentration of AI compute infrastructure is only increasing in the coming years.”

For enterprises, the infrastructure investment remains relatively modest, Uptime Institute found. Enterprises will limit investment to inference and only some training, and inference workloads don’t require dramatic capacity increases.

“Our prediction, our observation, was that the concentration of AI compute infrastructure is only increasing in the coming years by a couple of points. By the end of this year, 2026, we are projecting that around 10 gigawatts of new IT load will have been added to the global data center world, specifically to run generative AI workloads and adjacent workloads, but definitely centered on generative AI,” Bizo said. “This means these 10 gigawatts or so load, we are talking about anywhere between 13 to 15 million GPUs and accelerators deployed globally. We are anticipating that a majority of these are and will be deployed in supercomputing style.”

2. Developers will not outrun the power shortage

The most pressing challenge facing the industry, according to Uptime, is that data centers can be built in less than three years, but power generation takes much longer. “It takes three to six years to deploy a solar or wind farm, around six years for a combined-cycle gas turbine plant, and even optimistically, it probably takes more than 10 years to deploy a conventional nuclear power plant,” said Max Smolaks, research analyst at Uptime Institute.

This mismatch was manageable when data centers were smaller and growth was predictable, the report notes. But with projects now measured in tens and sometimes hundreds of
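Bizo's figures imply a rough power budget per device. A quick sanity check, with the caveat that this division is our own arithmetic, not an Uptime number, and that IT load covers CPUs, networking, and storage in those deployments as well as the accelerators themselves:

```python
# Uptime's projection: ~10 GW of new IT load by end of 2026,
# serving roughly 13-15 million GPUs and accelerators.
new_it_load_w = 10e9
devices_low, devices_high = 13e6, 15e6

# Fewer devices sharing the load means more watts attributable to each.
w_per_device_high = new_it_load_w / devices_low
w_per_device_low = new_it_load_w / devices_high

print(f"Implied IT load per accelerator: "
      f"{w_per_device_low:.0f}-{w_per_device_high:.0f} W")
```

The result, roughly 670-770 W per device, is consistent with the supercomputing-style (dense, high-power) deployments Bizo describes.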

Read More »

Google warns transmission delays are now the biggest threat to data center expansion

The delays stem from aging transmission infrastructure unable to handle concentrated power demands. Building regional transmission lines currently takes seven to eleven years just for permitting, Hanna told the gathering. Southwest Power Pool has projected 115 days of potential loss of load if transmission infrastructure isn’t built to match demand growth, he added.

These systemic delays are forcing enterprises to reconsider fundamental assumptions about cloud capacity. Regions including Northern Virginia and Santa Clara that were prime locations for hyperscale builds are running out of power capacity. The infrastructure constraints are also reshaping cloud competition around power access rather than technical capabilities. “This is no longer about who gets to market with the most GPU instances,” Gogia said. “It’s about who gets to the grid first.”

Co-location emerges as a faster alternative to grid delays

Unable to wait years for traditional grid connections, hyperscalers are pursuing co-location arrangements that place data centers directly adjacent to power plants, bypassing the transmission system entirely. Pricing for these arrangements has jumped 20% in power-constrained markets as demand outstrips availability, with costs flowing through to cloud customers via regional pricing differences, Gogia said.

Google is exploring such arrangements, though Hanna said the company’s “strong preference is grid-connected load.” “This is a speed to power play for us,” he said, noting Google wants facilities to remain “front of the meter” to serve the broader grid rather than operating as isolated power sources. Other hyperscalers are negotiating directly with utilities, acquiring land near power plants, and exploring ownership stakes in power infrastructure from batteries to small modular nuclear reactors, Hanna said.

Read More »

OpenAI turns to Cerebras in a mega deal to scale AI inference infrastructure

Analysts expect AI workloads to grow more varied and more demanding in the coming years, driving the need for architectures tuned for inference performance and putting added pressure on data center networks. “This is prompting hyperscalers to diversify their computing systems, using Nvidia GPUs for general-purpose AI workloads, in-house AI accelerators for highly optimized tasks, and systems such as Cerebras for specialized low-latency workloads,” said Neil Shah, vice president for research at Counterpoint Research. As a result, AI platforms operating at hyperscale are pushing infrastructure providers away from monolithic, general-purpose clusters toward more tiered and heterogeneous infrastructure strategies. “OpenAI’s move toward Cerebras inference capacity reflects a broader shift in how AI data centers are being designed,” said Prabhu Ram, VP of the industry research group at Cybermedia Research. “This move is less about replacing Nvidia and more about diversification as inference scales.” At this level, infrastructure begins to resemble an AI factory, where city-scale power delivery, dense east–west networking, and low-latency interconnects matter more than peak FLOPS, Ram added. “At this magnitude, conventional rack density, cooling models, and hierarchical networks become impractical,” said Manish Rawat, semiconductor analyst at TechInsights. “Inference workloads generate continuous, latency-sensitive traffic rather than episodic training bursts, pushing architectures toward flatter network topologies, higher-radix switching, and tighter integration of compute, memory, and interconnect.”

Read More »

Cisco’s 2026 agenda prioritizes AI-ready infrastructure, connectivity

While most of the demand for AI data center capacity today comes from hyperscalers and neocloud providers, that will change as enterprise customers delve more into the AI networking world. “The other ecosystem members and enterprises themselves are becoming responsible for an increasing proportion of the AI infrastructure buildout as inferencing and agentic AI, sovereign cloud, and edge AI become more mainstream,” Katz wrote.

More enterprises will move to host AI on premises via the introduction of AI agents that are designed to inject intelligent insight into applications and help improve operations. That’s where the AI impact on enterprise network traffic will appear, suggests Nolle. “Enterprises need to host AI to create AI network impact. Just accessing it doesn’t do much to traffic. Having cloud agents access local data center resources (RAG, etc.) creates a governance issue for most corporate data, so that won’t go too far either,” Nolle said.

“Enterprises are looking at AI agents, not the way hyperscalers tout agentic AI, but agents running on small models, often open-source, and locally hosted. This is where real AI traffic will develop, and Cisco could be vulnerable if they don’t understand this point and at least raise it in dialogs where AI hosting comes up,” Nolle said. “I don’t expect they’d go too far, because the real market for enterprise AI networking is probably a couple years out.”

Meanwhile, observers expect Cisco to continue bolstering AI networking capabilities for enterprise branch, campus and data centers as well as hyperscalers, including through optical support and other gear.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
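For a sense of scale, the capex figures quoted above can be put side by side. A quick comparison sketch, using only the numbers reported in this article (the growth multiples are our own arithmetic):

```python
# Microsoft capex figures quoted in the article, in billions of USD.
msft_fy2025_pledge = 80.0    # Smith: fiscal year ending June 30, 2025
msft_cal2025_bi_est = 62.4   # Bloomberg Intelligence estimate, calendar 2025
msft_2020_capex = 17.6       # reported 2020 capital expenditure

print(f"FY2025 pledge vs 2020: "
      f"{msft_fy2025_pledge / msft_2020_capex:.1f}x")
print(f"BI calendar-2025 estimate vs 2020: "
      f"{msft_cal2025_bi_est / msft_2020_capex:.1f}x")
```

Either way the numbers are sliced, Microsoft's spending is running three and a half to four and a half times its 2020 level.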

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular among non-tech companies showing off technology at the big tech trade show in Las Vegas, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »