Podcast: Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers

In the latest episode of the Data Center Frontier Show podcast, DCF Editor-in-Chief Matt Vincent sits down with Phill Lawson-Shanks, Chief Innovation Officer at Aligned Data Centers, for a wide-ranging discussion that touches on some of the most pressing trends and challenges shaping the future of the data center industry.

From the role of nuclear energy and natural gas in addressing the sector’s growing power demands to the rapid expansion of Aligned’s operations in Latin America (LATAM), Lawson-Shanks provides deep insight over the course of the podcast into where the industry is headed.

Scaling Sustainability: Tracking Embodied Carbon and Scope 3 Emissions

A key focus of the conversation is sustainability, where Aligned continues to push boundaries in carbon tracking and energy efficiency. Lawson-Shanks highlights the company’s commitment to monitoring embodied carbon—an effort that began four years ago and has since positioned Aligned as an industry leader.

“We co-authored and helped found the Climate Accord with iMasons—taking sustainability to a whole new level,” he notes, emphasizing how Aligned is now extending its carbon traceability standards to ODATA’s facilities in LATAM. By implementing lifecycle assessments (LCAs) and tracking Scope 3 emissions, Aligned aims to provide clients with a detailed breakdown of their environmental impact.

“The North American market is still behind in lifecycle assessments and environmental product declarations. Where gaps exist, we look for adjacencies and highlight them—helping move the industry forward,” Lawson-Shanks explains.

The Nuclear Moment: A Game-Changer for Data Center Power

One of the most compelling segments of the discussion revolves around the growing interest in nuclear energy—particularly small modular reactors (SMRs) and microreactors—as a viable long-term power solution for data centers. Lawson-Shanks describes the recent industry buzz surrounding Oklo’s announcement of a 12-gigawatt deployment with Switch as a significant milestone, calling the move “inevitable.”

“There are dozens of nuclear plants operating in the U.S. today, but people just don’t pay much attention to them,” he says. “Companies like Oklo are designing advanced modular reactors that are walk-away safe, reuse spent fuel, and eliminate the risks associated with traditional light-water reactors. This is the path forward.”

However, he acknowledges that the widespread adoption of nuclear will take time, given the regulatory hurdles of the Nuclear Regulatory Commission (NRC) and the challenges of getting sites certified. Still, he remains optimistic: “We need this, and as an industry, we’re pre-buying energy because we see the challenges ahead.”

Bridging the Energy Gap with Natural Gas and Hydrogen

While nuclear is a long-term solution, data centers need reliable power sources today. Lawson-Shanks sees natural gas as a practical interim solution, provided emissions can be mitigated. He also points to hydrogen as an emerging technology with potential, though challenges remain.

“Hydrogen is really an energy transportation methodology rather than an energy source,” he explains. “It’s highly corrosive, and the infrastructure isn’t fully in place yet, but it’s something we’re closely monitoring.”

He predicts that natural gas reciprocating engines will serve as a bridge solution until nuclear modules become widely available. “Once we reach steady-state nuclear power, those gas engines could replace diesel generators, which we all want to phase out,” he says.

Explosive Growth in LATAM and the Evolution of Aligned’s Global Strategy

The conversation also covers Aligned’s expansion into Latin America following its acquisition of ODATA. Lawson-Shanks describes the region as a booming market, particularly in Brazil, where Aligned has access to renewable energy through its investment in wind farms.

“LATAM is an enormous growth market, and our waterless cooling system is ideal for places like Santiago, where water scarcity makes evaporative cooling unfeasible,” he explains.

Aligned is integrating its advanced cooling technologies—such as Delta³ and DeltaFlow—into ODATA’s new facilities, ensuring that sustainability remains a core component of their LATAM operations.

Innovating Beyond Cooling: The Future of Heat Reuse

Another forward-looking topic is Aligned’s interest in heat reuse, an area where Lawson-Shanks sees significant potential for innovation. Through its partnership with QScale in Canada, Aligned is exploring methods to capture and repurpose waste heat from data centers for other applications.

“Their heat reuse strategy is really interesting, and we’re looking at how we can implement similar solutions in North America,” he says, hinting at future developments to come.

Looking Ahead: A Future Shaped by Innovation and Sustainability

As the conversation wraps up, it’s clear that Lawson-Shanks sees the data center industry at an inflection point. The combination of sustainability commitments, new energy technologies, and rapid global expansion is forcing companies to rethink traditional models and embrace innovation at an unprecedented scale.

“We’ve always fought against the idea that data centers have to be built the same way they were in the 1970s,” he says. “We’re constantly redesigning, rethinking how we procure energy, and pushing the industry forward.”

With Aligned continuing to lead the charge in sustainability, energy innovation, and international expansion, the insights shared in this episode offer a compelling look at the challenges and opportunities ahead for the data center industry.

Here’s a timeline of the podcast’s key moments:

  • After introductions, exciting news about Lawson-Shanks and Aligned joining the 2025 DCF Trends Summit Editorial Advisory Board is shared. 0:02
  • Lawson-Shanks discusses the industry’s sustainability focus. The impact of ChatGPT on market dynamics is highlighted. 2:21
  • The rapid growth of cloud deployment is highlighted. Challenges in supply chain management due to factory shutdowns are discussed. 3:26
  • The emergence of agentic AI systems is brought up. The importance of proximity to cloud instances for effective data processing is emphasized. 4:49
  • A potential edge boom in 2025 is speculated upon. The construction of facilities for AI inference aligned with cloud interests is questioned. 7:06
  • Lawson-Shanks explains how a significant land grab for data center space has occurred. He describes how existing data centers are unable to accommodate new high-density paths. 8:20
  • Aligned’s design architecture includes adaptive data center features. High-density cooling solutions are being implemented with both liquid and air. 8:54
  • The demand for technology is increasing exponentially, and more space and technology will be required to meet future needs. 11:44
  • An overview of Open Compute Project (OCP) architecture is provided. The architecture includes discrete components for flexibility. 13:22
  • The importance of adhering to OCP standards is emphasized. Such adherence ensures safety and efficiency in data center operations. 14:09
  • A discussion about the critical role of data centers in industrial revolutions is presented. Data centers are described as essential infrastructure for modern technology. 17:14
  • Discussion now centers on the future of nuclear energy. The potential for small modular reactors is highlighted. 18:42
  • The importance of addressing public fears about radiation is emphasized. The benefits of advanced reactor designs are noted. 19:55
  • Concerns about energy transmission infrastructure are raised. The discussion notes that building new transmission lines can take decades. 21:16
  • Natural gas is discussed as a near-green energy source. Mitigation strategies for emissions are mentioned. 22:54
  • Hydrogen’s role as an energy transportation method is explored. The challenges of biofuel supply and infrastructure are highlighted. 23:36
  • Innovative approaches to data center design and energy procurement are emphasized. The importance of adapting to new methodologies is noted. 25:15
  • The need for tracking embodied carbon is highlighted. The discussion reveals how this initiative has been ongoing for four years and has led to significant developments. 27:28
  • The expansion of Aligned’s carbon tracking to ODATA in Latin America is discussed. This includes providing clients with lifecycle assessments and environmental product declarations. 27:53
  • The growth of the market in Latin America, especially in Brazil, is emphasized. The presence of green energy sources, such as wind farms, is noted as a positive factor. 29:24

DCF Show Podcast Quotes from Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers

On Market Demand and the Evolution of Data Centers

  • “Existing data centers in major metros are largely full, and many weren’t designed for the high-density workloads we’re seeing today.”
  • “The industry went through a phase where we were just stamping out the same boxes—buying land, building infrastructure. Now, there are real engineering challenges again, and that’s exciting.”
  • “We’re in an unprecedented time. The use of technology isn’t slowing down—it’s accelerating, and we need more space, more innovation, more infrastructure to support it.”
  • “AI isn’t just a trend—it’s a fundamental shift, and data centers have to evolve to support that scale.”

On Aligned’s Adaptive Data Center Architecture

  • “We designed our adaptive data center architecture so that we can integrate both air and liquid cooling seamlessly.”
  • “Our Delta Cube arrays allow us to do high-density cooling with just air. But for the foreseeable future, we will need both air and liquid cooling.”
  • “Liquid to the chip removes most of the heat—maybe 70-80%—but there are still DIMMs, storage arrays, and network components that require airflow.”
  • “We’re building infrastructure that has to last 20 to 30 years. That means designing for today’s workloads while being adaptable for future technologies.”

On Engineering Challenges and Innovation

  • “We’re designing for everything from 50 megawatts with air to 360 megawatts with liquid cooling, all in a redundant fashion.”
  • “We’re rethinking everything—electrical infrastructure, cooling, heat rejection, and even heat reuse. There are exciting possibilities ahead.”
  • “The reality is, these racks are huge now. They come pre-populated, they’re heavy, and they need to be moved safely. Many older data centers just weren’t designed with that in mind.”

On the Open Compute Project (OCP) and Industry Standards

  • “OCP started with Facebook—now Meta—disaggregating servers into their core components. It’s changed the way hyperscalers build infrastructure.”
  • “We worked closely with OCP to help define and ratify a data center standard for hyperscalers. That means clients know our facilities conform to those specifications from the start.”
  • “Something as simple as ensuring the right door heights, corridor angles, and loading capabilities makes a huge difference when deploying large-scale infrastructure.”

On Industry Leadership and Open Innovation

  • “All boats rise. We lead, but we don’t want to be exclusive—we want to pull the industry forward with us.”
  • “I started at Compaq, and they had a philosophy: Identify gaps in the market, solve them, patent the solution, and then release it to the industry. We take the same approach—innovation that benefits everyone.”
  • “Data centers are the engine of the fourth and now the fifth industrial revolution. They are critical infrastructure for everything we do.”

On the Growing Role of Nuclear Energy in Data Centers

  • “I think it’s inevitable. Absolutely inevitable.”
  • “There are dozens of nuclear plants across the U.S., but people just don’t pay that much attention to them.”
  • “I personally love the advanced modular reactors—Oklo in particular. They reuse spent fuel, they’re walk-away safe, and there’s no pressurization risk.”
  • “You could hug one of these things for a year and receive less radiation than I got flying across the country last night.”
  • “The biggest challenge isn’t generation—it’s transmission. It takes about 12 years to build out the infrastructure to actually pass electrons.”
  • “Some of the high-tension lines going into Virginia now were approved 25 years ago. That’s how long these things take.”
  • “Nuclear is classified as green energy, and all our energy is 100% renewable. This is the future for the whole industry.”

On Natural Gas and the Transition to Cleaner Power

  • “Natural gas isn’t green, but you can mitigate its impact. It’s what we have available to bridge the shortfall until nuclear modules are online.”
  • “We’re looking at natural gas reciprocating engines as a stopgap until we get steady-state, utility-grade nuclear power.”
  • “Eventually, I see those gas engines replacing the diesel generators we have today—because we all want to get away from that.”
  • “Hydrogen is interesting, but it’s really more of an energy transport method than a true energy source. There are still major challenges with its infrastructure and supply.”

On Data Center Innovation and Industry Change

  • “The traditional way of building data centers was designed in the 1970s during the golden age of the mainframe. And for years, everyone just kept doing the same thing.”
  • “At Aligned, we tore up the rulebook. We constantly rethink how we build, design, and procure energy.”
  • “We lead, but we also push the industry forward—we don’t just follow the predefined supply chain that’s existed for decades.”

On Aligned’s Acquisition of ODATA and Expansion in LATAM

  • “We acquired them over a year ago, but they’re very much an Aligned company. We let that amazing team run their business as they need to, while helping them leverage the core competencies that got us to where we are today.”
  • “Their new buildings will be designed around our methodologies—using Delta³ and DeltaFlow where appropriate.”
  • “LATAM is an enormous growth market. Brazil, in particular, is seeing extraordinary expansion, with strong green energy sources from wind farms.”
  • “Chile is still growing, and our waterless cooling system is ideal for Santiago, where water is too scarce to use for evaporative heat rejection.”

On Embodied Carbon and Sustainability Leadership

  • “Four years ago, we saw the need to start tracking embodied carbon. We’ve been doing that ever since, and it’s driven a lot of industry progress.”
  • “We co-authored and helped found the Climate Accord with iMasons—taking sustainability to a whole new level.”
  • “We’re now extending our carbon traceability standards to ODATA in LATAM, tracking second and third life for key components and providing clients with true Scope 3 carbon assessments.”
  • “North America is still behind the rest of the world in creating lifecycle assessments and environmental product declarations. Where gaps exist, we look for adjacencies and highlight them—helping move the industry forward.”

On QScale and Heat Reuse Innovation

  • “Our relationship with QScale in Canada is exciting. They’re focused on high-performance compute and are adopting our design methodologies.”
  • “Their heat reuse strategy is really interesting. We’re exploring ways to capture and repurpose waste heat in North America as well.”
Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Broadcom strikes chip deals with Google, Anthropic

Anthropic said this week that the AI startup’s annual revenue run rate has now crossed $30 billion, up from about $9 billion the previous year. “We are making our most significant compute commitment to date to keep pace with our unprecedented growth,” said Krishna Rao, CFO of Anthropic, in a

Read More »

Google owns the most AI compute, and it built it its way

Google, for its part, will continue to be “one of the largest, if not the largest,” consumer of compute resources, said Bill Wong, research fellow at Info-Tech Research Group. “Its business model drives that global demand, specifically through the widespread use of Google search and Gemini, which it provides for

Read More »

Cisco to acquire Galileo for AI observability

“We have this opportunity to be a trust layer, not just for … network activity, but actually what’s happening at the application layer, at the workload layer, between agents, between workloads, between data,” Peter Bailey, senior vice president and general manager of Cisco’s security business, told Network World last month.

Read More »

Energy Department Continues Initiating Strategic Petroleum Reserve Emergency Exchanges

WASHINGTON — The U.S. Department of Energy (DOE) today issued a Request for Proposal (RFP) for an emergency exchange of up to 30 million barrels of sweet crude oil from the Strategic Petroleum Reserve (SPR). This action continues DOE’s execution of the 172-million-barrel release President Trump authorized as part of a coordinated release of 400 million barrels from IEA member nations’ strategic reserves. The Department will continue issuing RFPs to deliver on President Trump’s commitment to the coordinated release, addressing global supply disruptions and strengthening energy security for the United States and its allies. The crude oil will originate from the SPR’s West Hackberry site. This action builds on the Department’s two previous Emergency Exchange RFPs, which together quickly moved to award approximately 55 million barrels from the Bayou Choctaw, Bryan Mound, and West Hackberry SPR sites. These earlier exchanges demonstrated the SPR’s ability to rapidly deliver crude oil to the market under emergency authorities. Today’s announcement of the 30-million-barrel exchange leverages the full capabilities of the SPR, alongside the President’s limited Jones Act waiver, to accelerate critical near-term oil flows into the market and strengthen supply resilience. “Today’s action strengthens efforts to move oil quickly to market and address short-term supply pressures,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office. “With two earlier exchanges, we are building on proven momentum. Thanks to President Trump, we are managing the SPR responsibly, returning premium barrels later to bolster long-term readiness at no cost to taxpayers.” Under DOE’s exchange authority, participating companies will return the borrowed 30-million-barrels with additional premium barrels by next year. This exchange delivers immediate crude to refiners and the market while generating additional barrels for the American people at no cost to taxpayers. 
Bids for this solicitation are due no later than 11:00 A.M. Central Time on Monday, April 13, 2026. For more information on the SPR, please visit DOE’s website. 

Read More »

DOE’s Office of Energy Dominance Financing Announces Conditional Commitment for a Domestic Medical Isotope Manufacturing Facility

WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Energy Dominance Financing (EDF) issued a conditional commitment for a loan of up to $263 million to SHINE Chrysalis, LLC (SHINE) to support the construction of Chrysalis, a high-volume medical isotope production facility in Janesville, Wisconsin. The production facility will create a supply of reliable and secure medical isotopes made with fusion and fission technology. Millions of patients rely on medical isotopes for life-saving diagnostic imaging and cancer treatments, but most of that supply comes from overseas with limited capacity to meet demand.  Utilizing fusion and fission technology, the facility will establish the only domestic commercial supply of molybdenum-99 (Mo-99). Mo-99 is a life-saving medical isotope essential for diagnostic imaging; its decay product of technetium-99m (Tc-99m) is used in over 40,000 medical procedures in the United States each day. “The SHINE Chrysalis project is vital to improving the nuclear supply chain and contributing to a strong next-generation nuclear workforce while onshoring this critical production and improving national security,” EDF Director Gregory A. Beard said. “Using EDF’s loan authority to further commercialize a project long supported by DOE is Trump’s policy at work: ensuring a reliable and secure domestic supply chain while lowering costs.” Thanks to President Trump and the Working Families Tax Cut, today’s announcement highlights the Energy Department’s role in advancing President Trump’s Executive Order, Reinvigorating the Nuclear Industrial Base, by maximizing the use of uranium through an innovative recycling process. This project will support hundreds of construction and operation jobs, contributing to a strong next-generation American nuclear workforce. SHINE has developed and demonstrated its technology through significant support from the National Nuclear Security Administration (NNSA) over the last 16 years. 
With NNSA funding, the National Laboratories also played a key role in developing SHINE’s technology. “SHINE is the key to

Read More »

United States and Australia meet for Mining, Minerals and Metals Investment Ministerial

We, the Australian Minister for Resources and Northern Australia, the Hon Madeleine King MP, and Secretaries and senior representatives from the United States, including Secretary of Interior Doug Burgum, Administrator of the U.S. Environmental Protection Agency Lee Zeldin, Chairman of the U.S. Export Import Bank John Jovanovic, and Assistant Secretary of Energy Audrey Robertson,  held our inaugural Mining, Minerals, and Metals Investment Ministerial in Tokyo on 14 March 2026, to advance cooperation under the landmark bilateral agreement, the United States–Australia Framework for Securing Supply in the Mining and Processing of Critical Minerals and Rare Earths (the Framework). Under the Framework, Australia and the United States are delivering concrete outcomes to strengthen, secure, and diversify critical minerals and rare earth supply chains. Within six months of agreement of the Framework, we have each taken measures to provide at least USD $1 billion in financing to key critical minerals projects. By mobilising government and private sector capital, these investments support the development of our shared industrial base and strengthen longer term supply for defence, manufacturing, and energy supply chains. To build resilience, enhance stability, and bolster economic security in support of our shared critical minerals interests, Australia and the United States today announce the establishment of the Critical Minerals Supply Security Response Group and commit to deeper cooperation between our key agencies. In line with the Framework, the Critical Minerals Supply Security Response Group, led by senior representatives from the United States Department of Energy and the Australian Department of Industry, Science and Resources, will cooperate on priority minerals and supply chain vulnerabilities and coordinate efforts to accelerate the delivery of processed minerals under the Framework. 
Australia and the United States also commit to leveraging shared policy and interagency regulatory tools and, where appropriate, investments to secure critical minerals supply, including through cooperation between Australia’s

Read More »

Energy Security for Indo-Pacific Endurance, a Global Growth Center of the 21st Century

We, the ministers and representatives of Australia, Bangladesh, Brunei, Japan, Malaysia, New Zealand, Philippines, Republic of Korea, Singapore, Timor-Leste, United States, and Vietnam met in Tokyo, Japan, on March 14–15, 2026, to hold the historic Indo-Pacific Energy Security Ministerial and Business Forum. The forum was co-hosted by the Chair and Vice Chair of the U.S. National Energy Dominance Council, Secretary of the Interior Doug Burgum, Secretary of Energy Chris Wright, and Japanese Minister of Economy, Trade and Industry Akazawa Ryosei. We affirm our shared determination to work collectively to ensure stable and secure energy supply in the Indo-Pacific region. To this end, we focused on three key themes: reliable energy for Indo-Pacific growth and security; securing energy supply chains, infrastructure and maritime routes; and enabling trade and investment. To support these goals, Ministers affirm the value of: The necessity of reliable, affordable, secure and dispatchable energy from all sources depending on each country’s situation, in meeting the region’s surging energy demand.  Promoting quality as a key procurement mechanism to mitigate risk of operational liabilities. Protecting against rising cyber threats to the security of the energy grid, critical infrastructure, vehicles, and devices. Investment in comprehensive energy infrastructure that encompasses the entire energy supply chain from upstream development facilities to downstream equipment to support an affordable, reliable, and secure energy supply including baseload electricity.  Continuing to supply affordable and reliable energy sources in the Indo-Pacific region, including through emergency response measures, to benefit both producers and consumer countries. While maintaining strong relations with current partners, expanding and diversifying energy suppliers and fuel types in order to strengthen energy security. 
Promoting transparent, long-term energy contracts that reduce market volatility. As the global economy expands, so too does demand for energy driven by AI and electrification, we, as countries committed to a free and

Read More »

Energy Department Issues Funding Opportunity to Strengthen American Critical Minerals and Materials Supply Chain

WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Critical Minerals and Energy Innovation (CMEI) and Hydrocarbons and Geothermal Energy Office (HGEO) today announced a funding opportunity of up to $69 million for technologies or processes that advance the domestic production and refining of critical materials. Projects selected through this Notice of Funding Opportunity (NOFO) will address the greatest technical obstacles to a stronger critical materials supply chain. “This funding will help establish a more secure and affordable supply of the critical minerals and materials that are foundational to American energy dominance, national security, and industrial competitiveness,” said Assistant Secretary of Energy (EERE) Audrey Robertson. DOE is seeking projects that bridge the gap between bench-scale innovations and commercially viable technologies. Selected project teams will form industry-led partnerships and conduct research and development with support from the U.S. national laboratories. The NOFO, which is part of DOE’s Critical Minerals and Materials Accelerator Program and jointly funded by CMEI’s Advanced Materials and Manufacturing Technologies Office and HGEO’s Office of Geothermal, has three primary topic areas: Production and material efficiency for critical materials including rare earth elements Processes to refine and alloy gallium, gallium nitride, germanium, and silicon carbide Cost-competitive direct lithium extraction, separation, and processing CMEI will host an informational webinar on April 16, 2026, to discuss the NOFO and application requirements. Letters of intent are due on April 21, 2026, by 5 p.m. ET. Deadlines for full applications will be staggered based on topic area, starting in May 2026. For more details on sub-topics and deadlines, visit the NOFO landing page. The Critical Minerals and Materials Accelerator is one of several programs developed through DOE’s Critical Materials Collaborative. 
This NOFO is part of $1 billion in critical materials funding announced by DOE in August 2025, and follows the Manufacturing Deployment Office’s announcement

Read More »

Latin America returns to the energy security conversation at CERAWeek

With geopolitical risk central to conversations about energy, and with long-cycle supply once again in focus, Latin America’s mix of hydrocarbons and export potential drew renewed attention at CERAWeek by S&P Global in Houston. Argentina, resource story to export platform Among the regional stories, Argentina stood out as Vaca Muerta was no longer discussed simply as a large unconventional resource, but whether the country could turn resource quality into sustained export capacity.  Country officials talked about scale: more operators, more services, more infrastructure, and a larger industrial base around the unconventional play. Daniel González, Vice Minister of Energy and Mining for Argentina, put it plainly: “The time has come to expand the Vaca Muerta ecosystem.” What is at stake now is not whether the basin works, but whether the country can build enough above-ground capacity and regulatory consistency to keep development moving. Horacio Marín, chairman and chief executive officer of YPF, offered an expansive version of that argument. He said Argentina’s energy exports could reach $50 billion/year by 2031, backed by roughly $130 billion in cumulative investment in oil, LNG, and transportation infrastructure. He said Argentine crude output could reach 1 million b/d by end-2026. He said Argentina wants to be seen less as a recurrent frontier story and more as a future supplier with scale. “The time to invest in Vaca Muerta is now,” Marín said. The LNG piece is starting to take shape. Eni, YPF, and XRG signed a joint development agreement in February to move Argentina LNG forward, with a first phase planned at 12 million tonnes/year. Southern Energy—backed by PAE, YPF, Pampa Energía, Harbour Energy, and Golar LNG—holds a long-term agreement with SEFE for 2 million tonnes/year over 8 years. The movement by global standards is early-stage and relatively modest, but it adds to Argentina’s export

Read More »

OpenAI puts part of Stargate project on hold over runaway power costs

OpenAI has postponed plans to open one of the data centers central to its Stargate project. It announced its plan to open the data center in the UK with great fanfare last September, when it was regarded as a major boost for the country’s nascent AI industry, as well as proving a step up for OpenAI’s international credentials. At the time, Sam Altman, CEO of OpenAI, said, “The UK has been a longstanding pioneer of AI, and is now home to world-class researchers, millions of ChatGPT users, and a government that quickly recognized the potential of this technology.” All of that has been quietly forgotten. The plans for the data center in Northumberland, in the Northeast of England, have been put on hold, with the project ready to be revived when the conditions are ripe for major infrastructure investment, according to a report by the BBC.

Read More »

Neoclouds gain momentum in a supply-constrained world

And since they used the same hardware, both neoclouds and traditional cloud providers are subject to the same shortage problem. Component suppliers are reporting significant shortages due to demand for AI data centers and Synergy sees neoclouds also experiencing delays just like traditional cloud providers. “Demand is currently outstripping supply,” said Dinsmore. “It will take a while before that starts to come into more balance.” Among neoclouds, CoreWeave stands out as the most direct challenger to traditional hyperscale cloud providers. Meanwhile, OpenAI and Anthropic represent a distinct but increasingly important category, and that is platform-centric providers offering cloud-like access to foundational models and AI development environments. Synergy says that as demand for AI infrastructure accelerates, neoclouds are positioning themselves as focused alternatives to traditional hyperscale providers such as Amazon, Microsoft and Google.

Read More »

What is AI networking? How it adds intelligence to your infrastructure

The end goal is to make networks more reliable, efficient and performant. Enterprises are already seeing notable results when AI is applied to IT operations, including shorter deployment times, a decrease in trouble tickets, and faster time to resolution. With the help of AI, networks will become more autonomous and self-healing (that is, able to address issues without the need for human intervention). In fact, Tier 1 and Tier 2 infrastructure is moving toward 'no human in the loop,' Nick Lippis, co-founder and co-chair of enterprise user community ONUG, recently told Network World. In time, humans will only need to step in for policy exceptions and high-risk decisions. "Layering in AI capabilities makes LAN management applications easier to use and more accessible across an organization," Dell'Oro Group analyst Sian Morgan said. Gartner predicts that, by 2030, AI agents will drive most network activities, up from "minimal adoption" in 2025. The firm emphasizes that leaders who overlook the AI networking shift "risk higher MTTR [mean time to repair], rising costs, and growing security exposure."

The core components of AI networking

It's important to note that the use of AI and machine learning (ML) in network management is not new. AI for IT operations (AIOps), for instance, is a common practice that uses automation to improve broader IT operations. AI networking is specific to the network itself, covering domains including multi-cloud software, wired and wireless LAN, data center switching, SD-WAN and managed network services (MNS). The incorporation of generative AI, in particular, has brought AI networking to the fore, as enterprise leaders are rethinking every aspect of their business, networking included.

Read More »

Aria Networks raises $125M and debuts its approach for AI-optimized networks

That embedded telemetry feeds adaptive tuning of Dynamic Load Balancing parameters, Data Center Quantized Congestion Notification (DCQCN) and failover logic without waiting for a threshold breach or a manual intervention. The platform architecture is layered. At the lowest levels, agents react in microseconds to link-level events such as transceiver flaps, rerouting leaf-spine traffic in milliseconds. At higher layers, agents make more strategic decisions about flow placement across the cluster. At the cloud layer, a large language model-based agent surfaces correlated insights to operators in natural language, allowing them to ask questions about specific jobs or alert conditions and receive context-aware responses. Karam argued that simply bolting an LLM onto an existing architecture does not deliver the same result. "If you ask it to do anything, it could hallucinate and bring down the network," he said. "It doesn't have any of the context or the data that's required for this approach to be made safe." Aria also exposes an MCP server, allowing external systems such as job schedulers and LLM routers to query network state directly and integrate it into their own decision-making.

MFU and token efficiency as the target metrics

Traditional networking is often evaluated in terms of bandwidth and latency. Aria is centering its platform around two metrics: Model FLOPS Utilization (MFU) and token efficiency. MFU is defined as the ratio of achieved FLOPS per accelerator to the theoretical peak. In practice, Karam said, MFU for training workloads typically runs between 33% and 45%, and inference often comes in below 30%. "The network has a major impact on the MFU, and therefore the token efficiency, because the network touches every aspect, every other component in your cluster," Karam said.
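The arithmetic behind that MFU definition is simple to sketch. The TFLOPS figures below are illustrative, not Aria's or any vendor's measurements:

```python
# Sketch of the MFU definition cited in the article:
# MFU = achieved FLOPS per accelerator / theoretical peak FLOPS.
# The numbers here are hypothetical examples, not Aria data.

def model_flops_utilization(achieved_tflops: float, peak_tflops: float) -> float:
    """Return MFU as a fraction of the accelerator's theoretical peak."""
    if peak_tflops <= 0:
        raise ValueError("peak_tflops must be positive")
    return achieved_tflops / peak_tflops

# A training job sustaining 400 TFLOPS on an accelerator with a
# hypothetical 1,000 TFLOPS peak runs at 40% MFU, inside the
# 33-45% training range Karam cites.
print(f"{model_flops_utilization(400.0, 1000.0):.0%}")  # prints "40%"
```

On these definitions, raising MFU by even a few points through better congestion handling translates directly into more useful work per accelerator-hour, which is why Aria targets it rather than raw bandwidth or latency.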

Read More »

New v2 UALink specification aims to catch up to NVLink

But given there are no products currently available using UALink 1.0, UALink 2.0 might be viewed as a premature launch.

Need to play catch up

David Harold, senior analyst with Jon Peddie Research, was guarded in his reaction. "While 2.0 is a significant step forward from 1.0, we need to bear in mind that even 1.0 solutions aren't shipping yet – they aren't due until later this year. So, Nvidia is way ahead of the open alternatives on connectivity, indeed ahead of the proprietary or Ethernet based solutions too," he said. What this means, he added, is that non-Nvidia alternatives are currently lagging in the market. "They need to play catch up on several fronts, not just networking. … I can't think of a single shipping product that meaningfully has advantages over a Nvidia solution," he said. "Ultimately UALink remains desirable since it will enable heterogeneous, multi-vendor environments but it's quite a way behind NVLink today." There are plenty of signs that organizations will find it hard to break free of the Nvidia dominance, however. A couple of months ago, RISC-V pioneer SiFive signed a deal with Nvidia to incorporate Nvidia NVLink Fusion into its data center products, a departure for RISC-V companies. According to Harold, other companies could be joining it. "Custom ASIC company MediaTek is an NVLink partner, and they told me last week that they are planning to integrate it directly into next-generation custom silicon for AI applications," he said. "This will enable a wider range of companies to use NVLink as their high-speed interconnect."

Other options

And, Harold noted, Nvidia is already looking at other options. "Nvidia is now shifting to look at the copper limit for networking speed, with an interest in using optical connectivity instead," said Harold.

Read More »

Nvidia’s SchedMD acquisition puts open-source AI scheduling under scrutiny

Is the concern valid?

Dr. Danish Faruqui, CEO of Fab Economics, a US-based AI hardware and datacenter advisory, said the risk was real. "The skepticism that Nvidia may prioritize its own hardware in future software updates, potentially delaying or under-optimizing support for rivals, is a feasible outcome," he said. As the primary developer, Nvidia now controls Slurm's official development roadmap and code review process, Faruqui said, "which could influence how quickly competing chips are integrated on new development or continuous improvement elements." Owning the control plane alongside GPUs and networking infrastructure such as InfiniBand, he added, allows Nvidia to create a tightly vertically integrated stack that can lead to what he described as "shallow moats, where advanced features are only available or performant on Nvidia hardware." One concrete test of that, industry observers say, will be how quickly Nvidia integrates support for AMD's next-generation chips into Slurm's codebase compared with how quickly it integrates its own forthcoming hardware and networking technologies, such as InfiniBand.

Does the Bright Computing precedent hold?

Analysts point to Nvidia's 2022 acquisition of Bright Computing as a reference point, saying the software became optimized for Nvidia chips in ways that disadvantaged users of competing hardware. Nvidia disputed that characterization, saying Bright Computing supports "nearly any CPU or GPU-accelerated cluster." Rawat said the comparison was instructive but imperfect. "Nvidia's acquisition of Bright Computing highlights its preference for vertical integration, embedding Bright tightly into DGX and AI Factory stacks rather than maintaining a neutral, multi-vendor orchestration role," he said. "This reflects a broader strategic pattern — Nvidia seeks to control the full-stack AI infrastructure experience."
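To make the "control plane" stakes concrete: a Slurm batch script is where hardware-specific requests meet the scheduler. The sketch below is a generic, hypothetical job script (partition name and resource counts are invented), not anything Nvidia-specific; the point is that how well the `--gres` accelerator request resolves for any given vendor's chips depends on the plugins and scheduler support that Slurm's maintainers ship:

```shell
#!/bin/bash
# Minimal Slurm batch script (config sketch; partition and paths
# are hypothetical). The scheduler, not the script author, decides
# how the generic resource request below maps onto actual hardware,
# which is why roadmap control over Slurm matters to non-Nvidia vendors.
#SBATCH --job-name=train-demo
#SBATCH --partition=gpu          # hypothetical partition name
#SBATCH --nodes=2                # two nodes for the job
#SBATCH --gres=gpu:4             # request 4 accelerators per node
#SBATCH --time=01:00:00          # one-hour wall-clock limit

srun python train.py
```

Submitted with `sbatch`, a script like this is the interface most cluster users ever see; vendor support lives below it, in Slurm's internals.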

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren't enough skilled farm laborers to do the work that its customers need. It's been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere's autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can't find labor to fill open positions, he said. "They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they're indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. "Let me put it this way," said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. "Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better." Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they're also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we'll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams' advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, "OpenAI's Approach to External Red Teaming for AI Models and Systems," reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, "Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning," OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It's encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.'s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI's paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models' security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn't find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Constellations

I. We had crash-landed on the planet. We were far from home. The spaceship could not be repaired, and the rescue beacon had failed. Besides

Read More »