This startup thinks slime mold can help us design better cities

It is a yellow blob with no brain, yet some researchers believe a curious organism known as slime mold could help us build more resilient cities.

Humans have been building cities for 6,000 years, but slime mold has been around for 600 million. The team behind a new startup called Mireta wants to translate the organism’s biological superpowers into algorithms that might help improve transit times, alleviate congestion, and minimize climate-related disruptions in cities worldwide.

Mireta’s algorithm mimics how slime mold efficiently distributes resources through branching networks. The startup’s founders think this approach could help connect subway stations, design bike lanes, or optimize factory assembly lines. They claim its software can factor in flood zones, traffic patterns, budget constraints, and more.

“It’s very rational to think that some [natural] systems or organisms have actually come up with clever solutions to problems we share,” says Raphael Kay, Mireta’s cofounder and head of design, who has a background in architecture and mechanical engineering and is currently a PhD candidate in materials science and mechanical engineering at Harvard University.

As urbanization continues—about 60% of the global population will live in metropolises by 2030—cities must provide critical services while facing population growth, aging infrastructure, and extreme weather caused by climate change. Kay, who has also studied how microscopic sea creatures could help researchers design zero-energy buildings, believes nature’s time-tested solutions may offer a path toward more adaptive urban systems.

Officially known as Physarum polycephalum, slime mold is neither plant, animal, nor fungus but a single-celled organism older than dinosaurs. When searching for food, it extends tentacle-like projections in multiple directions simultaneously. It then doubles down on the most efficient paths that lead to food while abandoning less productive routes. This process creates optimized networks that balance efficiency with resilience—a sought-after quality in transportation and infrastructure systems.

The organism’s ability to find the shortest path between multiple points while maintaining backup connections has made it a favorite among researchers studying network design. Most famously, in 2010 researchers at Hokkaido University reported results from an experiment in which they dumped a blob of slime mold onto a detailed map of Tokyo’s railway system, marking major stations with oat flakes. At first the brainless organism engulfed the entire map. Days later, it had pruned itself back, leaving behind only the most efficient pathways. The result closely mirrored Tokyo’s actual rail network.

Since then, researchers worldwide have used slime mold to solve mazes and even map the dark matter holding the universe together. Experts across Mexico, Great Britain, and the Iberian Peninsula have tasked the organism with redesigning their roadways—though few of these experiments have translated into real-world upgrades.

Historically, researchers working with the organism would print a physical map and add slime mold onto it. But Kay believes that Mireta’s approach, which replicates slime mold’s pathway-building without requiring actual organisms, could help solve more complex problems. Slime mold is visible to the naked eye, so Kay’s team studied how the blobs behave in the lab, focusing on the key behaviors that make these organisms so good at creating efficient networks. Then they translated these behaviors into a set of rules that became an algorithm.
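
The reinforce-and-prune loop at the heart of that behavior is simple enough to sketch. The following is a minimal, hypothetical Python illustration of the general idea, not Mireta's actual algorithm: start with a dense network (the mold engulfing the whole map), repeatedly route flow between terminals, thicken the edges that carry flow, let every edge decay, and abandon edges that wither while preserving backup connectivity. The graph setup, parameters, and networkx-based routing are all assumptions for illustration.

```python
# A toy Physarum-style reinforce-and-prune loop. Illustrative only:
# the graph, parameters, and routing scheme are assumptions, not
# Mireta's algorithm.
import itertools
import random

import networkx as nx


def physarum_network(n_nodes, terminals, steps=200,
                     gain=0.5, decay=0.9, prune_below=0.05):
    """Grow a network between `terminals` (think stations marked with oat flakes)."""
    # Begin fully connected: the blob engulfing the entire map.
    G = nx.complete_graph(n_nodes)
    for u, v in G.edges:
        G.edges[u, v]["conductivity"] = 1.0

    for _ in range(steps):
        # Travel cost falls as conductivity rises, so thick tubes attract flow.
        for u, v in G.edges:
            G.edges[u, v]["cost"] = 1.0 / G.edges[u, v]["conductivity"]

        # Route flow between a random pair of terminals.
        src, dst = random.sample(terminals, 2)
        path = nx.shortest_path(G, src, dst, weight="cost")

        # Double down on the tubes that carried flow...
        for u, v in zip(path, path[1:]):
            G.edges[u, v]["conductivity"] += gain

        # ...while every tube decays a little.
        for u, v in G.edges:
            G.edges[u, v]["conductivity"] *= decay

        # Abandon withered routes, but keep a backup link whenever a
        # removal would disconnect any pair of terminals.
        weak = [(u, v) for u, v in G.edges
                if G.edges[u, v]["conductivity"] < prune_below]
        for u, v in weak:
            G.remove_edge(u, v)
            if not all(nx.has_path(G, a, b)
                       for a, b in itertools.combinations(terminals, 2)):
                G.add_edge(u, v, conductivity=prune_below)

    return G


if __name__ == "__main__":
    net = physarum_network(n_nodes=12, terminals=[0, 3, 7, 11])
    print(sorted(net.edges))  # the surviving "tubes"
```

The balance between the reinforcement gain and the decay rate is what trades efficiency against resilience: slower decay leaves more redundant links standing, much as the organism keeps backup connections.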

Some experts aren’t convinced. According to Geoff Boeing, an associate professor at the University of Southern California’s Department of Urban Planning and Spatial Analysis, such algorithms don’t address “the messy realities of entering a room with a group of stakeholders and co-visioning a future for their community.” Modern urban planning problems, he says, aren’t solely technical issues: “It’s not that we don’t know how to make infrastructure networks efficient, resilient, connected—it’s that it’s politically challenging to do so.”

Michael Batty, a professor emeritus at University College London’s Centre for Advanced Spatial Analysis, finds the concept more promising. “There is certainly potential for exploration,” he says, noting that humans have long drawn parallels between biological systems and cities. For decades now, designers have looked to nature for ideas—think ventilation systems inspired by termite mounds or bullet trains modeled after the kingfisher’s beak.

Like Boeing, Batty worries that such algorithms could reinforce top-down planning when most cities grow from the bottom up. But for Kay, the algorithm’s beauty lies in how it mimics bottom-up biological growth—like the way slime mold starts from multiple points and connects organically rather than following predetermined paths. 

Since launching earlier this year, Mireta, which is based in Cambridge, Massachusetts, has worked on about five projects. And slime mold is just the beginning. The team is also looking at algorithms inspired by ants, which leave chemical trails that strengthen with use and have their own decentralized solutions for network optimization. “Biology has solved just about every network problem you can imagine,” says Kay.
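
The ant version of the idea can be sketched just as compactly. The toy below is illustrative only (the route names, lengths, and parameters are invented): trails strengthen with use and evaporate otherwise, so short routes that get refreshed often come to dominate without any central planner.

```python
# A toy ant-trail selection loop: pheromone strengthens with use and
# evaporates otherwise. Illustrative assumptions throughout.
import random


def ant_trails(routes, trips=500, deposit=1.0, evaporation=0.05):
    """Pick the dominant route from `routes` (name -> length) by pheromone."""
    pheromone = {name: 1.0 for name in routes}
    for _ in range(trips):
        # Each ant chooses a route with probability proportional to its trail.
        total = sum(pheromone.values())
        r, acc, choice = random.uniform(0, total), 0.0, None
        for name in routes:
            acc += pheromone[name]
            if r <= acc:
                choice = name
                break
        if choice is None:  # guard against float round-off
            choice = name
        # Shorter routes earn a bigger deposit; every trail evaporates.
        pheromone[choice] += deposit / routes[choice]
        for name in routes:
            pheromone[name] *= 1 - evaporation
    return max(pheromone, key=pheromone.get)


routes = {"direct": 4.0, "detour": 7.0, "scenic": 10.0}  # name -> length
print(ant_trails(routes))  # "direct" usually accumulates the strongest trail
```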

Elissaveta M. Brandon is an independent journalist interested in how design, culture, and technology shape the way we live.
