
Colossal raises $200M to “de-extinct” the woolly mammoth, thylacine and dodo

Colossal BioSciences has raised $200 million in a new round of funding to bring back extinct species like the woolly mammoth.

Dallas- and Boston-based Colossal is making strides toward scientific breakthroughs in “de-extinction,” or bringing back extinct species like the woolly mammoth, the thylacine, and the dodo.

I would be remiss if I did not mention that this is the plot of Michael Crichton’s novel Jurassic Park, where scientists used DNA found in mosquitoes preserved in amber to bring back Tyrannosaurus rex and other dinosaurs. I mean, what could go wrong when science fiction becomes reality? Kidding aside, this is pretty amazing work and I’m not surprised to see game dev Richard Garriott among the investors.

The round was led by TWG Global, a diversified holding company with operating businesses and investments in technology and AI, financial services, private lending, and sports and media, jointly led by Mark Walter and Thomas Tull.

Since launching in September 2021, Colossal has raised $435 million in total funding. This latest round places the company at a $10.2 billion valuation. Colossal will leverage the new capital to continue advancing its genetic engineering technologies while pioneering revolutionary new software, wetware, and hardware solutions with applications beyond de-extinction, including species preservation and human healthcare.

“Our recent successes in creating the technologies necessary for our end-to-end de-extinction toolkit have been met with enthusiasm by the investor community. TWG Global and our other partners have been bullish in their desire to help us scale as quickly and efficiently as possible,” said Colossal CEO Ben Lamm, in a statement. “This funding will grow our team, support new technology development, expand our de-extinction species list, while continuing to allow us to carry forth our mission to make extinction a thing of the past.”

Colossal employs over 170 scientists and partners with labs in Boston, Dallas, and Melbourne, Australia. In addition, Colossal sponsors over 40 full-time postdoctoral scholars and research programs in 16 partner labs at some of the most prestigious universities around the globe.

Colossal’s scientific advisory board has grown to include over 95 of the top scientists working in genomics, ancient DNA, ecology, conservation, developmental biology, and paleontology. Together, these teams are tackling some of the hardest problems in biology, including mapping genotypes to traits and behaviors, understanding developmental pathways to phenotypes like craniofacial shape, tusk formation, and coat color patterning, and developing new tools for multiplex and large-insert genome engineering.

“Colossal is the leading company working at the intersection of AI, computational biology and genetic engineering for both de-extinction and species preservation,” said Mark Walter, CEO of TWG Global, in a statement. “Colossal has assembled a world-class team that has already driven, in a short period of time, significant technology innovations and impact in advancing conservation, which is a core value of TWG Global. We are thrilled to support Colossal as it accelerates and scales its mission to combat the animal extinction crisis.”

“Colossal is a revolutionary genetics company making science fiction into science fact. We are creating the technology to build de-extinction science and scale conservation biology particularly for endangered and at-risk species. I could not be more appreciative of the investor support for this important mission,” said George Church, Colossal cofounder and a professor of genetics at Harvard Medical School and professor of Health Sciences and Technology at Harvard and the Massachusetts Institute of Technology (MIT).

In October 2024, Colossal launched the Colossal Foundation, a sister 501(c)(3) focused on overseeing the deployment and application of Colossal-developed science and technology innovations. The organization currently supports 48 conservation partners and their initiatives around the world.

These include partners like Re:wild, Save The Elephants, Biorescue, Birdlife International, Conservation Nation, Sezarc, Mauritian Wildlife Foundation, Aussie Ark, International Elephant Foundation, and Saving Animals From Extinction. The Colossal Foundation currently focuses on supporting conservation partners who are developing innovative technologies applicable to conservation, and those who benefit from the deployment of new genetic rescue and de-extinction technologies in combating the biodiversity extinction crisis.

Tracking Progress on Colossal’s De-Extinction Projects

Ben Lamm is CEO of Colossal Biosciences

The first step in every de-extinction project is to recover and analyze preserved genetic material and use that data to identify each species’ core genomic components. In addition to recruiting Beth Shapiro, a global leader in ancient DNA research, as Colossal’s chief science officer, Colossal has built a team of Ph.D. experts in ancient DNA among its scientific advisors, including Love Dalen, Andrew Pask, Tom Gilbert, Michael Hofreiter, Hendrik Poinar, Erez Lieberman Aiden, and Matthew Wooler.

With this team, Colossal continues to push advances in ancient DNA through support to academic labs and internal scientific research. All three core species – mammoth, thylacine, and dodo – have already benefited from this coalescence of expertise. As an example, Colossal now has the most contiguous and complete ancient genomes to date for each of these three species; these genomes are the blueprints from which these species’ core traits will be engineered.
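A note on what “most contiguous” means in practice: assembly contiguity is conventionally summarized with the N50 statistic, the contig length at which contigs of that size or longer account for at least half of the assembled bases. Here is a minimal sketch of the calculation, using invented contig lengths rather than real mammoth data:

```python
# Toy N50 calculation: the standard contiguity metric for genome assemblies.
# Contig lengths below are invented for illustration, not real mammoth data.

def n50(contig_lengths):
    """Smallest length L such that contigs >= L hold at least half the bases."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

lengths = [1_200_000, 800_000, 500_000, 150_000, 90_000, 40_000]
print(f"Assembly size: {sum(lengths):,} bp, N50: {n50(lengths):,} bp")
# Assembly size: 2,780,000 bp, N50: 800,000 bp
```

The higher the N50 relative to genome size, the fewer gaps and fragments the downstream engineering work has to contend with.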

The path from ancient genome to living species requires a systems model approach to innovation across computational biology, cellular engineering, genetic engineering, embryology, and animal husbandry, with refinement and tuning in each step along the de-extinction pipeline occurring simultaneously. To date, Colossal’s scientists have achieved monumental breakthroughs at each step for each of the three flagship species.

In the last three years, Colossal’s first major project to be announced, the woolly mammoth project, generated new genomic resources, made breakthroughs in cell biology and genome engineering, and explored the ecological impact of de-extinction, with implications for mammoths, elephants, and species across the vertebrate tree of life.

Woolly Mammoth De-extinction Project Progress

The mammoth team has generated chromosome-scale reference genomes for the African elephant, Asian elephant, and rock hyrax, all of which have been released in the National Center for Biotechnology Information database. It has also generated the first de novo assembled mammoth genome – that is, a genome assembled using only the ancient DNA reads rather than mapped to a reference genome. This assembly identified several genetic loci that are missing from reference-guided assemblies.
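To make the distinction concrete: a de novo assembler reconstructs sequence purely from overlaps among the reads themselves, with no reference genome guiding placement. The toy greedy overlap-merger below captures the core idea under heavy simplification; all sequences are invented, and production assemblers use de Bruijn or string graphs rather than this brute-force approach.

```python
# Toy de novo assembly by greedy overlap merging, for illustration only.
# Real assemblers use de Bruijn or string graphs and must cope with the
# short, damaged, contamination-prone reads typical of ancient DNA.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b, else 0."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b)
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_len == 0:
            break  # no overlaps left; remaining reads stay separate contigs
        merged = reads[best_i] + reads[best_j][best_len:]
        reads = [r for k, r in enumerate(reads)
                 if k not in (best_i, best_j)] + [merged]
    return reads

reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]
print(greedy_assemble(reads))  # ['ATTAGACCTGCCGGAATAC']
```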

The team has also acquired and aligned 60+ ancient genomes for the woolly mammoth and Columbian mammoth in collaboration with key scientific advisors Love Dalen and Tom van der Valk. These data, combined with 30+ genomes for extant elephant species including Asian, African, and Bornean elephants, have dramatically increased the accuracy of mammoth-specific variant calling.
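Colossal hasn’t detailed the pipeline here, but conceptually the payoff of stacking up genomes is simple: a mammoth-specific variant is an alignment position where every mammoth carries one allele and every elephant another, while a position that varies among the mammoths themselves reflects population-level variation rather than an engineering target. A toy sketch of that logic, with invented sequences standing in for real aligned genomes:

```python
# Toy illustration of calling fixed, species-specific differences from an
# alignment. Sequences are invented; real pipelines work from mapped reads
# with quality filters, ancient-DNA damage correction, and genotype models.

mammoths  = ["ACGTTACA", "ACGTTACA", "ACGTAACA"]   # three ancient genomes
elephants = ["ACCTTACG", "ACCTTACG", "ACCTTACG"]   # three extant genomes

for pos in range(len(mammoths[0])):
    m_alleles = {seq[pos] for seq in mammoths}
    e_alleles = {seq[pos] for seq in elephants}
    if len(m_alleles) == 1 and len(e_alleles) == 1 and m_alleles != e_alleles:
        print(f"pos {pos}: fixed mammoth-specific variant "
              f"{e_alleles.pop()} -> {m_alleles.pop()}")
    elif len(m_alleles) > 1:
        print(f"pos {pos}: varies within mammoths {sorted(m_alleles)} "
              "(population-level variation, not an engineering target)")
```

More genomes on both sides of the comparison shrink the odds that a within-species polymorphism is mistaken for a fixed difference, which is why the 60+ mammoth and 30+ elephant genomes matter.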

The team has derived, characterized, and biobanked 10+ primary cell lines from acquired tissue for Asian elephants, rock hyrax, and aardvark for use in company conservation and de-extinction pipelines, and it became the first to derive pluripotent stem cells for Asian elephants. These cells are essential for in vitro embryogenesis and gametogenesis. There have been numerous other steps forward as well.

“These mammoth milestones mark a pivotal step forward for de-extinction technologies,” said Love Dalen, professor at the Centre for Paleogenetics, University of Stockholm, and a key advisor to the mammoth project, in a statement. “The dedication of the team at Colossal to precision and scientific rigor is truly inspiring, and I have no doubt they will be successful in resurrecting core mammoth traits.”

Thylacine De-extinction Project Progress

A thylacine image generated by Microsoft Copilot.

The Colossal thylacine team recently made announcements demonstrating progress on the various work streams critical for the de-extinction of the thylacine.

Since the team’s inception two years ago, the Australia- and Texas-based teams have generated the highest-quality ancient genome to date for a thylacine, at 99.9% complete, using ancient long reads and ancient RNA – a world first, once thought to be an impossible goal – creating the genomic blueprint for thylacine de-extinction.

They have generated ancient genomes for 11 individual thylacines, thereby distinguishing fixed variants from population-level variation in thylacines pre-extinction and enabling more accurate prediction of de-extinction targets.

And they have assembled telomere-to-telomere genome sequences for all dasyurid species – the evolutionary cousins of thylacines – providing resources both to improve Colossal’s understanding of thylacine evolution and underpin its thylacine engineering efforts, and to aid in the conservation of threatened marsupial species. They have made numerous other advances as well.

“These milestones put us ahead of schedule on many of the critical technologies needed to underpin de-extinction efforts. At the same time, it creates major advances in genomics, stem cell generation and engineering, and marsupial reproductive technologies that are paving the way for the de-extinction of the thylacine and is revolutionizing conservation science for marsupials. Colossal’s work demonstrates that with innovation and perseverance, we can offer groundbreaking solutions to safeguard biodiversity— and the team is already doing this in many visionary ways,” said Andrew Pask, Ph.D., in a statement.

Dodo De-extinction Project Progress

Dodo image generated by Microsoft Copilot.

The Colossal Avian Genomics Group is currently focused on the dodo project as well as on building a distinct suite of tools for avian genome engineering, which differs from those used in the company’s mammalian projects. The dodo-specific team’s progress includes generating complete, high-coverage genomes for the dodo; its extinct sister species, the solitaire; and the critically endangered manumea (also known as the “tooth-billed pigeon” and “little dodo”).
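“High coverage” refers to sequencing depth: the expected number of reads overlapping any given base, approximated by reads × read length ÷ genome size (the Lander-Waterman expectation). A quick sketch with invented numbers, not the dodo project’s actual figures:

```python
# Sequencing coverage: average number of reads spanning each base
# (Lander-Waterman expectation). All numbers are invented for illustration.
reads, read_len, genome_size = 400_000_000, 100, 1_300_000_000  # ~1.3 Gb genome
print(f"~{reads * read_len / genome_size:.0f}x coverage")  # ~31x
```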

They have also generated and published a chromosome-scale assembly of the Nicobar pigeon (the dodo’s closest living relative) and developed a population-scale dataset of Nicobar pigeon genomes for computational identification of dodo-specific traits.

The team also developed a machine learning approach to identify genes associated with craniofacial shape in birds as gene-editing targets for resurrecting the dodo’s unique bill morphology, and it processed more than 10,000 eggs and optimized culture conditions for growing primordial germ cells (PGCs) for four bird species. The team made a number of other strides as well.
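Colossal hasn’t published which model the dodo team used, so treat the following as an assumption about the general shape of such an approach: measure a craniofacial trait across many bird species, encode per-gene genotype features, and rank genes by how much they explain the trait. The sketch uses a random-forest regressor on invented data; the planted signal sits on BMP4 and ALX1, two genes genuinely linked to beak morphology in Darwin’s finches, with the rest as decoys.

```python
# Hypothetical sketch: rank candidate genes by association with bill shape.
# Method (random-forest feature importance) and all data are illustrative
# assumptions, not Colossal's published approach.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
genes = ["BMP4", "ALX1", "CALM1", "GENE_X", "GENE_Y"]  # last two are decoys
n_species = 40

# Variant "dosage" (0/1/2 copies of a derived allele) per gene per species.
X = rng.integers(0, 3, size=(n_species, len(genes))).astype(float)
# Invented ground truth: bill depth driven mostly by BMP4 and ALX1.
y = 2.0 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(0.0, 0.3, n_species)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
ranking = sorted(zip(genes, model.feature_importances_), key=lambda t: -t[1])
for gene, score in ranking:
    print(f"{gene:7s} importance {score:.2f}")  # BMP4 and ALX1 rank on top
```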

“As we advance our understanding of avian genomics and developmental biology, we’re seeing remarkable progress in the tools and techniques needed to restore lost bird species,” said Colossal’s chief science officer Beth Shapiro, in a statement. “The unique challenges of avian reproduction require bespoke approaches to genetic engineering, for example, and our dodo team has had impressive success translating tools developed for chickens to tools that have even greater success in pigeons. While work remains, the pace of discovery within our dodo team has exceeded expectations.”

Colossal’s Support of Global Conservation and De-Extinction Efforts

By 2050, over 50% of the world’s animal species are projected to be extinct. Today, around 27,000 species go extinct each year, compared to a natural background rate of 10 to 100 species per year. Over the past 50 years (1970–2020), the average size of monitored wildlife populations has shrunk by 73%.
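Taken at face value, those figures put today’s extinction rate at roughly 270 to 2,700 times the natural background rate:

```python
# Quick check on the article's figures.
current, background_low, background_high = 27_000, 10, 100  # species per year
print(f"{current / background_high:.0f}x to {current / background_low:,.0f}x "
      "the natural background rate")  # 270x to 2,700x
```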

That extinction crisis will have cascading, negative impacts on human health and wellbeing, including reductions in drinkable water, increases in land desertification, and increases in food insecurity. While current conservation efforts are imperative to protecting species, new technologies and techniques are required that can scale in response to the speed at which humanity is changing the planet and destroying ecosystems.

Colossal was created to respond to this crisis. And Colossal’s growing de-extinction and species preservation toolkit of software, wetware, and hardware solutions provides new, scalable approaches to this systems-level biodiversity crisis.

“The technological advances we’re seeing in genetic engineering and synthetic biology are rapidly transforming our understanding of what’s possible in species restoration,” said Shapiro. “While the path to de-extinction is complex, each step forward brings us closer to understanding how we might responsibly reintroduce traits from lost species. The real promise lies not just in the technology, but also in how we might apply these tools to protect and restore endangered species and ecosystems.”

The breakthroughs in Colossal’s core projects create a ripple effect across species conservation. Each Colossal core species is tied to conservation efforts that support other endangered and at-risk species in the respective animal’s family group.

The company’s work toward mammoth restoration has simultaneously advanced reproductive and genetic technologies that can help preserve endangered elephant species, while the dodo program is pioneering avian genetic tools that will benefit threatened bird species worldwide. Through the Colossal Foundation and its partnerships with leading conservation organizations, Colossal is transforming these scientific advances into practical solutions that can help protect and restore vulnerable species across multiple taxonomic families.

Key initiatives include Colossal’s $7.5M in new donations to fund ancient DNA research across a diverse selection of species; the development of a gene-engineering solution to give Australia’s endangered northern quoll resistance to cane toad toxin; and a partnership with the international conservation organization Re:wild on a suite of initiatives to preserve the world’s most threatened species.

The latter includes a joint 10-year conservation strategy to save some of the world’s most threatened species by leveraging the power of Colossal’s genetic technologies and Re:wild’s experience and partnerships for species conservation across the world. A number of other efforts are under way as well.

“Colossal is advancing the development of genetic technologies for conservation at a rapid pace. Their cutting-edge technologies are changing what is possible in species conservation and are permitting us to envision a world where many more Critically Endangered species not only survive but thrive,” said Barney Long, PhD, senior director of conservation strategies for Re:wild, in a statement.

Colossal’s additional strategic investors include funds such as USIT, Animal Capital, Breyer Capital, At One Ventures, In-Q-Tel, BOLD Capital, Peak 6, and Draper Associates, among others, as well as private investors including Robert Nelsen, Peter Jackson, Fran Walsh, Ric Edelman, Brandon Fugal, Paul Tudor Jones, Richard Garriott, Giammaria Giuliani, Sven-Olof Lindblad, Victor Vescovo, and Jeff Wilke.
