Colossal raises $200M to “de-extinct” the woolly mammoth, thylacine and dodo

Colossal BioSciences has raised $200 million in a new round of funding to bring back extinct species like the woolly mammoth.

Dallas- and Boston-based Colossal is making scientific strides toward “de-extinction,” or bringing back extinct species like the woolly mammoth, the thylacine and the dodo.

I would be remiss if I did not mention this is the plot of Michael Crichton’s novel Jurassic Park, where scientists used DNA found in mosquitoes preserved in amber to bring back Tyrannosaurus rex and other dinosaurs. I mean, what could go wrong when science fiction becomes reality? Kidding aside, this is pretty amazing work, and I’m not surprised to see game dev Richard Garriott among the investors.

The round was led by TWG Global, a diversified holding company with operating businesses and investments in technology/AI, financial services, private lending, and sports and media, jointly headed by Mark Walter and Thomas Tull.

Since launching in September 2021, Colossal has raised $435 million in total funding. This latest round places the company at a $10.2 billion valuation. Colossal will leverage the new capital to continue advancing its genetic engineering technologies while pioneering revolutionary software, wetware and hardware solutions with applications beyond de-extinction, including species preservation and human healthcare.

“Our recent successes in creating the technologies necessary for our end-to-end de-extinction toolkit have been met with enthusiasm by the investor community. TWG Global and our other partners have been bullish in their desire to help us scale as quickly and efficiently as possible,” said Colossal CEO Ben Lamm in a statement. “This funding will grow our team, support new technology development, expand our de-extinction species list, while continuing to allow us to carry forth our mission to make extinction a thing of the past.”

Colossal employs over 170 scientists and partners with labs in Boston, Dallas, and Melbourne, Australia. In addition, Colossal sponsors over 40 full-time postdoctoral scholars and research programs in 16 partner labs at some of the most prestigious universities around the globe.

Colossal’s scientific advisory board has grown to include over 95 of the top scientists working in genomics, ancient DNA, ecology, conservation, developmental biology, and paleontology. Together, these teams are tackling some of the hardest problems in biology, including mapping genotypes to traits and behaviors, understanding developmental pathways to phenotypes like craniofacial shape, tusk formation, and coat color patterning, and developing new tools for multiplex and large-insert genome engineering.

“Colossal is the leading company working at the intersection of AI, computational biology and genetic engineering for both de-extinction and species preservation,” said Mark Walter, CEO of TWG Global, in a statement. “Colossal has assembled a world-class team that has already driven, in a short period of time, significant technology innovations and impact in advancing conservation, which is a core value of TWG Global. We are thrilled to support Colossal as it accelerates and scales its mission to combat the animal extinction crisis.”

“Colossal is a revolutionary genetics company making science fiction into science fact. We are creating the technology to build de-extinction science and scale conservation biology particularly for endangered and at-risk species. I could not be more appreciative of the investor support for this important mission,” said George Church, Colossal cofounder and a professor of genetics at Harvard Medical School and professor of Health Sciences and Technology at Harvard and the Massachusetts Institute of Technology (MIT).

In October 2024, Colossal launched the Colossal Foundation, a sister 501(c)(3) focused on overseeing the deployment and application of Colossal-developed science and technology innovations. The organization currently supports 48 conservation partners and their initiatives around the world.

Partners include Re:wild, Save The Elephants, Biorescue, Birdlife International, Conservation Nation, Sezarc, Mauritian Wildlife Foundation, Aussie Ark, International Elephant Foundation and Saving Animals From Extinction. Currently, the Colossal Foundation focuses on supporting conservation partners who are developing innovative technologies that can be applied to conservation, as well as those who benefit from new genetic rescue and de-extinction technologies in combating the biodiversity extinction crisis.

Tracking Progress on Colossal’s De-Extinction Projects

Ben Lamm is CEO of Colossal Biosciences

The first step in every de-extinction project is to recover and analyze preserved genetic material and use that data to identify each species’ core genomic components. In addition to recruiting Beth Shapiro, a global leader in ancient DNA research, as Colossal’s chief science officer, Colossal has built a team of Ph.D. experts in ancient DNA among its scientific advisors, including Love Dalen, Andrew Pask, Tom Gilbert, Michael Hofreiter, Hendrik Poinar, Erez Lieberman Aiden, and Matthew Wooler.

With this team, Colossal continues to push advances in ancient DNA through support to academic labs and internal scientific research. All three core species – mammoth, thylacine, and dodo – have already benefited from this coalescence of expertise. As an example, Colossal now has the most contiguous and complete ancient genomes to date for each of these three species; these genomes are the blueprints from which these species’ core traits will be engineered.

The path from ancient genome to living species requires a systems model approach to innovation across computational biology, cellular engineering, genetic engineering, embryology, and animal husbandry, with refinement and tuning in each step along the de-extinction pipeline occurring simultaneously. To date, Colossal’s scientists have achieved monumental breakthroughs at each step for each of the three flagship species.

In the last three years, Colossal’s first major project to be announced, the woolly mammoth project, generated new genomic resources, made breakthroughs in cell biology and genome engineering, and explored the ecological impact of de-extinction, with implications for mammoths, elephants, and species across the vertebrate tree of life.

Woolly Mammoth De-extinction Project Progress

The mammoth team has generated chromosome-scale reference genomes for the African elephant, Asian elephant, and rock hyrax, all of which have been released on the National Center for Biotechnology Information database. It has also generated the first de novo assembled mammoth genome – that is, a genome assembled using only the ancient DNA reads rather than by mapping them to a reference genome. This genome identified several genetic loci that are missing in reference-guided assemblies.
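
The distinction matters because de novo assembly can recover sequence that is simply absent from any living relative’s genome. As a purely illustrative sketch – not Colossal’s pipeline – the toy Python below builds a De Bruijn graph from overlapping reads and walks it to reconstruct a sequence without ever consulting a reference; real ancient-DNA assemblers must also contend with short fragments, damage patterns and contamination.

```python
# Toy de novo assembly: build a De Bruijn graph from raw reads and walk it
# to reconstruct a sequence, with no reference genome involved.
# Illustrative only -- not Colossal's pipeline; real ancient-DNA assembly
# must also handle short fragments, damage patterns and contamination.
from collections import defaultdict

def de_bruijn_assemble(reads, k=4):
    # Map each (k-1)-mer prefix to the suffixes that follow it in the reads.
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])

    # Greedy walk: start from a node that never appears as a suffix
    # (a contig start) and repeatedly consume an outgoing edge.
    suffixes = {s for outs in graph.values() for s in outs}
    start = next(n for n in graph if n not in suffixes)
    contig, node = start, start
    while graph[node]:
        node = graph[node].pop()
        contig += node[-1]
    return contig

reads = ["ATGGAAGT", "AAGTCGCT"]      # overlapping toy "reads"
print(de_bruijn_assemble(reads))      # prints ATGGAAGTCGCT
```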

And it has acquired and aligned 60+ ancient genomes for the woolly mammoth and Columbian mammoth in collaboration with key scientific advisors Love Dalen and Tom van der Valk. These data, in combination with 30+ genomes for extant elephant species including Asian, African, and Bornean elephants, have dramatically increased the accuracy of mammoth-specific variant calling.
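
To see in miniature why more genomes sharpen variant calling, consider the hedged sketch below: it scans hypothetical aligned sequences for sites where every mammoth sample is fixed for one allele and every elephant sample for another. With only one or two genomes per species, within-population variation – like the polymorphic site in the example – could be mistaken for a species-defining difference; production pipelines of course work genome-wide with base-quality and DNA-damage models.

```python
# Hedged sketch of species-specific variant calling on toy data: report
# alignment columns where mammoths and elephants are each fixed for
# different alleles. Sequences are hypothetical 8-bp alignments; real
# pipelines operate genome-wide with quality and damage models.

def fixed_differences(group_a, group_b):
    """Positions where the two groups are each fixed for different alleles."""
    sites = []
    for pos in range(len(group_a[0])):
        alleles_a = {seq[pos] for seq in group_a}
        alleles_b = {seq[pos] for seq in group_b}
        # Call a site only if both groups are fixed and the alleles differ.
        if len(alleles_a) == 1 and len(alleles_b) == 1 and alleles_a != alleles_b:
            sites.append((pos, alleles_a.pop(), alleles_b.pop()))
    return sites

mammoths  = ["ACGTTACA", "ACGTTACA", "ACGTAACA"]  # pos 4 varies *within* mammoths
elephants = ["ACCTTACG", "ACCTTACG", "ACCTTACG"]
for pos, mam, ele in fixed_differences(mammoths, elephants):
    print(f"site {pos}: mammoth={mam}, elephant={ele}")  # reports sites 2 and 7
```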

The team has derived, characterized, and biobanked 10+ primary cell lines from acquired tissue for Asian elephants, rock hyrax, and aardvark for use in the company’s conservation and de-extinction pipelines, and it became the first to derive pluripotent stem cells for Asian elephants. These cells are essential for in-vitro embryogenesis and gametogenesis. The team has made numerous other strides as well.

“These mammoth milestones mark a pivotal step forward for de-extinction technologies,” said Love Dalen, professor at the Centre for Paleogenetics at Stockholm University and a key advisor to the mammoth project, in a statement. “The dedication of the team at Colossal to precision and scientific rigor is truly inspiring, and I have no doubt they will be successful in resurrecting core mammoth traits.”

Thylacine De-extinction Project Progress

A thylacine image generated by Microsoft Copilot.

The Colossal thylacine team recently made announcements demonstrating progress on the various work streams critical for the de-extinction of the thylacine.

Since the team’s inception two years ago, the Australia- and Texas-based teams have generated the highest-quality ancient genome to date for a thylacine, at 99.9% complete, using ancient long reads and ancient RNA – a world first once thought to be an impossible goal – creating the genomic blueprint for thylacine de-extinction.

They have generated ancient genomes for 11 individual thylacines, thereby gaining an understanding of fixed variants versus population-level variation in thylacines pre-extinction and enabling more accurate prediction of de-extinction targets.

And they have assembled telomere-to-telomere genome sequences for all dasyurid species – the evolutionary cousins of thylacines – providing resources both to improve Colossal’s understanding of thylacine evolution and underpin its thylacine engineering efforts, and to aid in the conservation of threatened marsupial species. They made numerous other advances as well.

“These milestones put us ahead of schedule on many of the critical technologies needed to underpin de-extinction efforts. At the same time, it creates major advances in genomics, stem cell generation and engineering, and marsupial reproductive technologies that are paving the way for the de-extinction of the thylacine and is revolutionizing conservation science for marsupials. Colossal’s work demonstrates that with innovation and perseverance, we can offer groundbreaking solutions to safeguard biodiversity— and the team is already doing this in many visionary ways,” said Andrew Pask, Ph.D., in a statement.

Dodo De-extinction Project Progress

Dodo image generated by Microsoft Copilot.

The Colossal Avian Genomics Group is currently focused on the dodo project as well as building a distinct suite of tools for avian genome engineering, work that differs from some of the company’s mammalian projects. The dodo-specific team’s progress includes generating complete, high-coverage genomes for the dodo; its extinct sister species, the solitaire; and the critically endangered manumea (also known as the “tooth-billed pigeon” and “little dodo”).

They also generated and published a chromosome-scale assembly of the Nicobar pigeon (the dodo’s closest living relative) and developed a population-scale dataset of Nicobar pigeon genomes for computational identification of dodo-specific traits.

And the team developed a machine learning approach to identify genes associated with craniofacial shape in birds, yielding gene-editing targets for resurrecting the dodo’s unique bill morphology. They also processed more than 10,000 eggs and optimized culture conditions for growing primordial germ cells (PGCs) for four bird species. The team made a number of other strides as well.
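
The article does not detail the method, but the general shape of such an approach can be sketched in a hedged way: fit a model from per-gene variant features to a beak-shape measurement across species, then rank genes by how much they drive the prediction. Everything below is synthetic, and the gene labels are illustrative stand-ins (BMP4 and CALM1 are genes implicated in beak shape in finch studies, not confirmed Colossal targets).

```python
# Hedged sketch of ML-driven gene prioritization for craniofacial traits:
# learn a mapping from per-gene variant scores to a beak-shape measure,
# then rank genes by feature importance. Synthetic data; gene names are
# illustrative, not Colossal's actual targets or method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
genes = ["BMP4", "CALM1", "GENE_X", "GENE_Y", "GENE_Z"]  # hypothetical panel

# Rows: 40 hypothetical bird species; columns: one variant score per gene.
X = rng.normal(size=(40, len(genes)))
# Simulate beak curvature driven mostly by the first two genes, plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=40)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for importance, gene in sorted(zip(model.feature_importances_, genes), reverse=True):
    print(f"{gene}: {importance:.2f}")  # the two causal genes rank highest
```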

“As we advance our understanding of avian genomics and developmental biology, we’re seeing remarkable progress in the tools and techniques needed to restore lost bird species,” said Colossal’s chief science officer Beth Shapiro, in a statement. “The unique challenges of avian reproduction require bespoke approaches to genetic engineering, for example, and our dodo team has had impressive success translating tools developed for chickens to tools that have even greater success in pigeons. While work remains, the pace of discovery within our dodo team has exceeded expectations.”

Colossal’s Support of Global Conservation and De-Extinction Efforts

By 2050, it is projected that over 50% of the world’s animal species may be extinct. Today, around 27,000 species go extinct each year, compared with a natural background rate of 10 to 100 species per year. Over the past 50 years (1970–2020), the average size of monitored wildlife populations has shrunk by 73%.

That extinction crisis will have cascading, negative impacts on human health and wellbeing, including reductions in drinkable water, increases in land desertification and increases in food insecurity. While current conservation efforts are imperative to protecting species, newer technologies and techniques are required that can scale in response to the speed at which humanity is changing the planet and destroying ecosystems.

Colossal was created to respond to this crisis. Its growing de-extinction and species preservation toolkit of software, wetware and hardware solutions provides new, scalable approaches to this systems-level existential threat and biodiversity crisis.

“The technological advances we’re seeing in genetic engineering and synthetic biology are rapidly transforming our understanding of what’s possible in species restoration,” said Shapiro. “While the path to de-extinction is complex, each step forward brings us closer to understanding how we might responsibly reintroduce traits from lost species. The real promise lies not just in the technology, but also in how we might apply these tools to protect and restore endangered species and ecosystems.”

The breakthroughs in Colossal’s core projects create a ripple effect across species conservation. Each Colossal core species is tied to conservation efforts that support other endangered and at-risk species in the respective animal’s family group.

The company’s work toward mammoth restoration has simultaneously advanced reproductive and genetic technologies that can help preserve endangered elephant species, while the dodo program is pioneering avian genetic tools that will benefit threatened bird species worldwide. Through the Colossal Foundation and its partnerships with leading conservation organizations, Colossal is transforming these scientific advances into practical solutions that can help protect and restore vulnerable species across multiple taxonomic families.

Key initiatives include Colossal’s $7.5 million in new donations to fund ancient DNA research across a diverse selection of species; the development of a gene-engineering solution to give Australia’s endangered northern quoll resistance to cane toad toxin; and a partnership with the international conservation organization Re:wild on a suite of initiatives to preserve the world’s most threatened species.

Colossal and Re:wild also have a joint 10-year conservation strategy to save some of the world’s most threatened species by combining Colossal’s genetic technologies with Re:wild’s experience and partnerships in species conservation across the world. A number of other efforts are under way too.

“Colossal is advancing the development of genetic technologies for conservation at a rapid pace. Their cutting-edge technologies are changing what is possible in species conservation and are permitting us to envision a world where many more Critically Endangered species not only survive but thrive,” said Barney Long, PhD, senior director of conservation strategies for Re:wild, in a statement.

Colossal’s additional strategic investors include funds such as USIT, Animal Capital, Breyer Capital, At One Ventures, In-Q-Tel, BOLD Capital, Peak 6, and Draper Associates, among others, as well as private investors including Robert Nelsen, Peter Jackson, Fran Walsh, Ric Edelman, Brandon Fugal, Paul Tudor Jones, Richard Garriott, Giammaria Giuliani, Sven-Olof Lindblad, Victor Vescovo, and Jeff Wilke.
