
AI materials discovery now needs to move into the real world

The microwave-size instrument at Lila Sciences in Cambridge, Massachusetts, doesn’t look all that different from others that I’ve seen in state-of-the-art materials labs. Inside its vacuum chamber, the machine zaps a palette of different elements to create vaporized particles, which then fly through the chamber and land to create a thin film, using a technique called sputtering. What sets this instrument apart is that artificial intelligence is running the experiment; an AI agent, trained on vast amounts of scientific literature and data, has determined the recipe and is varying the combination of elements.

Later, a person will walk the samples, each containing multiple potential catalysts, over to a different part of the lab for testing. Another AI agent will scan and interpret the data, using it to suggest another round of experiments to try to optimize the materials’ performance.

This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.

For now, a human scientist keeps a close eye on the experiments and will approve the next steps on the basis of the AI’s suggestions and the test results. But the startup is convinced this AI-controlled machine is a peek into the future of materials discovery—one in which autonomous labs could make it far cheaper and faster to come up with novel and useful compounds.
Flush with hundreds of millions of dollars in new funding, Lila Sciences is one of AI’s latest unicorns. The company is on a larger mission to use AI-run autonomous labs for scientific discovery—the goal is to achieve what it calls scientific superintelligence. But I’m here this morning to learn specifically about the discovery of new materials.

Lila Sciences’ John Gregoire (background) and Rafael Gómez-Bombarelli watch as an AI-guided sputtering instrument makes samples of thin-film alloys.
CODY O’LOUGHLIN

We desperately need better materials to solve our problems. We’ll need improved electrodes and other parts for more powerful batteries; compounds to more cheaply suck carbon dioxide out of the air; and better catalysts to make green hydrogen and other clean fuels and chemicals. And we will likely need novel materials like higher-temperature superconductors, improved magnets, and different types of semiconductors for a next generation of breakthroughs in everything from quantum computing to fusion power to AI hardware.

But materials science has not had many commercial wins in the last few decades. In part because of its complexity and the lack of successes, the field has become something of an innovation backwater, overshadowed by the more glamorous—and lucrative—search for new drugs and insights into biology.

The idea of using AI for materials discovery is not exactly new, but it got a huge boost in 2020 when DeepMind showed that its AlphaFold2 model could accurately predict the three-dimensional structure of proteins. Then, in 2022, came the success and popularity of ChatGPT. The hope that similar AI models using deep learning could aid in doing science captivated tech insiders. Why not use our new generative AI capabilities to search the vast chemical landscape and help simulate atomic structures, pointing the way to new substances with amazing properties?
“Simulations can be super powerful for framing problems and understanding what is worth testing in the lab. But there’s zero problems we can ever solve in the real world with simulation alone.”

John Gregoire, Lila Sciences, chief autonomous science officer

Researchers touted an AI model that had reportedly discovered “millions of new materials.” The money began pouring in, funding a host of startups. But so far there has been no “eureka” moment, no ChatGPT-like breakthrough—no discovery of new miracle materials or even slightly better ones.

The startups that want to find useful new compounds face a common bottleneck: By far the most time-consuming and expensive step in materials discovery is not imagining new structures but making them in the real world. Before trying to synthesize a material, you don’t know if, in fact, it can be made and is stable, and many of its properties remain unknown until you test it in the lab.

“Simulations can be super powerful for kind of framing problems and understanding what is worth testing in the lab,” says John Gregoire, Lila Sciences’ chief autonomous science officer. “But there’s zero problems we can ever solve in the real world with simulation alone.”

Startups like Lila Sciences have staked their strategies on using AI to transform experimentation and are building labs that use agents to plan, run, and interpret the results of experiments to synthesize new materials. Automation in laboratories already exists. But the idea is to have AI agents take it to the next level by directing autonomous labs, where their tasks could include designing experiments and controlling the robotics used to shuffle samples around. And, most important, companies want to use AI to vacuum up and analyze the vast amount of data produced by such experiments in the search for clues to better materials.
If they succeed, these companies could shorten the discovery process from decades to a few years or less, helping uncover new materials and optimize existing ones. But it’s a gamble. Even though AI is already taking over many laboratory chores and tasks, finding new—and useful—materials on its own is another matter entirely.

Innovation backwater

I have been reporting about materials discovery for nearly 40 years, and to be honest, there have been only a few memorable commercial breakthroughs, such as lithium-ion batteries, over that time. There have been plenty of scientific advances to write about, from perovskite solar cells to graphene transistors to metal-organic frameworks (MOFs), materials based on an intriguing type of molecular architecture that recently won its inventors a Nobel Prize. But few of those advances—including MOFs—have made it far out of the lab. Others, like quantum dots, have found some commercial uses, but in general, the kinds of life-changing inventions created in earlier decades have been lacking.

Blame the amount of time (typically 20 years or more) and the hundreds of millions of dollars it takes to make, test, optimize, and manufacture a new material—and the industry’s lack of interest in spending that kind of time and money in low-margin commodity markets. Or maybe we’ve just run out of ideas for making stuff.

The need to both speed up that process and find new ideas is the reason researchers have turned to AI. For decades, scientists have used computers to design potential materials, calculating where to place atoms to form structures that are stable and have predictable characteristics. It’s worked—but only kind of. Advances in AI have made that computational modeling far faster and have promised the ability to quickly explore a vast number of possible structures. Google DeepMind, Meta, and Microsoft have all launched efforts to bring AI tools to the problem of designing new materials.
But the limitations that have always plagued computational modeling of new materials remain. With many types of materials, such as crystals, useful characteristics often can’t be predicted solely by calculating atomic structures. To uncover and optimize those properties, you need to make something real. Or as Rafael Gómez-Bombarelli, one of Lila’s cofounders and an MIT professor of materials science, puts it: “Structure helps us think about the problem, but it’s neither necessary nor sufficient for real materials problems.”

Perhaps no advance exemplified the gap between the virtual and physical worlds more than DeepMind’s announcement in late 2023 that it had used deep learning to discover “millions of new materials,” including 380,000 crystals that it declared “the most stable, making them promising candidates for experimental synthesis.” In technical terms, the arrangement of atoms represented a minimum energy state where they were content to stay put. This was “an order-of-magnitude expansion in stable materials known to humanity,” the DeepMind researchers proclaimed. To the AI community, it appeared to be the breakthrough everyone had been waiting for. The DeepMind research not only offered a gold mine of possible new materials, it also created powerful new computational methods for predicting a large number of structures.

But some materials scientists had a far different reaction. After closer scrutiny, researchers at the University of California, Santa Barbara, said they’d found “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.” In fact, the scientists reported, they didn’t find any truly novel compounds among the ones they looked at; some were merely “trivial” variations of known ones. The scientists appeared particularly peeved that the potential compounds were labeled materials. They wrote: “We would respectfully suggest that the work does not report any new materials but reports a list of proposed compounds. In our view, a compound can be called a material when it exhibits some functionality and, therefore, has potential utility.”

Some of the imagined crystals simply defied the conditions of the real world. To do computations on so many possible structures, DeepMind researchers simulated them at absolute zero, where atoms are well ordered; they vibrate a bit but don’t move around. At higher temperatures—the kind that would exist in the lab or anywhere in the world—the atoms fly about in complex ways, often creating more disorderly crystal structures. A number of the so-called novel materials predicted by DeepMind appeared to be well-ordered versions of disordered ones that were already known.

More generally, the DeepMind paper was simply another reminder of how challenging it is to capture physical realities in virtual simulations—at least for now. Because of the limitations of computational power, researchers typically perform calculations on relatively few atoms. Yet many desirable properties are determined by the microstructure of the materials—at a scale much larger than the atomic world. And some effects, like high-temperature superconductivity or even the catalysis that is key to many common industrial processes, are far too complex or poorly understood to be explained by atomic simulations alone.

A common language

Even so, there are signs that the divide between simulations and experimental work is beginning to narrow. DeepMind, for one, says that since the release of the 2023 paper it has been working with scientists in labs around the world to synthesize AI-identified compounds and has achieved some success. Meanwhile, a number of the startups entering the space are looking to combine computational and experimental expertise in one organization.

One such startup is Periodic Labs, cofounded by Ekin Dogus Cubuk, a physicist who led the scientific team that generated the 2023 DeepMind headlines, and by Liam Fedus, a co-creator of ChatGPT at OpenAI.
Despite its founders’ background in computational modeling and AI software, the company is building much of its materials discovery strategy around synthesis done in automated labs.

The vision behind the startup is to link these different fields of expertise by using large language models that are trained on scientific literature and able to learn from ongoing experiments. An LLM might suggest the recipe and conditions to make a compound; it can also interpret test data and feed additional suggestions to the startup’s chemists and physicists. In this strategy, simulations might suggest possible material candidates, but they are also used to help explain the experimental results and suggest possible structural tweaks.

The grand prize would be a room-temperature superconductor, a material that could transform computing and electricity but that has eluded scientists for decades.

Periodic Labs, like Lila Sciences, has ambitions beyond designing and making new materials. It wants to “create an AI scientist”—specifically, one adept at the physical sciences. “LLMs have gotten quite good at distilling chemistry information, physics information,” says Cubuk, “and now we’re trying to make it more advanced by teaching it how to do science—for example, doing simulations, doing experiments, doing theoretical modeling.” The approach, like that of Lila Sciences, is based on the expectation that a better understanding of the science behind materials and their synthesis will lead to clues that could help researchers find a broad range of new ones.

One target for Periodic Labs is materials whose properties are defined by quantum effects, such as new types of magnets. The grand prize would be a room-temperature superconductor, a material that could transform computing and electricity but that has eluded scientists for decades. Superconductors are materials in which electricity flows without any resistance and, thus, without producing heat.
So far, the best of these materials become superconducting only at relatively low temperatures and require significant cooling. If they can be made to work at or close to room temperature, they could lead to far more efficient power grids, new types of quantum computers, and even more practical high-speed magnetic-levitation trains.

Lila staff scientist Natalie Page (right), Gómez-Bombarelli, and Gregoire inspect thin-film samples after they come out of the sputtering machine and before they undergo testing.
CODY O’LOUGHLIN

The failure to find a room-temperature superconductor is one of the great disappointments in materials science over the last few decades. I was there when President Reagan spoke about the technology in 1987, during the peak hype over newly made ceramics that became superconducting at the relatively balmy temperature of 93 Kelvin (that’s −292 °F), enthusing that they “bring us to the threshold of a new age.” There was a sense of optimism among the scientists and businesspeople in that packed ballroom at the Washington Hilton as Reagan anticipated “a host of benefits, not least among them a reduced dependence on foreign oil, a cleaner environment, and a stronger national economy.” In retrospect, it might have been one of the last times that we pinned our economic and technical aspirations on a breakthrough in materials.

The promised new age never came. Scientists still have not found a material that becomes superconducting at room temperatures, or anywhere close, under normal conditions. The best existing superconductors are brittle and tend to make lousy wires. One of the reasons that finding higher-temperature superconductors has been so difficult is that no theory explains the effect at relatively high temperatures—or can predict it simply from the placement of atoms in the structure.
It will ultimately fall to lab scientists to synthesize any interesting candidates, test them, and search the resulting data for clues to understanding the still puzzling phenomenon. Doing so, says Cubuk, is one of the top priorities of Periodic Labs.

AI in charge

It can take a researcher a year or more to make a crystal structure for the first time. Then there are typically years of further work to test its properties and figure out how to make the larger quantities needed for a commercial product.

Startups like Lila Sciences and Periodic Labs are pinning their hopes largely on the prospect that AI-directed experiments can slash those times. One reason for the optimism is that many labs have already incorporated a lot of automation, for everything from preparing samples to shuttling test items around. Researchers routinely use robotic arms, software, automated versions of microscopes and other analytical instruments, and mechanized tools for manipulating lab equipment. The automation allows, among other things, for high-throughput synthesis, in which multiple samples with various combinations of ingredients are rapidly created and screened in large batches, greatly speeding up the experiments.

The idea is that using AI to plan and run such automated synthesis can make it far more systematic and efficient. AI agents, which can collect and analyze far more data than any human possibly could, can use real-time information to vary the ingredients and synthesis conditions until they get a sample with the optimal properties. Such AI-directed labs could do far more experiments than a person and could be far smarter than existing systems for high-throughput synthesis.

But so-called self-driving labs for materials are still a work in progress. Many types of materials require solid-state synthesis, a set of processes that are far more difficult to automate than the liquid-handling activities that are commonplace in making drugs.
You need to prepare and mix powders of multiple inorganic ingredients in the right combination for making, say, a catalyst and then decide how to process the sample to create the desired structure—for example, identifying the right temperature and pressure at which to carry out the synthesis. Even determining what you’ve made can be tricky.

In 2023, the A-Lab at Lawrence Berkeley National Laboratory claimed to be the first fully automated lab to use inorganic powders as starting ingredients. Subsequently, scientists reported that the autonomous lab had used robotics and AI to synthesize and test 41 novel materials, including some predicted in the DeepMind database. Some critics questioned the novelty of what was produced and complained that the automated analysis of the materials was not up to experimental standards, but the Berkeley researchers defended the effort as simply a demonstration of the autonomous system’s potential. “How it works today and how we envision it are still somewhat different. There’s just a lot of tool building that needs to be done,” says Gerbrand Ceder, the principal scientist behind the A-Lab.

AI agents are already getting good at doing many laboratory chores, from preparing recipes to interpreting some kinds of test data—finding, for example, patterns in a micrograph that might be hidden to the human eye. But Ceder is hoping the technology could soon “capture human decision-making,” analyzing ongoing experiments to make strategic choices on what to do next. For example, his group is working on an improved synthesis agent that would better incorporate what he calls scientists’ “diffused” knowledge—the kind gained from extensive training and experience. “I imagine a world where people build agents around their expertise, and then there’s sort of an uber-model that puts it together,” he says.
“The uber-model essentially needs to know what agents it can call on and what they know, or what their expertise is.”

“In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in. The AI revolution is about finally gathering all the scientific data we have.”

Gerbrand Ceder, principal scientist, A-Lab

One of the strengths of AI agents is their ability to devour vast amounts of scientific literature. “In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in,” says Ceder. It’s impossible for anyone to keep up. “The AI revolution is about finally gathering all the scientific data we have,” he says.

Last summer, Ceder became the chief science officer at an AI materials discovery startup called Radical AI and took a sabbatical from the University of California, Berkeley, to help set up its self-driving labs in New York City. A slide deck shows the portfolio of different AI agents and generative models meant to help realize Ceder’s vision. If you look closely, you can spot an LLM called the “orchestrator”—it’s what CEO Joseph Krause calls the “head honcho.”

New hope

So far, despite the hype around the use of AI to discover new materials and the growing momentum—and money—behind the field, there still has not been a convincing big win. There is no example like the 2016 victory of DeepMind’s AlphaGo over a Go world champion. Or like AlphaFold’s achievement in mastering one of biomedicine’s hardest and most time-consuming chores, predicting 3D structures of proteins.

The field of materials discovery is still waiting for its moment. It could come if AI agents can dramatically speed the design or synthesis of practical materials, similar to but better than what we have today. Or maybe the moment will be the discovery of a truly novel one, such as a room-temperature superconductor.
A small window provides a view of the inside workings of Lila’s sputtering instrument. The startup uses the machine to create a wide variety of experimental samples, including potential materials that could be useful for coatings and catalysts.
CODY O’LOUGHLIN

With or without such a breakthrough moment, startups face the challenge of trying to turn their scientific achievements into useful materials. The task is particularly difficult because any new materials would likely have to be commercialized in an industry dominated by large incumbents that are not particularly prone to risk-taking.

Susan Schofer, a tech investor and partner at the venture capital firm SOSV, is cautiously optimistic about the field. But Schofer, who spent several years in the mid-2000s as a catalyst researcher at one of the first startups using automation and high-throughput screening for materials discovery (it didn’t survive), wants to see some evidence that the technology can translate into commercial successes when she evaluates startups to invest in.

In particular, she wants to see evidence that the AI startups are already “finding something new, that’s different, and know how they are going to iterate from there.” And she wants to see a business model that captures the value of new materials. She says, “I think the ideal would be: I got a spec from the industry. I know what their problem is. We’ve defined it. Now we’re going to go build it. Now we have a new material that we can sell, that we have scaled up enough that we’ve proven it. And then we partner somehow to manufacture it, but we get revenue off selling the material.”

Schofer says that while she gets the vision of trying to redefine science, she’d advise startups to “show us how you’re going to get there.” She adds, “Let’s see the first steps.” Demonstrating those first steps could be essential in enticing large existing materials companies to embrace AI technologies more fully.
Corporate researchers in the industry have been burned before—by the promise over the decades that increasingly powerful computers would magically design new materials; by combinatorial chemistry, a fad that raced through materials R&D labs in the early 2000s with little tangible result; and by the promise that synthetic biology would make our next generation of chemicals and materials.

More recently, the materials community has been blanketed by a new hype cycle around AI. Some of that hype was fueled by the 2023 DeepMind announcement of the discovery of “millions of new materials,” a claim that, in retrospect, clearly overpromised. And it was further fueled when an MIT economics student posted a paper in late 2024 claiming that a large, unnamed corporate R&D lab had used AI to efficiently invent a slew of new materials. AI, it seemed, was already revolutionizing the industry. A few months later, the MIT economics department concluded that “the paper should be withdrawn from public discourse.” Two prominent MIT economists who are acknowledged in a footnote in the paper added that they had “no confidence in the provenance, reliability or validity of the data and the veracity of the research.”

Can AI move beyond the hype and false hopes and truly transform materials discovery? Maybe. There is ample evidence that it’s changing how materials scientists work, providing them—if nothing else—with useful lab tools. Researchers are increasingly using LLMs to query the scientific literature and spot patterns in experimental data.

But it’s still early days in turning those AI tools into actual materials discoveries. The use of AI to run autonomous labs, in particular, is just getting underway; making and testing stuff takes time and lots of money. The morning I visited Lila Sciences, its labs were largely empty, and it’s now preparing to move into a much larger space a few miles away. Periodic Labs is just beginning to set up its lab in San Francisco.
It’s starting with manual synthesis guided by AI predictions; its robotic high-throughput lab will come soon. Radical AI reports that its lab is almost fully autonomous but plans to soon move to a larger space.

Prominent AI researchers Liam Fedus (left) and Ekin Dogus Cubuk are the cofounders of Periodic Labs. The San Francisco–based startup aims to build an AI scientist that’s adept at the physical sciences.
JASON HENRY

When I talk to the scientific founders of these startups, I hear a renewed excitement about a field that long operated in the shadows of drug discovery and genomic medicine. For one thing, there is the money. “You see this enormous enthusiasm to put AI and materials together,” says Ceder. “I’ve never seen this much money flow into materials.”

Reviving the materials industry is a challenge that goes beyond scientific advances, however. It means selling companies on a whole new way of doing R&D. But the startups benefit from a huge dose of confidence borrowed from the rest of the AI industry. And maybe that, after years of playing it safe, is just what the materials business needs.

This story is part of an online package on resetting expectations around AI. For more see technologyreview.com/hypecorrection.


The microwave-size instrument at Lila Sciences in Cambridge, Massachusetts, doesn’t look all that different from others that I’ve seen in state-of-the-art materials labs. Inside its vacuum chamber, the machine zaps a palette of different elements to create vaporized particles, which then fly through the chamber and land to create a thin film, using a technique called sputtering. What sets this instrument apart is that artificial intelligence is running the experiment; an AI agent, trained on vast amounts of scientific literature and data, has determined the recipe and is varying the combination of elements. 

Later, a person will walk the samples, each containing multiple potential catalysts, over to a different part of the lab for testing. Another AI agent will scan and interpret the data, using it to suggest another round of experiments to try to optimize the materials’ performance.  


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


For now, a human scientist keeps a close eye on the experiments and will approve the next steps on the basis of the AI’s suggestions and the test results. But the startup is convinced this AI-controlled machine is a peek into the future of materials discovery—one in which autonomous labs could make it far cheaper and faster to come up with novel and useful compounds. 

Flush with hundreds of millions of dollars in new funding, Lila Sciences is one of AI’s latest unicorns. The company is on a larger mission to use AI-run autonomous labs for scientific discovery—the goal is to achieve what it calls scientific superintelligence. But I’m here this morning to learn specifically about the discovery of new materials. 

Lila Sciences’ John Gregoire (background) and Rafael Gómez-Bombarelli watch as an AI-guided sputtering instrument makes samples of thin-film alloys.
CODY O’LOUGHLIN

We desperately need better materials to solve our problems. We’ll need improved electrodes and other parts for more powerful batteries; compounds to more cheaply suck carbon dioxide out of the air; and better catalysts to make green hydrogen and other clean fuels and chemicals. And we will likely need novel materials like higher-temperature superconductors, improved magnets, and different types of semiconductors for a next generation of breakthroughs in everything from quantum computing to fusion power to AI hardware. 

But materials science has not had many commercial wins in the last few decades. In part because of its complexity and the lack of successes, the field has become something of an innovation backwater, overshadowed by the more glamorous—and lucrative—search for new drugs and insights into biology.

The idea of using AI for materials discovery is not exactly new, but it got a huge boost in 2020 when DeepMind showed that its AlphaFold2 model could accurately predict the three-dimensional structure of proteins. Then, in 2022, came the success and popularity of ChatGPT. The hope that similar AI models using deep learning could aid in doing science captivated tech insiders. Why not use our new generative AI capabilities to search the vast chemical landscape and help simulate atomic structures, pointing the way to new substances with amazing properties?

“Simulations can be super powerful for framing problems and understanding what is worth testing in the lab. But there’s zero problems we can ever solve in the real world with simulation alone.”

John Gregoire, Lila Sciences, chief autonomous science officer

Researchers touted an AI model that had reportedly discovered “millions of new materials.” The money began pouring in, funding a host of startups. But so far there has been no “eureka” moment, no ChatGPT-like breakthrough—no discovery of new miracle materials or even slightly better ones.

The startups that want to find useful new compounds face a common bottleneck: By far the most time-consuming and expensive step in materials discovery is not imagining new structures but making them in the real world. Before trying to synthesize a material, you don’t know if, in fact, it can be made and is stable, and many of its properties remain unknown until you test it in the lab.

“Simulations can be super powerful for kind of framing problems and understanding what is worth testing in the lab,” says John Gregoire, Lila Sciences’ chief autonomous science officer. “But there’s zero problems we can ever solve in the real world with simulation alone.” 

Startups like Lila Sciences have staked their strategies on using AI to transform experimentation and are building labs that use agents to plan, run, and interpret the results of experiments to synthesize new materials. Automation in laboratories already exists. But the idea is to have AI agents take it to the next level by directing autonomous labs, where their tasks could include designing experiments and controlling the robotics used to shuffle samples around. And, most important, companies want to use AI to vacuum up and analyze the vast amount of data produced by such experiments in the search for clues to better materials.

If they succeed, these companies could shorten the discovery process from decades to a few years or less, helping uncover new materials and optimize existing ones. But it’s a gamble. Even though AI is already taking over many laboratory chores and tasks, finding new—and useful—materials on its own is another matter entirely. 

Innovation backwater

I have been reporting about materials discovery for nearly 40 years, and to be honest, there have been only a few memorable commercial breakthroughs, such as lithium-ion batteries, over that time. There have been plenty of scientific advances to write about, from perovskite solar cells to graphene transistors to metal-organic frameworks (MOFs), materials based on an intriguing type of molecular architecture that recently won its inventors a Nobel Prize. But few of those advances—including MOFs—have made it far out of the lab. Others, like quantum dots, have found some commercial uses, but in general, the kinds of life-changing inventions created in earlier decades have been lacking.

Blame the amount of time (typically 20 years or more) and the hundreds of millions of dollars it takes to make, test, optimize, and manufacture a new material—and the industry’s lack of interest in spending that kind of time and money in low-margin commodity markets. Or maybe we’ve just run out of ideas for making stuff.

The need to both speed up that process and find new ideas is the reason researchers have turned to AI. For decades, scientists have used computers to design potential materials, calculating where to place atoms to form structures that are stable and have predictable characteristics. It’s worked—but only kind of. Advances in AI have made that computational modeling far faster and have promised the ability to quickly explore a vast number of possible structures. Google DeepMind, Meta, and Microsoft have all launched efforts to bring AI tools to the problem of designing new materials. 

But the limitations that have always plagued computational modeling of new materials remain. With many types of materials, such as crystals, useful characteristics often can’t be predicted solely by calculating atomic structures.

To uncover and optimize those properties, you need to make something real. Or as Rafael Gómez-Bombarelli, one of Lila’s cofounders and an MIT professor of materials science, puts it: “Structure helps us think about the problem, but it’s neither necessary nor sufficient for real materials problems.”

Perhaps no advance exemplified the gap between the virtual and physical worlds more than DeepMind’s announcement in late 2023 that it had used deep learning to discover “millions of new materials,” including 380,000 crystals that it declared “the most stable, making them promising candidates for experimental synthesis.” In technical terms, the arrangement of atoms represented a minimum energy state where they were content to stay put. This was “an order-of-magnitude expansion in stable materials known to humanity,” the DeepMind researchers proclaimed.
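That idea of a minimum energy state can be illustrated with a toy calculation. The sketch below is illustrative only, and is not DeepMind’s method: it scans a textbook Lennard-Jones pair potential (the parameters are stand-ins) for the atomic separation at which the energy bottoms out, a one-dimensional analogue of a “stable” arrangement where the atoms are content to stay put.

```python
import math

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential, a classic toy model of interatomic energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def find_minimum(lo=0.9, hi=2.0, steps=200_000):
    """Brute-force scan for the separation that minimizes the energy."""
    best_r, best_e = lo, lj_energy(lo)
    for i in range(1, steps + 1):
        r = lo + (hi - lo) * i / steps
        e = lj_energy(r)
        if e < best_e:
            best_r, best_e = r, e
    return best_r, best_e

# The analytic minimum sits at r = 2**(1/6) * sigma, with energy -epsilon.
r_min, e_min = find_minimum()
```

A real stability calculation searches a space of full crystal structures with quantum-mechanical energy models, but the principle is the same: the predicted structure is the one at the bottom of an energy landscape, computed, as the article notes, at absolute zero.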

To the AI community, it appeared to be the breakthrough everyone had been waiting for. The DeepMind research not only offered a gold mine of possible new materials but also introduced powerful new computational methods for predicting large numbers of structures.

But some materials scientists had a far different reaction. After closer scrutiny, researchers at the University of California, Santa Barbara, said they’d found “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.” In fact, the scientists reported, they didn’t find any truly novel compounds among the ones they looked at; some were merely “trivial” variations of known ones. The scientists appeared particularly peeved that the potential compounds were labeled materials. They wrote: “We would respectfully suggest that the work does not report any new materials but reports a list of proposed compounds. In our view, a compound can be called a material when it exhibits some functionality and, therefore, has potential utility.”

Some of the imagined crystals simply defied the conditions of the real world. To do computations on so many possible structures, DeepMind researchers simulated them at absolute zero, where atoms are well ordered; they vibrate a bit but don’t move around. At higher temperatures—the kind that would exist in the lab or anywhere in the world—the atoms fly about in complex ways, often creating more disorderly crystal structures. A number of the so-called novel materials predicted by DeepMind appeared to be well-ordered versions of disordered ones that were already known. 

More generally, the DeepMind paper was simply another reminder of how challenging it is to capture physical realities in virtual simulations—at least for now. Because of the limitations of computational power, researchers typically perform calculations on relatively few atoms. Yet many desirable properties are determined by the microstructure of the materials—at a scale much larger than the atomic world. And some effects, like high-temperature superconductivity or even the catalysis that is key to many common industrial processes, are far too complex or poorly understood to be explained by atomic simulations alone.

A common language

Even so, there are signs that the divide between simulations and experimental work is beginning to narrow. DeepMind, for one, says that since the release of the 2023 paper it has been working with scientists in labs around the world to synthesize AI-identified compounds and has achieved some success. Meanwhile, a number of the startups entering the space are looking to combine computational and experimental expertise in one organization. 

One such startup is Periodic Labs, cofounded by Ekin Dogus Cubuk, a physicist who led the scientific team that generated the 2023 DeepMind headlines, and by Liam Fedus, a co-creator of ChatGPT at OpenAI. Despite its founders’ background in computational modeling and AI software, the company is building much of its materials discovery strategy around synthesis done in automated labs. 

The vision behind the startup is to link these different fields of expertise by using large language models that are trained on scientific literature and able to learn from ongoing experiments. An LLM might suggest the recipe and conditions to make a compound; it can also interpret test data and feed additional suggestions to the startup’s chemists and physicists. In this strategy, simulations might suggest possible material candidates, but they are also used to help explain the experimental results and suggest possible structural tweaks.

Periodic Labs, like Lila Sciences, has ambitions beyond designing and making new materials. It wants to “create an AI scientist”—specifically, one adept at the physical sciences. “LLMs have gotten quite good at distilling chemistry information, physics information,” says Cubuk, “and now we’re trying to make it more advanced by teaching it how to do science—for example, doing simulations, doing experiments, doing theoretical modeling.”

The approach, like that of Lila Sciences, is based on the expectation that a better understanding of the science behind materials and their synthesis will lead to clues that could help researchers find a broad range of new ones. One target for Periodic Labs is materials whose properties are defined by quantum effects, such as new types of magnets. The grand prize would be a room-temperature superconductor, a material that could transform computing and electricity but that has eluded scientists for decades.

Superconductors are materials in which electricity flows without any resistance and, thus, without producing heat. So far, the best of these materials become superconducting only at relatively low temperatures and require significant cooling. If they can be made to work at or close to room temperature, they could lead to far more efficient power grids, new types of quantum computers, and even more practical high-speed magnetic-levitation trains. 

Lila staff scientist Natalie Page (right), Gómez-Bombarelli, and Gregoire inspect thin-film samples after they come out of the sputtering machine and before they undergo testing.
CODY O’LOUGHLIN

The failure to find a room-temperature superconductor is one of the great disappointments in materials science over the last few decades. I was there when President Reagan spoke about the technology in 1987, during the peak hype over newly made ceramics that became superconducting at the relatively balmy temperature of 93 Kelvin (that’s −292 °F), enthusing that they “bring us to the threshold of a new age.” There was a sense of optimism among the scientists and businesspeople in that packed ballroom at the Washington Hilton as Reagan anticipated “a host of benefits, not least among them a reduced dependence on foreign oil, a cleaner environment, and a stronger national economy.” In retrospect, it might have been one of the last times that we pinned our economic and technical aspirations on a breakthrough in materials.

The promised new age never came. Scientists still have not found a material that becomes superconducting at room temperatures, or anywhere close, under normal conditions. The best existing superconductors are brittle and tend to make lousy wires.

One of the reasons that finding higher-temperature superconductors has been so difficult is that no theory explains the effect at relatively high temperatures—or can predict it simply from the placement of atoms in the structure. It will ultimately fall to lab scientists to synthesize any interesting candidates, test them, and search the resulting data for clues to understanding the still puzzling phenomenon. Doing so, says Cubuk, is one of the top priorities of Periodic Labs.

AI in charge

It can take a researcher a year or more to make a crystal structure for the first time. Then there are typically years of further work to test its properties and figure out how to make the larger quantities needed for a commercial product. 

Startups like Lila Sciences and Periodic Labs are pinning their hopes largely on the prospect that AI-directed experiments can slash those times. One reason for the optimism is that many labs have already incorporated a lot of automation, for everything from preparing samples to shuttling test items around. Researchers routinely use robotic arms, software, automated versions of microscopes and other analytical instruments, and mechanized tools for manipulating lab equipment.

The automation allows, among other things, for high-throughput synthesis, in which multiple samples with various combinations of ingredients are rapidly created and screened in large batches, greatly speeding up the experiments.

The idea is that using AI to plan and run such automated synthesis can make it far more systematic and efficient. AI agents, which can collect and analyze far more data than any human possibly could, can use real-time information to vary the ingredients and synthesis conditions until they get a sample with the optimal properties. Such AI-directed labs could do far more experiments than a person and could be far smarter than existing systems for high-throughput synthesis. 
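The closed loop described above can be sketched in a few lines. The code below is a minimal, hypothetical illustration, not any startup’s actual system: `run_experiment` stands in for a real synthesis-and-characterization cycle, and the “agent” is just a greedy search that measures a batch of candidate compositions around the best one found so far, then narrows its search each round.

```python
import random

def run_experiment(composition):
    """Stand-in for a lab measurement (a made-up response surface).
    In a real autonomous lab this would be a synthesis and test run."""
    a, b = composition
    return -(a - 0.3) ** 2 - (b - 0.7) ** 2  # peak "performance" at (0.3, 0.7)

def closed_loop(n_rounds=20, batch_size=8, seed=0):
    """Each round: sample a batch of compositions near the current best,
    'measure' them, keep the winner, and shrink the search neighborhood."""
    rng = random.Random(seed)
    best = (rng.random(), rng.random())
    best_score = run_experiment(best)
    spread = 0.5
    for _ in range(n_rounds):
        batch = [
            (min(max(best[0] + rng.uniform(-spread, spread), 0.0), 1.0),
             min(max(best[1] + rng.uniform(-spread, spread), 0.0), 1.0))
            for _ in range(batch_size)
        ]
        for candidate in batch:
            score = run_experiment(candidate)
            if score > best_score:
                best, best_score = candidate, score
        spread *= 0.8  # narrow the search as the loop converges
    return best, best_score

best, score = closed_loop()
```

Production systems replace the greedy sampler with far more sophisticated planners (Bayesian optimization, LLM-based agents reasoning over literature and data), but the loop structure—propose, make, measure, update—is the core of the autonomous-lab idea.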

But so-called self-driving labs for materials are still a work in progress.

Many types of materials require solid-state synthesis, a set of processes that are far more difficult to automate than the liquid-handling activities that are commonplace in making drugs. You need to prepare and mix powders of multiple inorganic ingredients in the right combination for making, say, a catalyst and then decide how to process the sample to create the desired structure—for example, identifying the right temperature and pressure at which to carry out the synthesis. Even determining what you’ve made can be tricky.

In 2023, the A-Lab at Lawrence Berkeley National Laboratory claimed to be the first fully automated lab to use inorganic powders as starting ingredients. Subsequently, scientists reported that the autonomous lab had used robotics and AI to synthesize and test 41 novel materials, including some predicted in the DeepMind database. Some critics questioned the novelty of what was produced and complained that the automated analysis of the materials was not up to experimental standards, but the Berkeley researchers defended the effort as simply a demonstration of the autonomous system’s potential.

“How it works today and how we envision it are still somewhat different. There’s just a lot of tool building that needs to be done,” says Gerbrand Ceder, the principal scientist behind the A-Lab. 

AI agents are already getting good at doing many laboratory chores, from preparing recipes to interpreting some kinds of test data—finding, for example, patterns in a micrograph that might be hidden to the human eye. But Ceder is hoping the technology could soon “capture human decision-making,” analyzing ongoing experiments to make strategic choices on what to do next. For example, his group is working on an improved synthesis agent that would better incorporate what he calls scientists’ “diffused” knowledge—the kind gained from extensive training and experience. “I imagine a world where people build agents around their expertise, and then there’s sort of an uber-model that puts it together,” he says. “The uber-model essentially needs to know what agents it can call on and what they know, or what their expertise is.”

One of the strengths of AI agents is their ability to devour vast amounts of scientific literature. “In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in,” says Ceder. It’s impossible for anyone to keep up. “The AI revolution is about finally gathering all the scientific data we have,” he says.

Last summer, Ceder became the chief science officer at an AI materials discovery startup called Radical AI and took a sabbatical from the University of California, Berkeley, to help set up its self-driving labs in New York City. A slide deck shows the portfolio of different AI agents and generative models meant to help realize Ceder’s vision. If you look closely, you can spot an LLM called the “orchestrator”—it’s what CEO Joseph Krause calls the “head honcho.” 

New hope

So far, despite the hype around the use of AI to discover new materials and the growing momentum—and money—behind the field, there still has not been a convincing big win. There is no example like the 2016 victory of DeepMind’s AlphaGo over a Go world champion. Or like AlphaFold’s achievement in mastering one of biomedicine’s hardest and most time-consuming chores, predicting 3D structures of proteins. 

The field of materials discovery is still waiting for its moment. It could come if AI agents can dramatically speed the design or synthesis of practical materials, similar to but better than what we have today. Or maybe the moment will be the discovery of a truly novel one, such as a room-temperature superconductor.

A small window provides a view of the inner workings of Lila’s sputtering instrument. The startup uses the machine to create a wide variety of experimental samples, including potential materials that could be useful for coatings and catalysts.
CODY O’LOUGHLIN

With or without such a breakthrough moment, startups face the challenge of trying to turn their scientific achievements into useful materials. The task is particularly difficult because any new materials would likely have to be commercialized in an industry dominated by large incumbents that are not particularly prone to risk-taking.

Susan Schofer, a tech investor and partner at the venture capital firm SOSV, is cautiously optimistic about the field. But Schofer, who spent several years in the mid-2000s as a catalyst researcher at one of the first startups using automation and high-throughput screening for materials discovery (it didn’t survive), wants to see some evidence that the technology can translate into commercial successes when she evaluates startups to invest in.  

In particular, she wants to see evidence that the AI startups are already “finding something new, that’s different, and know how they are going to iterate from there.” And she wants to see a business model that captures the value of new materials. She says, “I think the ideal would be: I got a spec from the industry. I know what their problem is. We’ve defined it. Now we’re going to go build it. Now we have a new material that we can sell, that we have scaled up enough that we’ve proven it. And then we partner somehow to manufacture it, but we get revenue off selling the material.”

Schofer says that while she gets the vision of trying to redefine science, she’d advise startups to “show us how you’re going to get there.” She adds, “Let’s see the first steps.”

Demonstrating those first steps could be essential in enticing large existing materials companies to embrace AI technologies more fully. Corporate researchers in the industry have been burned before—by the promise over the decades that increasingly powerful computers will magically design new materials; by combinatorial chemistry, a fad that raced through materials R&D labs in the early 2000s with little tangible result; and by the promise that synthetic biology would make our next generation of chemicals and materials.

More recently, the materials community has been blanketed by a new hype cycle around AI. Some of that hype was fueled by the 2023 DeepMind announcement of the discovery of “millions of new materials,” a claim that, in retrospect, clearly overpromised. And it was further fueled when an MIT economics student posted a paper in late 2024 claiming that a large, unnamed corporate R&D lab had used AI to efficiently invent a slew of new materials. AI, it seemed, was already revolutionizing the industry.

A few months later, the MIT economics department concluded that “the paper should be withdrawn from public discourse.” Two prominent MIT economists who are acknowledged in a footnote in the paper added that they had “no confidence in the provenance, reliability or validity of the data and the veracity of the research.”

Can AI move beyond the hype and false hopes and truly transform materials discovery? Maybe. There is ample evidence that it’s changing how materials scientists work, providing them—if nothing else—with useful lab tools. Researchers are increasingly using LLMs to query the scientific literature and spot patterns in experimental data. 

But it’s still early days in turning those AI tools into actual materials discoveries. The use of AI to run autonomous labs, in particular, is just getting underway; making and testing stuff takes time and lots of money. The morning I visited Lila Sciences, its labs were largely empty, and it’s now preparing to move into a much larger space a few miles away. Periodic Labs is just beginning to set up its lab in San Francisco. It’s starting with manual synthesis guided by AI predictions; its robotic high-throughput lab will come soon. Radical AI reports that its lab is almost fully autonomous but plans to move to a larger space soon.

Prominent AI researchers Liam Fedus (left) and Ekin Dogus Cubuk are the cofounders of Periodic Labs. The San Francisco–based startup aims to build an AI scientist that’s adept at the physical sciences.
JASON HENRY

When I talk to the scientific founders of these startups, I hear a renewed excitement about a field that long operated in the shadows of drug discovery and genomic medicine. For one thing, there is the money. “You see this enormous enthusiasm to put AI and materials together,” says Ceder. “I’ve never seen this much money flow into materials.”

Reviving the materials industry is a challenge that goes beyond scientific advances, however. It means selling companies on a whole new way of doing R&D.

But the startups benefit from a huge dose of confidence borrowed from the rest of the AI industry. And maybe that, after years of playing it safe, is just what the materials business needs.


This story is part of an online package on resetting expectations around AI. For more see technologyreview.com/hypecorrection.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Nvidia moves deeper into AI infrastructure with SchedMD acquisition

“Slurm excels at orchestrating multi-node distributed training, where jobs span hundreds or thousands of GPUs,” said Lian Jye Su, chief analyst at Omdia. “The software can optimize data movement within servers by deciding where jobs should be placed based on resource availability. With strong visibility into the network topology, Slurm

Read More »

ExxonMobil bumps up 2030 target for Permian production

ExxonMobil Corp., Houston, is looking to grow production in the Permian basin to about 2.5 MMboe/d by 2030, an increase of 200,000 boe/d from executives’ previous forecasts and a jump of more than 45% from this year’s output. Helping drive that higher target is an expected 2030 cost profile that

Read More »

Chevron Reduces Price for Venezuelan Oil

Chevron Corp, lowered the price of Venezuelan crude offered to US refiners after a tanker was seized by American forces in the Caribbean and as global prices drifted lower.  The oil supermajor sold a batch of Venezuelan oil on Dec. 11 — a day after US forces seized a vessel off the country’s coast — at weaker prices compared than a batch offered on Monday, according to people with knowledge of the situation.  The administration of President Donald Trump is stepping up pressure on Venezuela by targeting oil revenues critical to the survival of Nicolas Maduro regime. The seized vessel, the Skipper, is currently near the Dominican Republic and appeared to be en route to the US, according to vessel movements tracked by Bloomberg. While it’s unclear when the ship will be able to discharge, it’s expected arrival is pressuring already weak prices in the Gulf Coast market, the people said, asking not to be named because the information is private.  Chevron’s operations in Venezuela continue in full compliance with laws and regulations applicable to its business, as well as the sanctions frameworks provided for by the US government, the Houston-based company said in a statement.  The company sold about 10 oil cargoes of different grades for loading next month, in a sign that it’s pressing ahead despite heightened tensions between the two countries. The cargoes were sold in two separate tenders and price levels were not immediately available.  WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Phillips 66 Budgets $2.4B for 2026

Phillips 66 said Monday it expects a $2.4 billion budget for next year, consisting of $1.1 billion in sustaining capital and $1.3 billion in growth capital. “The 2026 capital budget reflects our ongoing commitment to capital discipline and maximizing shareholder returns”, chair and chief executive Mark Lashier said in an online statement. “We are investing growth capital in our NGL value chain and high-return refining projects, while also investing sustaining capital to support safe and reliable operations”. Houston, Texas-based Phillips 66 expects to shell out $1.1 billion into its refining business, comprising $590 million in sustaining capital and $520 million into growth projects. “With the consolidation of WRB Refining, we incorporated approximately $200 million of sustaining capital and $100 million of growth capital into the budget”, Lashier said. Phillips 66 recently acquired an additional 50 percent stake in WRB Refining LP from Cenovus Energy Inc for $1.4 billion, fully taking over the Wood River and Borger refineries, as confirmed by Phillips 66 in its third quarter report October 29. Wood River in Roxana, Illinois, has a gasoline and distillates production capacity of 176,000 bpd and 140,000 bpd respectively. Borger in Borger, Texas, produces up to 100,000 bpd of gasoline and 70,000 bpd of distillates, according to Phillips 66. The refining allotment for 2026 also includes a multiyear investment at the Humber refinery to enable the production of higher-quality gasoline and expand the facility’s access to “higher-value global markets”, the company said. Phillips 66 expects to start up the project in the second quarter of 2027. Located in North Lincolnshire on the English east coast, the Humber site produces up to 95,000 barrels per day (bpd) of gasoline and 115,000 bpd of distillates, according to Phillips 66. The refining budget also includes “over 100 low-capital, high-return projects to improve market capture

Read More »

USA Emerges as World’s Hydrocarbon Superpower

The U.S. has emerged as the world’s hydrocarbon superpower, exemplified by its meteoric rise in the liquefied natural gas (LNG) market.   That’s what Wood Mackenzie (WoodMac) said in a statement sent to Rigzone recently, which highlighted several charts that “spotlight the most significant trends reshaping the [energy and resources] sector globally” and were included in the company’s latest Horizons report. “You don’t need to look too far back to find a U.S. which was building LNG import infrastructure and now in under 10 years it has become the world’s largest LNG exporter,” WoodMac said in the statement. The company noted in the statement that, by 2030, the U.S. is projected to account for 30 percent of global LNG output. A chart included in the statement outlined that the U.S. would continue as the world’s largest LNG exporter in 2030, followed by Qatar and Australia. WoodMac also highlighted in its statement that the U.S. “leads global oil production (including oil, condensate, and natural gas liquids), delivering one-fifth of the world’s volumes”. “In comparison, its closest competitors, Saudi Arabia and Russia, produce only 65 percent and 50 percent of U.S. volumes, respectively,” it added.   Malcolm Forbes-Cable, Vice President, Upstream and Carbon Management Consulting at Wood Mackenzie, said in the statement, “the resurrection of U.S. LNG is a crucial reminder of what a resource-rich, free-market country like the U.S. can do”. “This hydrocarbon hegemony is now being leveraged as a diplomatic tool,” he added. In its latest short term energy outlook (STEO), which was released on December 9, the U.S. Energy Information Administration (EIA) projected that gross U.S. LNG exports will average 14.9 billion cubic feet per day in 2025 and 16.3 billion cubic feet per day in 2026. Gross U.S. LNG exports averaged 11.9 billion cubic feet per day in 2024, this STEO highlighted. A quarterly breakdown included in the EIA’s latest STEO forecasted that gross U.S. 
LNG exports will come in at

Read More »

North America Rig Count Stays Flat

North America’s rig count stayed flat week on week, according to Baker Hughes’ latest North America rotary rig count, which was published on December 12. The total U.S. rig count dropped by one week on week and the total Canada rig count rose by one during the same period, keeping the total North America rig count at 740, the count outlined. The total North America rig count comprised 548 rigs from the U.S. and 192 rigs from Canada, the count showed. Of the total U.S. rig count of 548, 528 rigs are categorized as land rigs, 17 are categorized as offshore rigs, and three are categorized as inland water rigs. The total U.S. rig count is made up of 414 oil rigs, 127 gas rigs, and seven miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 478 horizontal rigs, 54 directional rigs, and 16 vertical rigs. Week on week, the U.S. land rig count rose by one, its offshore rig count dropped by two, and its inland water rig count remained unchanged, Baker Hughes highlighted. The U.S. oil rig count rose by one week on week, its gas rig count dropped by two by week on week, and its miscellaneous rig count remained unchanged week on week, the count showed. The U.S. horizontal rig count rose by two, its directional rig count dropped by four, and its vertical rig count increased by one, week on week, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, Texas added two rigs, and Ohio and Louisiana each added one rig. This subcategory revealed that New Mexico dropped three rigs and Colorado dropped one rig week on week. A major basin variances subcategory included in Baker Hughes’ rig count showed

Read More »

Pembina Completes Remarketing of Cedar LNG Share

Pembina Pipeline Corp said Monday it had signed a 12-year agreement allowing Ovintiv Inc to use 0.5 million tonnes per annum (MMtpa) of liquefaction capacity at the under-construction Cedar LNG on Canada’s West Coast. “Pembina has now remarketed the full 1.5 mtpa [million tonnes per annum] of its Cedar LNG capacity to third parties and further demonstrated its commitment to delivering growth and executing its strategy within the company’s long-standing financial guardrails and prudent risk profile”, Calgary-based Pembina said in an online statement. It owns 49.9 percent in the project. The Haisla Nation, who host Cedar LNG on tribal territory, holds 50.1 percent. According to the developers, Cedar LNG is the world’s first liquefied natural gas facility primarily owned by Indigenous people. Expected to start operation 2028, the project has a declared capacity of 3.3 MMtpa. “The agreement enables the export of 0.5 mtpa of LNG, under which Pembina will provide transportation and liquefaction capacity to Ovintiv over a 12-year term, commencing with commercial operations at Cedar LNG, anticipated in late 2028”, Denver, Colorado-based Ovintiv said separately. “It provides Ovintiv, one of Canada’s largest natural gas producers, with access to additional export markets, complementary to the company’s existing portfolio of natural gas transportation arrangements. Export from the west coast of Canada offers the shortest shipping distance to Asian LNG markets from North America”. Meghan Eilers, midstream and marketing executive vice president at Ovintiv, said, “Today’s announcement marks a significant advancement in our strategy to expand market access and maximize the profitability of our Montney gas resource through participation in global LNG markets”. 
Pembina senior vice president and corporate development officer Stu Taylor said, “Ovintiv is a significant customer to Pembina across our natural gas processing and transportation, and NGL transportation, fractionation and marketing businesses”. Pembina added in its statement, in which it also announced a capital investment

Read More »

TotalEnergies, Galp Agree Exchange Involving Mopane, Venus Discoveries

TotalEnergies and Galp Energia SGPS SA have entered into a deal under which TotalEnergies will acquire a 40 percent operating stake in the block containing the Mopane discoveries and Galp will obtain a 10 percent interest in the Venus discovery license, both on Namibia’s side of the Orange Basin. Currently Portugal’s Galp operates Petroleum Exploration License (PEL) 83, which includes Mopane, with an 80 percent stake. The National Petroleum Corporation of Namibia (Namcor) owns 10 percent. Local player Custos Energy (Pty) Ltd holds 10 percent. In PEL56 France’s TotalEnergies has a 45.25 percent operating stake. State-owned QatarEnergy owns 35.25 percent. Namcor owns 10 percent. London-based Impact Oil & Gas Ltd owns 9.5 percent. After the completion of the transaction, subject to approvals from Namibian authorities and their partners in the PELs, Galp would retain 40 percent in PEL83 and TotalEnergies would keep operatorship and a 35.25 percent stake in PEL56. Galp would also receive a 9.4 percent stake in PEL 91, also in the Namibian Orange Basin. Operator TotalEnergies would keep a 33.09 percent stake. QatarEnergy owns 33.03 percent, Namcor 15 percent and Impact 9.5 percent. “The transaction positions TotalEnergies as the operator of the two largest oil discoveries in Namibia and opens the way for the development of a major producing hub, generating long-term value for the country and partners”, TotalEnergies said in an online statement. Galp has announced five discoveries in PEL83’s Mopane area. Mopane-1X, Mopane-2X, appraisal well Mopane-1A and appraisal well Mopane 2A were proven 2024. Mopane-3X was proven 2025. In PEL56, TotalEnergies announced a “significant discovery” through the Venus 1-X well on February 24, 2022. “TotalEnergies and Galp agreed to launch an exploration and appraisal campaign including three wells over the next two years, with a first well planned in 2026, to further derisk resources and progress diligently

Read More »

Executive Roundtable: Converging Disciplines in the AI Buildout

At Data Center Frontier, we rely on industry leaders to help us understand the most urgent challenges facing digital infrastructure. And in the fourth quarter of 2025, the data center industry is adjusting to a new kind of complexity. AI-scale infrastructure is redefining what “mission critical” means, from megawatt density and modular delivery to the chemistry of cooling fluids and the automation of energy systems. Arguably, every project has now become an ecosystem challenge, demanding that electrical, mechanical, construction, and environmental disciplines act as one. For this quarter’s Executive Roundtable, DCF convened subject matter experts from Ecolab, EdgeConneX, Rehlko and Schneider Electric – leaders spanning the full chain of facilities design, deployment, and operation. Their insights illuminate how liquid cooling, energy management, and sustainable process design in data centers are now converging to set the pace for the AI era. Our distinguished executive panelists for this quarter include:

Rob Lowe, Director RD&E – Global High Tech, Ecolab
Phillip Marangella, Chief Marketing and Product Officer, EdgeConneX
Ben Rapp, Manager, Strategic Project Development, Rehlko
Joe Reele, Vice President, Datacenter Solution Architects, Schneider Electric

Today: Engineering the New Normal – Liquid Cooling at Scale

Today’s kickoff article grapples with how, as liquid cooling becomes the default hyperscale design, the challenge is no longer whether but how to scale builds safely, repeatably, and globally. Cold plates, immersion, dielectric fluids, and liquid-to-chip loops are converging into factory-integrated building blocks, yet variability in chemistry, serviceability, materials, commissioning practices, and long-term maintenance threatens to fragment adoption just as demand accelerates. Success now hinges on shared standards and tighter collaboration across OEMs, builders, and process specialists worldwide.
So how do developers coordinate across the ecosystem to make liquid cooling a safe, maintainable global default?

What’s Ahead in the Roundtable

Over the coming days, our panel

Read More »

DCF Trends Summit 2025: AI for Good – How Operators, Vendors and Cooling Specialists See the Next Phase of AI Data Centers

At the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., the conversation around AI and infrastructure moved well past the hype. In a panel sponsored by Schneider Electric—“AI for Good: Building for AI Workloads and Using AI for Smarter Data Centers”—three industry leaders explored what it really means to design, cool and operate the new class of AI “factories,” while also turning AI inward to run those facilities more intelligently. Moderated by Data Center Frontier Editor in Chief Matt Vincent, the session brought together:

Steve Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric
Sudhir Kalra, Chief Data Center Operations Officer, Compass Datacenters
Andrew Whitmore, VP of Sales, Motivair

Together, they traced both sides of the “AI for Good” equation: building for AI workloads at densities that would have sounded impossible just a few years ago, and using AI itself to reduce risk, improve efficiency and minimize environmental impact.

From Bubble Talk to “AI Factories”

Carlini opened by acknowledging the volatility surrounding AI investments, citing recent headlines and even Sam Altman’s public use of the word “bubble” to describe the current phase of exuberance. “It’s moving at an incredible pace,” Carlini noted, pointing out that roughly half of all VC money this year has flowed into AI, with more already spent than in all of the previous year. Not every investor will win, he said, and some companies pouring in hundreds of billions may not recoup their capital. But for infrastructure, the signal is clear: the trajectory is up and to the right. GPU generations are cycling faster than ever. Densities are climbing from high double-digit kilowatts per rack toward hundreds of kilowatts. The hyperscale “AI factories,” as NVIDIA calls them, are scaling to campus capacities measured in gigawatts. Carlini reminded the audience that in 2024,

Read More »

FinOps Foundation sharpens FOCUS to reduce cloud cost chaos

“The big change that’s really started to happen in late 2024 early 2025 is that the FinOps practice started to expand past the cloud,” Storment said. “A lot of organizations got really good at using FinOps to manage the value of cloud, and then their organizations went, ‘oh, hey, we’re living in this happily hybrid state now where we’ve got cloud, SaaS, data center. Can you also apply the FinOps practice to our SaaS? Or can you apply it to our Snowflake? Can you apply it to our data center?’” The FinOps Foundation’s community has grown to approximately 100,000 practitioners. The organization now includes major cloud vendors, hardware providers like Nvidia and AMD, data center operators and data cloud platforms like Snowflake and Databricks. Some 96 of the Fortune 100 now participate in FinOps Foundation programs. The practice itself has shifted in two directions. It has moved left into earlier architectural and design processes, becoming more proactive rather than reactive. It has also moved up organizationally, from director-level cloud management roles to SVP and COO positions managing converged technology portfolios spanning multiple infrastructure types. This expansion has driven the evolution of FOCUS beyond its original cloud billing focus. Enterprises are implementing FOCUS as an internal standard for chargeback reporting even when their providers don’t generate native FOCUS data. Some newer cloud providers, particularly those focused on AI infrastructure, are using the FOCUS specification to define their billing data structures from the ground up rather than retrofitting existing systems. The FOCUS 1.3 release reflects this maturation, addressing technical gaps that have emerged as organizations apply cost management practices across increasingly complex hybrid environments. 
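As a rough illustration of what a common billing schema buys, the sketch below maps two invented provider billing formats into a handful of FOCUS-style columns so a single chargeback query can span cloud and SaaS sources. The column names are simplified from the FOCUS specification, and both provider formats are hypothetical; this is not an official FinOps Foundation library.

```python
# Illustrative sketch: normalize heterogeneous billing rows into a
# FOCUS-like common schema for cross-provider chargeback reporting.
# Column names are simplified from the FOCUS spec; provider formats
# ("cloud_a", "saas_b") are invented for this example.

def to_focus_row(provider: str, raw: dict) -> dict:
    """Map a provider-specific billing record to common FOCUS-style fields."""
    if provider == "cloud_a":  # hypothetical IaaS billing export
        return {
            "ProviderName": "cloud_a",
            "ServiceName": raw["service"],
            "BilledCost": raw["cost_usd"],
            "ChargePeriodStart": raw["start"],
        }
    if provider == "saas_b":  # hypothetical SaaS invoice format
        return {
            "ProviderName": "saas_b",
            "ServiceName": raw["product"],
            "BilledCost": raw["amount"],
            "ChargePeriodStart": raw["period_begin"],
        }
    raise ValueError(f"unknown provider: {provider}")

rows = [
    to_focus_row("cloud_a", {"service": "compute", "cost_usd": 120.0, "start": "2025-01-01"}),
    to_focus_row("saas_b", {"product": "analytics", "amount": 45.5, "period_begin": "2025-01-01"}),
]

# One chargeback query now works across every normalized source.
total = sum(r["BilledCost"] for r in rows)
print(f"Total billed cost: ${total:.2f}")  # Total billed cost: $165.50
```

This is the pattern the article describes newer providers inverting: instead of retrofitting a mapping layer like this one, they emit FOCUS-shaped rows natively.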
FOCUS 1.3 exposes cost allocation logic for shared infrastructure

The most significant technical enhancement in FOCUS 1.3 addresses a gap in how shared infrastructure costs are allocated and

Read More »

Aetherflux joins the race to launch orbital data centers by 2027

Enterprises will connect to and manage orbital workloads “the same way they manage cloud workloads today,” using optical links, the spokesperson added. The company’s approach is to “continuously launch new hardware and quickly integrate the latest architectures,” with older systems running lower-priority tasks to serve out the full useful lifetime of their high-end GPUs. The company declined to disclose pricing. Aetherflux plans to launch about 30 satellites at a time on SpaceX Falcon 9 rockets. Before the data center launch, the company will launch a power-beaming demonstration satellite in 2026 to test transmission of one kilowatt of energy from orbit to ground stations, using infrared lasers. Competition in the sector has intensified in recent months. In November, Starcloud launched its Starcloud-1 satellite carrying an Nvidia H100 GPU, which is 100 times more powerful than any previous GPU flown in space, according to the company, and demonstrated running Google’s Gemma AI model in orbit. In the same month, Google announced Project Suncatcher, with a 2027 demonstration mission planned.

Analysts see limited near-term applications

Despite the competitive activity, orbital data centers won’t replace terrestrial cloud regions for general hosting through 2030, said Ashish Banerjee, senior principal analyst at Gartner. Instead, they suit specific workloads, including meeting data sovereignty requirements for jurisdictionally complex scenarios, offering disaster recovery immune to terrestrial risks, and providing asynchronous high-performance computing, he said. “Orbital centers are ideal for high-compute, low-I/O batch jobs,” Banerjee said. “Think molecular folding simulations for pharma, massive Monte Carlo financial simulations, or training specific AI model weights. If the job takes 48 hours, the 500ms latency penalty of LEO is irrelevant.” One immediate application involves processing satellite-generated data in orbit, he said.
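Banerjee’s latency point is easy to sanity-check: against a multi-hour batch job, a LEO round trip is noise. A back-of-the-envelope calculation using only the 500 ms and 48-hour figures from the quote above:

```python
# Compare a ~500 ms LEO round-trip latency penalty against a 48-hour
# batch job; figures come from the Gartner quote, not measurements.
leo_round_trip_s = 0.5
job_duration_s = 48 * 3600  # 48 hours in seconds

overhead_fraction = leo_round_trip_s / job_duration_s
print(f"Latency overhead: {overhead_fraction * 100:.6f}% of total job time")
# Latency overhead: 0.000289% of total job time
```

The same reasoning explains why latency-sensitive interactive hosting stays on the ground while batch compute is a plausible orbital fit.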
Earth observation satellites using synthetic aperture radar generate roughly 10 gigabytes per second, but limited downlink bandwidth creates bottlenecks. Processing data in

Read More »

Here’s what Oracle’s soaring infrastructure spend could mean for enterprises

He said he had earlier told analysts in a separate call that margins for AI workloads in these data centers would be in the 30% to 40% range over the life of a customer contract. Kehring offered reassurance that there would be demand for the data centers when they were completed, pointing to Oracle’s increasing remaining performance obligations, or services contracted but not yet delivered, up $68 billion on the previous quarter, and saying that Oracle has been seeing unprecedented demand for AI workloads driven by the likes of Meta and Nvidia.

Rising debt and margin risks raise flags for CIOs

For analysts, though, the swelling debt load is hard to dismiss, even with Oracle’s attempts to de-risk its spend and squeeze more efficiency out of its buildouts. Gogia sees Oracle already under pressure, with the financial ecosystem around the company pricing the risk — one of the largest debts in corporate history, crossing $100 billion even before the capex spend this quarter — evident in the rising cost of insuring the debt and the shift in credit outlook. “The combination of heavy capex, negative free cash flow, increasing financing cost and long-dated revenue commitments forms a structural pressure that will invariably find its way into the commercial posture of the vendor,” Gogia said, hinting at an “eventual” increase in pricing of the company’s offerings. He was equally unconvinced by Magouyrk’s assurances about the margin profile of AI workloads, as he believes that AI infrastructure, particularly GPU-heavy clusters, delivers significantly lower margins in the early years because utilisation takes time to ramp.

Read More »

New Nvidia software gives data centers deeper visibility into GPU thermals and reliability

Addressing the challenge

Modern AI accelerators now draw more than 700W per GPU, and multi-GPU nodes can reach 6kW, creating concentrated heat zones, rapid power swings, and a higher risk of interconnect degradation in dense racks, according to Manish Rawat, semiconductor analyst at TechInsights. Traditional cooling methods and static power planning increasingly struggle to keep pace with these loads. “Rich vendor telemetry covering real-time power draw, bandwidth behavior, interconnect health, and airflow patterns shifts operators from reactive monitoring to proactive design,” Rawat said. “It enables thermally aware workload placement, faster adoption of liquid or hybrid cooling, and smarter network layouts that reduce heat-dense traffic clusters.” Rawat added that the software’s fleet-level configuration insights can also help operators catch silent errors caused by mismatched firmware or driver versions. This can improve training reproducibility and strengthen overall fleet stability. “Real-time error and interconnect health data also significantly accelerates root-cause analysis, reducing MTTR and minimizing cluster fragmentation,” Rawat said. These operational pressures can shape budget decisions and infrastructure strategy at the enterprise level.
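To make “thermally aware workload placement” concrete, here is a minimal sketch of the idea: given per-rack telemetry, schedule the next job on the coolest rack that still has power headroom. The telemetry fields and rack figures are hypothetical illustrations of the concept, not fields of the actual Nvidia software.

```python
# Minimal sketch of thermally aware workload placement. Telemetry
# fields (power draw, inlet temperature) are hypothetical, not an
# actual Nvidia API.
from dataclasses import dataclass

@dataclass
class RackTelemetry:
    rack_id: str
    power_draw_kw: float   # current draw across the rack's GPUs
    power_limit_kw: float  # rack power budget
    inlet_temp_c: float    # cooling-loop inlet temperature

def place_job(racks: list[RackTelemetry], job_kw: float) -> str:
    """Choose the coolest rack that still has power headroom for the job."""
    candidates = [r for r in racks if r.power_limit_kw - r.power_draw_kw >= job_kw]
    if not candidates:
        raise RuntimeError("no rack has enough power headroom")
    return min(candidates, key=lambda r: r.inlet_temp_c).rack_id

racks = [
    RackTelemetry("rack-a", power_draw_kw=38.0, power_limit_kw=42.0, inlet_temp_c=27.5),
    RackTelemetry("rack-b", power_draw_kw=20.0, power_limit_kw=42.0, inlet_temp_c=24.0),
    RackTelemetry("rack-c", power_draw_kw=30.0, power_limit_kw=42.0, inlet_temp_c=22.0),
]

print(place_job(racks, job_kw=6.0))  # rack-c: coolest rack with >= 6 kW headroom
```

The same telemetry-driven pattern generalizes to the other uses Rawat lists, such as steering traffic away from heat-dense clusters or flagging racks whose firmware versions diverge from the fleet.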

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has been a regular at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that builds agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which had all released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »