
What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate


In 2017, fresh off a PhD on theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from building AI that played games with superhuman skill and was starting up a secret project to predict the structures of proteins. He applied for a job.

Just three years later, Jumper celebrated a stunning win that few had seen coming. With CEO Demis Hassabis, he had co-led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching the accuracy of painstaking techniques used in the lab, and doing it many times faster—returning results in hours instead of months.

AlphaFold 2 had cracked a 50-year-old grand challenge in biology. “This is the reason I started DeepMind,” Hassabis told me a few years ago. “In fact, it’s why I’ve worked my whole career in AI.” In 2024, Jumper and Hassabis shared a Nobel Prize in chemistry.

It was five years ago this week that AlphaFold 2’s debut took scientists by surprise. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out.

“It’s been an extraordinary five years,” Jumper says, laughing: “It’s hard to remember a time before I knew tremendous numbers of journalists.”

AlphaFold 2 was followed by AlphaFold Multimer, which could predict structures that contained more than one protein, and then AlphaFold 3, the fastest version yet. Google DeepMind also let AlphaFold loose on UniProt, a vast protein database used and updated by millions of researchers around the world. It has now predicted the structures of some 200 million proteins, almost all that are known to science.

Despite his success, Jumper remains modest about AlphaFold’s achievements. “That doesn’t mean that we’re certain of everything in there,” he says. “It’s a database of predictions, and it comes with all the caveats of predictions.”

A hard problem

Proteins are the biological machines that make living things work. They form muscles, horns, and feathers; they carry oxygen around the body and ferry messages between cells; they fire neurons, digest food, power the immune system; and so much more. But understanding exactly what a protein does (and what role it might play in various diseases or treatments) involves figuring out its structure—and that’s hard.

Proteins are made from strings of amino acids that chemical forces twist up into complex knots. An untwisted string gives few clues about the structure it will form. In theory, most proteins could take on an astronomical number of possible shapes. The task is to predict the correct one.
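Just how astronomical that number is can be made concrete with a back-of-envelope, Levinthal-style count. (The figure of three conformations per amino acid is a standard illustrative assumption, not a measured value.)

```python
# Back-of-envelope Levinthal-style estimate of a protein's
# conformational search space. Assumes ~3 backbone conformations
# per amino acid, a standard illustrative figure.
CONFS_PER_RESIDUE = 3
residues = 100  # a smallish protein

total_shapes = CONFS_PER_RESIDUE ** residues
print(f"~{total_shapes:.2e} possible shapes")  # ≈ 5.15e47

# Even at a trillion shapes per second, brute-force enumeration
# would take unimaginably longer than the age of the universe.
years = total_shapes / 1e12 / (3600 * 24 * 365)
print(f"~{years:.1e} years at 10^12 shapes/second")
```

This is why structure prediction is a prediction problem rather than a search problem: no amount of enumeration gets you there.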

Jumper and his team built AlphaFold 2 using a type of neural network called a transformer, the same technology that underpins large language models. Transformers are very good at paying attention to specific parts of a larger puzzle.

But Jumper puts a lot of the success down to making a prototype model that they could test quickly. “We got a system that would give wrong answers at incredible speed,” he says. “That made it easy to start becoming very adventurous with the ideas you try.”


They stuffed the neural network with as much information about protein structures as they could, such as how proteins across certain species have evolved similar shapes. And it worked even better than they expected. “We were sure we had made a breakthrough,” says Jumper. “We were sure that this was an incredible advance in ideas.”

What he hadn’t foreseen was that researchers would download his software and start using it straight away for so many different things. Normally, it’s the thing a few iterations down the line that has the real impact, once the kinks have been ironed out, he says: “I’ve been shocked at how responsibly scientists have used it, in terms of interpreting it, and using it in practice about as much as it should be trusted in my view, neither too much nor too little.”

Any projects stand out in particular? 

Honeybee science

Jumper brings up a research group that uses AlphaFold to study disease resistance in honeybees. “They wanted to understand this particular protein as they look at things like colony collapse,” he says. “I never would have said, ‘You know, of course AlphaFold will be used for honeybee science.’”

He also highlights a few examples of what he calls off-label uses of AlphaFold—“in the sense that it wasn’t guaranteed to work”—where the ability to predict protein structures has opened up new research techniques. “The first is very obviously the advances in protein design,” he says. “David Baker and others have absolutely run with this technology.”

Baker, a computational biologist at the University of Washington, was a co-winner of last year’s chemistry Nobel, alongside Jumper and Hassabis, for his work on creating synthetic proteins to perform specific tasks—such as treating disease or breaking down plastics—better than natural proteins can.

Baker and his colleagues have developed their own structure prediction tool, RoseTTAFold, inspired by AlphaFold’s approach. But they have also experimented with AlphaFold Multimer to predict which of their designs for potential synthetic proteins will work.

“Basically, if AlphaFold confidently agrees with the structure you were trying to design, then you make it—and if AlphaFold says ‘I don’t know,’ you don’t make it. That alone was an enormous improvement.” It can make the design process 10 times faster, says Jumper.

Another off-label use that Jumper highlights: Turning AlphaFold into a kind of search engine. He mentions two separate research groups that were trying to understand exactly how human sperm cells hooked up with eggs during fertilization. They knew one of the proteins involved but not the other, he says: “And so they took a known egg protein and ran all 2,000 human sperm surface proteins, and they found one that AlphaFold was very sure stuck against the egg.” They were then able to confirm this in the lab.

“This notion that you can use AlphaFold to do something you couldn’t do before—you would never do 2,000 structures looking for one answer,” he says. “This kind of thing I think is really extraordinary.”
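The screening workflow Jumper describes boils down to a simple rank-and-filter loop. In the sketch below, `predict` is a hypothetical stand-in for an AlphaFold-Multimer-style model that returns a 0-to-1 confidence score for two proteins binding; real pipelines use interface confidence metrics in much the same role.

```python
# Sketch of "AlphaFold as a search engine": pair one known bait
# protein against every candidate and rank by the model's own
# confidence that the two stick together. `predict` is a
# caller-supplied function standing in for an AlphaFold-Multimer-
# style model returning a 0-1 interface confidence score.

def screen(bait_seq, candidates, predict, threshold=0.8):
    """Return (name, score) hits above threshold, best first."""
    hits = []
    for name, seq in candidates.items():
        score = predict(bait_seq, seq)
        if score >= threshold:
            hits.append((name, score))
    # Highest-confidence candidates go on to lab validation.
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Toy usage with a fake predictor (a real run would score all
# ~2,000 sperm surface proteins against the known egg protein):
fake_predict = lambda bait, seq: 0.95 if seq == "MATCH" else 0.2
print(screen("EGG", {"p1": "MISS", "p2": "MATCH"}, fake_predict))
# -> [('p2', 0.95)]
```

The point of the pattern is that each prediction costs hours instead of months, so running thousands of them to find one hit becomes feasible.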

Five years on

When AlphaFold 2 came out, I asked a handful of early adopters what they made of it. Reviews were good, but the technology was too new to know for sure what long-term impact it might have. I caught up with one of those people to hear his thoughts five years on.

Kliment Verba is a molecular biologist who runs a lab at the University of California, San Francisco. “It’s an incredibly useful technology, there’s no question about it,” he tells me. “We use it every day, all the time.”

But it’s far from perfect. A lot of scientists use AlphaFold to study pathogens or to develop drugs. This involves looking at interactions between multiple proteins or between proteins and even smaller molecules in the body. But AlphaFold is known to be less accurate at predicting structures that involve multiple proteins, or how their interactions unfold over time.

Verba says he and his colleagues have been using AlphaFold long enough to get used to its limitations. “There are many cases where you get a prediction and you have to kind of scratch your head,” he says. “Is this real or is this not? It’s not entirely clear—it’s sort of borderline.”

“It’s sort of the same thing as ChatGPT,” he adds. “You know—it will bullshit you with the same confidence as it would give a true answer.”

Still, Verba’s team uses AlphaFold (both 2 and 3, because they have different strengths, he says) to run virtual versions of their experiments before running them in the lab. Using AlphaFold’s results, they can narrow down the focus of an experiment—or decide that it’s not worth doing.

It can really save time, he says: “It hasn’t really replaced any experiments, but it’s augmented them quite a bit.”

New wave  

AlphaFold was designed as a general-purpose tool. Now multiple startups and university labs are building on its success to develop a new wave of tools more tailored to drug discovery. This year, a collaboration between MIT researchers and the AI drug company Recursion produced a model called Boltz-2, which predicts not only the structure of proteins but also how well potential drug molecules will bind to their target.

Last month, the startup Genesis Molecular AI released another structure prediction model called Pearl, which the firm claims is more accurate than AlphaFold 3 for certain queries that are important for drug development. Pearl is interactive, so that drug developers can feed any additional data they may have to the model to guide its predictions.

AlphaFold was a major leap, but there’s more to do, says Evan Feinberg, Genesis Molecular AI’s CEO: “We’re still fundamentally innovating, just with a better starting point than before.”

Genesis Molecular AI is pushing margins of error down from less than two angstroms, the de facto industry standard set by AlphaFold, to less than one angstrom—one 10-millionth of a millimeter, or the width of a single hydrogen atom.

“Small errors can be catastrophic for predicting how well a drug will actually bind to its target,” says Michael LeVine, vice president of modeling and simulation at the firm. That’s because chemical forces that interact at one angstrom can stop doing so at two. “It can go from ‘They will never interact’ to ‘They will,’” he says.
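The distance sensitivity LeVine describes can be illustrated with a textbook 12-6 Lennard-Jones potential. (The well depth and sigma below are generic illustrative values, not parameters for any specific atom pair.)

```python
# How sharply a nonbonded contact depends on distance, using a
# textbook 12-6 Lennard-Jones potential. epsilon (well depth, in
# arbitrary energy units) and sigma (in angstroms) are generic
# illustrative values, not fitted to any real atom pair.

def lennard_jones(r, epsilon=1.0, sigma=3.4):
    """Pair interaction energy at separation r (angstroms)."""
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

r_min = 2 ** (1 / 6) * 3.4  # ~3.8 A: the optimal contact distance

print(lennard_jones(r_min))        # ≈ -1.0, the full well depth
print(lennard_jones(r_min + 1.0))  # 1 A too far: over half the attraction gone
print(lennard_jones(r_min - 1.0))  # 1 A too close: strongly repulsive
```

A one-angstrom error in a predicted structure can thus flip a contact from favorable to essentially nonexistent, or to a steric clash, which is exactly why sub-angstrom accuracy matters for binding predictions.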

With so much activity in this space, how soon should we expect new types of drugs to hit the market? Jumper is pragmatic. Protein structure prediction is just one step of many, he says: “This was not the only problem in biology. It’s not like we were one protein structure away from curing any diseases.”

Think of it this way, he says. Finding a protein’s structure might previously have cost $100,000 in the lab: “If we were only a hundred thousand dollars away from doing a thing, it would already be done.”

At the same time, researchers are looking for ways to do as much as they can with this technology, says Jumper: “We’re trying to figure out how to make structure prediction an even bigger part of the problem, because we have a nice big hammer to hit it with.”

In other words, they want to make everything into nails? “Yeah, let’s make things into nails,” he says. “How do we make this thing that we made a million times faster a bigger part of our process?”

What’s next?

Jumper’s next act? He wants to fuse the deep but narrow power of AlphaFold with the broad sweep of LLMs.  

“We have machines that can read science. They can do some scientific reasoning,” he says. “And we can build amazing, superhuman systems for protein structure prediction. How do you get these two technologies to work together?”

That makes me think of a system called AlphaEvolve, which is being built by another team at Google DeepMind. AlphaEvolve uses an LLM to generate possible solutions to a problem and a second model to check them, filtering out the trash. Researchers have already used AlphaEvolve to make a handful of practical discoveries in math and computer science.    
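The generate-and-check pattern behind a system like AlphaEvolve can be sketched in a few lines. The proposer and verifier below are toy placeholders, not DeepMind’s actual components: the proposer stands in for an LLM suggesting candidates, and the verifier for a checker that filters out the trash.

```python
import random

# Minimal sketch of a propose-and-verify loop in the spirit of
# AlphaEvolve: one model generates candidate solutions, a second
# checks them, and only verified candidates survive. Toy task:
# "discover" an integer whose square is 1369.

def proposer(rng):
    return rng.randint(1, 100)  # LLM stand-in: propose a candidate

def verifier(candidate):
    return candidate * candidate == 1369  # checker: keep only what passes

def evolve(rng, budget=10_000):
    for _ in range(budget):
        c = proposer(rng)
        if verifier(c):
            return c
    return None  # budget exhausted without a verified solution

print(evolve(random.Random(0)))  # -> 37
```

The appeal of the split is that the proposer can be creative and unreliable, because nothing it says is accepted until the verifier confirms it.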

Is that what Jumper has in mind? “I won’t say too much on methods, but I’ll be shocked if we don’t see more and more LLM impact on science,” he says. “I think that’s the exciting open question that I’ll say almost nothing about. This is all speculation, of course.”

Jumper was 39 when he won his Nobel Prize. What’s next for him?

“It worries me,” he says. “I believe I’m the youngest chemistry laureate in 75 years.” 

He adds: “I’m at the midpoint of my career, roughly. I guess my approach to this is to try to do smaller things, little ideas that you keep pulling on. The next thing I announce doesn’t have to be, you know, my second shot at a Nobel. I think that’s the trap.”

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Apstra founder launches Aria to tackle AI networking performance

Aria’s technical approach differs from incumbent vendors in its focus on end-to-end path optimization rather than individual switch performance. Karam argues that traditional networking vendors think of themselves primarily as switch companies, with software efforts concentrated on switch operating systems rather than cluster-wide operational models. “It’s no longer just about

Read More »

Gluware tackles AI agent coordination with Titan platform

The first phase focused on configuration management and drift detection. Gluware’s system identified when network devices deviated from approved configurations and proposed fixes, but network operations teams manually reviewed and approved each remediation. The second phase introduced automatic remediation. As customers gained confidence, they allowed the system to automatically correct

Read More »

Ransomware gangs find a new hostage: Your AWS S3 buckets

To succeed, attackers typically look for S3 buckets that have: versioning disabled ( so old versions can’t be restored), object-lock disabled ( so files can be overwritten or deleted), wide write permissions (via mis-configured IAM policies or leaked credentials), and hold high-value data (backup files, production config dumps). Once inside,

Read More »

North America Adds 12 Rigs Week on Week

North America added 12 rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was published on November 21. The total U.S. rig count increased by five week on week and the total Canada rig count rose by seven during the same period, taking the total North America rig count up to 749, comprising 554 rigs from the U.S. and 195 rigs from Canada, the count outlined. Of the total U.S. rig count of 554, 533 rigs are categorized as land rigs, 19 are categorized as offshore rigs, and two are categorized as inland water rigs. The total U.S. rig count is made up of 419 oil rigs, 127 gas rigs, and eight miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 481 horizontal rigs, 61 directional rigs, and 12 vertical rigs. Week on week, the U.S. land rig count rose by six, its offshore rig count remained unchanged, and its inland water rig count dropped by one, Baker Hughes highlighted. The U.S. oil and gas rig counts each increased by two, and the country’s miscellaneous rig count rose by one, week on week, the count showed. The U.S. horizontal rig count increased by five, its vertical rig count rose by one, and its directional rig count dropped by one, week on week, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, Wyoming added three rigs, and Pennsylvania, Oklahoma, and New Mexico each added one rig. North Dakota, Louisiana, and Alaska each dropped one rig, week on week, the count revealed.   A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the Granite Wash basin added two rigs, and the Marcellus and Permian basins

Read More »

Burgum Signs Order to ‘Unleash American Offshore Energy’

A statement posted on the U.S. Department of the Interior’s (DOI) website on Thursday revealed that U.S. Secretary of the Interior Doug Burgum has signed an order “to unleash American offshore energy”. In this statement, the DOI announced a Secretary’s Order, titled Unleashing American Offshore Energy, which the DOI said directs the Bureau of Ocean Energy Management (BOEM) “to take the necessary steps, in accordance with federal law, to terminate the restrictive Biden 2024-2029 National Outer Continental Shelf Oil and Gas Leasing Program and replace it with a new, expansive 11th National Outer Continental Shelf Oil and Gas Leasing Program by October 2026”. “As part of this directive, the Department is releasing the Secretary’s Draft Proposed Program for the 11th National Outer Continental Shelf Oil and Gas Leasing Program,” the DOI noted in the statement. “Under the new proposal for the 2026-2031 National Outer Continental Shelf Oil and Gas Leasing Program, Interior is taking a major step to boost United States energy independence and sustain domestic oil and gas production,” it added. “The proposal includes as many as 34 potential offshore lease sales across 21 of 27 existing Outer Continental Shelf planning areas, covering approximately 1.27 billion acres. That includes 21 areas off the coast of Alaska, seven in the Gulf of America, and six along the Pacific coast,” it continued. “The proposal also includes the Secretary’s decision to create a new administrative planning area, the South-Central Gulf of America,” it went on to state. In its statement, the DOI said the current proposal follows a public request for information and comment published in April 2025. The DOI stated that it received more than 86,000 comments from stakeholders, states, industry representatives, and members of the public. Feedback from those comments informed the proposal released on Thursday, the DOI highlighted.  The

Read More »

Russian Oil Offered to India at Deep Discount

Russia’s flagship Urals crude is being offered to India’s refiners at the cheapest price in at least two years after US sanctions on top producers Rosneft PJSC and Lukoil PJSC upended a lucrative trade. The price of Urals for Indian refiners has slipped to a discount of as much as $7 a barrel to Dated Brent on a delivered basis, according to people familiar with the matter, who asked not to be identified discussing sensitive information. The offer is for cargoes loading in December and arriving in January, they added. Most Indian refiners have skipped placing orders for Russian crude that would arrive after sanctions on Rosneft and Lukoil took effect last week, all but ending a trade that flourished after Russia’s invasion of Ukraine in 2022, as India took advantage of a steady flow of cheaper oil. In recent days, however, the tone across Indian refiners has changed due to the cheaper Urals prices, with some processors now open to purchasing Russian oil from non-sanctioned sellers, the people said. Still, only around a fifth of the cargoes being offered are free from non-blacklisted entities, they added. Prior to the sanctions on Rosneft and Lukoil, the discount for Urals was at around $3 a barrel. Since the US sanctions, which add to similar curbs on Gazprom Neft PJSC and Surgutneftegas PJSC, India’s refiners have purchased more crude from other regions including the Middle East. The Urals blend is shipped from Russia’s western ports. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

OEUK Awards Winners Revealed

Industry body Offshore Energies UK’s (OEUK) 2025 awards ceremony took place in Aberdeen, Scotland, on Thursday night, crowning several winners across a range of categories. “In Aberdeen … [on Thursday], the UK’s offshore energy industry paused to celebrate its people – from those just starting out to those whose careers have spanned the North Sea’s six-decade story,” OEUK said in a statement sent to Rigzone. “At Offshore Energies UK’s (OEUK) 2025 Awards … the spotlight turned to the individuals and companies shaping the sector’s future, even as it faces a complex fiscal landscape and a subsequent downturn in activity,” it added. “The evening recognized young professionals bringing fresh ideas to established challenges, as well as the engineers, technicians, and leaders whose experience continues to anchor an industry in change,” it went on to state. OEUK Chief Executive David Whitehouse said in the statement that the night was a reminder of both the continuity and evolution within the energy workforce. “Our sector has always been defined by its people; their skills, resilience, and ingenuity,” Whitehouse added in the statement. “What we saw this evening is how that same spirit is driving innovation across carbon capture, hydrogen, and offshore wind, while continuing to deliver the oil and gas that the UK still depends on,” he added. “We hope the Autumn Budget recognizes the value of these skilled jobs and the communities they sustain,” he continued. “This is a story of transition, but also of continuity – of people who’ve powered the country for decades and are now helping to shape how it’s powered for decades to come,” Whitehouse said. The budget, or financial statement, is a statement made to the House of Commons by the Chancellor of the Exchequer on the nation’s finances and the government’s proposals for changes to taxation, the

Read More »

Owning the edge: How utilities can lead in the age of onsite power

Artificial intelligence is reshaping the power landscape. Experts predict that AI-focused data centers in the U.S. could increase their power demand by a factor of 30 by 2035 — from roughly 4 GW in 2024 to about 123 GW — and that’s a conservative estimate. This spectacular growth will make AI one of the most dominant loads on the American grid, rivaling the demand from entire industrial sectors. For utilities, the surge in demand represents both a challenge and an opportunity. While the electric grid remains the primary power source for data centers, there’s a growing mismatch between data center construction schedules and utility infrastructure timelines, compounded by transmission and distribution bottlenecks. Developers need megawatts within months; utility capacity delivery can take years. This gap has become increasingly difficult to reconcile, leading data centers to consider onsite generation as a way to bypass delays and get power at AI speed. Yet, onsite power doesn’t have to be a competitive threat. When utilities take the lead by owning and integrating onsite systems, they can provide fast, reliable capacity that helps customers avoid grid bottlenecks today. In the meantime, utilities can continue to plan and build out future connections. This approach allows utilities to unlock new revenue opportunities, enhance grid resilience, and deliver better service for all ratepayers, all without jeopardizing future growth. Deploying onsite power strategically Utility-owned onsite generation means putting generation capacity at or near the point of use, such as a data center or other large load. These systems can be sized to fully power a facility if needed, operating independently of the grid in island mode. They can also operate alongside the grid to meet demand when the grid can supply only part of the load. In both cases, the system provides continuous, reliable power immediately and for as long
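Taken at face value, the cited endpoints imply a remarkably steep compound growth rate. A quick back-of-the-envelope check (treating the ~4 GW and ~123 GW figures as exact endpoints, purely for illustration) can be sketched as:

```python
# Back-of-the-envelope check on the projected AI data-center load growth.
# Assumed endpoints (from the cited projection): ~4 GW in 2024 -> ~123 GW by 2035.
start_gw, end_gw = 4.0, 123.0
years = 2035 - 2024  # 11-year horizon

growth_factor = end_gw / start_gw        # overall multiple over the period
cagr = growth_factor ** (1 / years) - 1  # implied compound annual growth rate

print(f"growth factor: {growth_factor:.1f}x")  # ~30.8x, consistent with "a factor of 30"
print(f"implied CAGR:  {cagr:.1%}")            # ~36.5% per year, sustained for a decade
```

That sustained ~36% annual growth rate is what makes the mismatch with multi-year utility delivery timelines so acute.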

Read More »

Baker Hughes Books 1.3 GW Gas Turbine Order from Dynamis

Dynamis Power Solutions LLC has awarded Baker Hughes Co a contract for the supply of 25 aeroderivative gas turbines with a combined capacity of 1.3 gigawatts (GW). The turbines, including LM2500, LM6000 and LM9000, will be deployed for “mobile power generation across a wide range of oil and gas applications, including upstream, refining and petrochemical”, said a joint statement Thursday. “Dynamis packages gas turbines and generators in its distinctive mobile power solutions. As part of the agreement, Dynamis will package 10 of Baker Hughes’ efficient and dry low emissions LM9000 gas turbines in a new offering called the DT70 – 70 MW – which will total 700 MW of gas turbine power generation capacity, delivering the oil and gas industry’s highest reported mobile power density (MW per square foot) to date”, the companies said. Matthew Crawford, chief executive of The Woodlands, Texas-based Dynamis, said, “Through our decade-long collaboration with Baker Hughes, we are redefining what’s possible in the mobile power generation market for oil and gas through our delivery of a new solution with power density once thought unattainable. Our use of LM9000s will offer twice the power of our flagship solution – the best-in-class DT35 – without compromising flexibility, reliability or efficiency”. The statement said, “Designed to support unique and complex operational needs of industries requiring natural gas power solutions, the DT70 is based off Dynamis’ successful DT35 – a 1.5-GW installed base which has been in operation for nearly a decade in more than 1,200 locations throughout the North America region”. “Dynamis’ new application of Baker Hughes’ LM9000s boasts enhanced versatility for large power consumers in the oil and gas space, resilience in challenging environments and the ability to power – benefits that are emphasized by the unit’s compact footprint and record-setting short rig-up and commissioning times”, it added. 
The companies did not disclose the contract price.
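The announced capacity figures hang together arithmetically. A quick sketch (the only per-unit rating stated above is the 70-MW DT70 packaging of the LM9000; the per-unit average for the remaining turbines is derived here, not disclosed):

```python
# Sanity check of the announced turbine fleet capacity.
# Stated: 25 turbines totaling 1.3 GW, of which 10 are LM9000s
# packaged as 70-MW DT70 units.
total_mw, total_units = 1300, 25
dt70_units, dt70_unit_mw = 10, 70

dt70_block_mw = dt70_units * dt70_unit_mw
print(dt70_block_mw)  # 700 -> matches the announced 700-MW DT70 block

# The remaining 15 LM2500/LM6000 turbines must therefore average:
remaining_units = total_units - dt70_units
remaining_mw = total_mw - dt70_block_mw
print(remaining_mw / remaining_units)  # 40.0 -> ~40 MW per unit, plausible for these frames
```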

Read More »

Microsoft’s Fairwater Atlanta and the Rise of the Distributed AI Supercomputer

Microsoft’s second Fairwater data center in Atlanta isn’t just “another big GPU shed.” It represents the other half of a deliberate architectural experiment: proving that two massive AI campuses, separated by roughly 700 miles, can operate as one coherent, distributed supercomputer. The Atlanta installation is the latest expression of Microsoft’s AI-first data center design: purpose-built for training and serving frontier models rather than supporting mixed cloud workloads. It links directly to the original Fairwater campus in Wisconsin, as well as to earlier generations of Azure AI supercomputers, through a dedicated AI WAN backbone that Microsoft describes as the foundation of a “planet-scale AI superfactory.” Inside a Fairwater Site: Preparing for Multi-Site Distribution Efficient multi-site training only works if each individual site behaves as a clean, well-structured unit. Microsoft’s intra-site design is deliberately simplified so that cross-site coordination has a predictable abstraction boundary—essential for treating multiple campuses as one distributed AI system. Each Fairwater installation presents itself as a single, flat, high-regularity cluster: Up to 72 NVIDIA Blackwell GPUs per rack, using GB200 NVL72 rack-scale systems. NVLink provides the ultra-low-latency, high-bandwidth scale-up fabric within the rack, while the Spectrum-X Ethernet stack handles scale-out. Each rack delivers roughly 1.8 TB/s of GPU-to-GPU bandwidth and exposes a multi-terabyte pooled memory space addressable via NVLink—critical for large-model sharding, activation checkpointing, and parallelism strategies. Racks feed into a two-tier Ethernet scale-out network offering 800 Gbps GPU-to-GPU connectivity with very low hop counts, engineered to scale to hundreds of thousands of GPUs without encountering the classic port-count and topology constraints of traditional Clos fabrics. 
Microsoft confirms that the fabric relies heavily on: SONiC-based switching and a broad commodity Ethernet ecosystem to avoid vendor lock-in and accelerate architectural iteration. Custom network optimizations, such as packet trimming, packet spray, high-frequency telemetry, and advanced congestion-control mechanisms, to prevent collective
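To illustrate why a two-tier scale-out fabric can reach the scale described above while keeping hop counts low, here is a minimal sketch. The switch radix and the half-down/half-up port split are illustrative assumptions, not Microsoft's disclosed design:

```python
# Minimal sketch: endpoint capacity of a two-tier (leaf-spine) Ethernet fabric.
# Illustrative assumption: switches of radix R, with leaf ports split evenly
# between downlinks (to GPU NICs) and uplinks (to spines).
def two_tier_endpoints(radix: int) -> int:
    leaves = radix           # each spine port connects one leaf, so at most R leaves
    down_ports = radix // 2  # half of each leaf's ports face endpoints
    return leaves * down_ports

# With a hypothetical 512-port switch, two tiers already reach:
print(two_tier_endpoints(512))  # 131072 endpoints, any two at most 3 switch hops apart
```

Pushing toward "hundreds of thousands of GPUs" without adding a third tier is what drives the demand for very high-radix switches and the congestion-control machinery described above.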

Read More »

Land & Expand: Hyperscale, AI Factory, Megascale

Land & Expand is Data Center Frontier’s periodic roundup of notable North American data center development activity, tracking the newest sites, land plays, retrofits, and hyperscale campus expansions shaping the industry’s build cycle. October delivered a steady cadence of announcements, with several megascale projects advancing from concept to commitment. The month was defined by continued momentum in OpenAI and Oracle’s Stargate initiative (now spanning multiple U.S. regions) as well as major new investments from Google, Meta, DataBank, and emerging AI cloud players accelerating high-density reuse strategies. The result is a clearer picture of how the next wave of AI-first infrastructure is taking shape across the country. Google Begins $4B West Memphis Hyperscale Buildout Google formally broke ground on its $4 billion hyperscale campus in West Memphis, Arkansas, marking the company’s first data center in the state and the anchor for a new Mid-South operational hub. The project spans just over 1,000 acres, with initial site preparation and utility coordination already underway. Google and Entergy Arkansas confirmed a 600 MW solar generation partnership, structured to add dedicated renewable supply to the regional grid. As part of the launch, Google announced a $25 million Energy Impact Fund for local community affordability programs and energy-resilience improvements—an unusually early community-benefit commitment for a first-phase hyperscale project. Cooling specifics have not yet been made public. Water sourcing—whether reclaimed, potable, or a seasonal hybrid—remains under review, as the company finalizes environmental permits. Public filings reference a large-scale onsite water treatment facility, similar to Google’s deployments in The Dalles and Council Bluffs. 
Local governance documents show that prior to the October announcement, West Memphis approved a 30-year PILOT via Groot LLC (Google’s land assembly entity), with early filings referencing a typical placeholder of ~50 direct jobs. At launch, officials emphasized hundreds of full-time operations roles and thousands

Read More »

The New Digital Infrastructure Geography: Green Street’s David Guarino on AI Demand, Power Scarcity, and the Next Phase of Data Center Growth

As the global data center industry races through its most frenetic build cycle in history, one question continues to define the market’s mood: is this the peak of an AI-fueled supercycle, or the beginning of a structurally different era for digital infrastructure? For Green Street Managing Director and Head of Global Data Center and Tower Research David Guarino, the answer—based firmly on observable fundamentals—is increasingly clear. Demand remains blisteringly strong. Capital appetite is deepening. And the very definition of a “data center market” is shifting beneath the industry’s feet. In a wide-ranging discussion with Data Center Frontier, Guarino outlined why data centers continue to stand out in the commercial real estate landscape, how AI is reshaping underwriting and development models, why behind-the-meter power is quietly reorganizing the U.S. map, and what Green Street sees ahead for rents, REITs, and the next wave of hyperscale expansion. A ‘Safe’ Asset in an Uncertain CRE Landscape Among institutional investors, the post-COVID era was the moment data centers stepped decisively out of “niche” territory. Guarino notes that pandemic-era reliance on digital services crystallized a structural recognition: data centers deliver stable, predictable cash flows, anchored by the highest-credit tenants in global real estate. Hyperscalers today dominate new leasing and routinely sign 15-year (or longer) contracts, a duration largely unmatched across CRE categories. When compared with one-year apartment leases, five-year office leases, or mall anchor terms, the stability story becomes plain. “These are AAA-caliber companies signing the longest leases in the sector’s history,” Guarino said. “From a real estate point of view, that combination of tenant quality and lease duration continues to position the asset class as uniquely durable.” And development returns remain exceptional. 
Even without assuming endless AI growth, the math works: strong demand, rising rents, and high-credit tenants create unusually predictable performance relative to

Read More »

The Flexential Blueprint: New CEO Ryan Mallory on Power, AI, and Bending the Physics Curve

In a coordinated leadership transition this fall, Ryan Mallory has stepped into the role of CEO at Flexential, succeeding Chris Downie. The move, described as thoughtful and planned, signals not a shift in direction, but a reinforcement of the company’s core strategy, with a sharpened focus on the unprecedented opportunities presented by the artificial intelligence revolution. In an exclusive interview on the Data Center Frontier Show Podcast, Mallory outlined a confident vision for Flexential, positioning the company at the critical intersection of enterprise IT and next-generation AI infrastructure. “Flexential will continue to focus on being an industry and market leader in wholesale, multi-tenant, and interconnection capabilities,” Mallory stated, affirming the company’s foundational strengths. His central thesis is that the AI infrastructure boom is not a monolithic wave, but a multi-stage evolution where Flexential’s model is uniquely suited for the emerging “inference edge.” The AI Build Cycle: A Three-Act Play Mallory frames the AI infrastructure market as a three-stage process, each lasting roughly four years. We are currently at the tail end of Stage 1, which began with the ChatGPT explosion three years ago. This phase, characterized by a frantic rush for capacity, has led to elongated lead times for critical infrastructure like generators, switchgear, and GPUs. The capacity from this initial build-out is expected to come online between late 2025 and late 2026. Stage 2, beginning around 2026 and stretching to 2030, will see the next wave of builds, with significant capacity hitting the market in 2028-2029. “This stage will reveal the viability of AI and actual consumption models,” Mallory notes, adding that air-cooled infrastructure will still dominate during this period. Stage 3, looking ahead to the early 2030s, will focus on long-term scale, mirroring the evolution of the public cloud. For Mallory, the enduring nature of this build cycle—contrasted

Read More »

Centersquare Launches $1 Billion Expansion to Scale an AI-Ready North American Data Center Platform

A Platform Built for Both Colo and AI Density The combined Evoque–Cyxtera platform entered the market with hundreds of megawatts of installed capacity and a clear runway for expansion. That scale positioned Centersquare to offer both traditional enterprise colocation and the higher-density, AI-ready footprints increasingly demanded through 2024 and 2025. The addition of these ten facilities demonstrates that the consolidation strategy is gaining traction, giving the platform more owned capacity to densify and more regional optionality as AI deployment accelerates. What’s in the $1 Billion Package — and Why It Matters 1) Lease-to-Own Conversions in Boston & Minneapolis Centersquare’s decision to purchase two long-operated but previously leased sites in Boston and Minneapolis reduces long-term occupancy risk and gives the operator full capex control. Owning the buildings unlocks the ability to schedule power and cooling upgrades on Centersquare’s terms, accelerate retrofits for high-density AI aisles, deploy liquid-ready thermal topologies, and add incremental power blocks without navigating landlord approval cycles. This structural flexibility aligns directly with the platform’s “AI-era backbone” positioning. 2) Eight Additional Data Centers Across Six Metros The acquisitions broaden scale in fast-rising secondary markets—Tulsa, Nashville, Raleigh—while deepening Centersquare’s presence in Dallas and expanding its Canadian footprint in Toronto and Montréal. Dallas remains a core scaling hub, but Nashville and Raleigh are increasingly important for enterprises modernizing their stacks and deploying regional AI workloads at lower cost and with faster timelines than congested Tier-1 corridors. Tulsa provides a network-adjacent, cost-efficient option for disaster recovery, edge aggregation, and latency-tolerant compute. 
In Canada, Toronto and Montréal offer strong enterprise demand, attractive economics, and grid advantages—including Québec’s hydro-powered, low-carbon energy mix—that position them well for AI training spillover and inference workloads requiring reliable, competitively priced power. 3) Self-Funded With Cash on Hand In the current rate environment, funding the entire $1 billion package

Read More »

Fission Forward: Next-Gen Nuclear Power Developments for the AI Data Center Boom

Constellation proposes to begin with 1.5 GW of fast-tracked projects, including 800 MW of battery energy storage and 700 MW of new natural gas generation to address short-term reliability needs. The remaining 4.3 GW represents longer-term investment at the Calvert Cliffs Clean Energy Center: extending both units for an additional 20 years beyond their current 2034 and 2036 license expirations, implementing a 10% uprate that would add roughly 190 MW of output, and pursuing 2 GW of next-generation nuclear at the existing site. For Maryland, a state defined by a dense I-95 fiber corridor, accelerating data center buildout, and rising AI-driven load, the plan could be transformative. If Constellation moves from “option” to “program,” the company estimates that 70% of the state’s electricity supply could come from clean energy sources, positioning Maryland as a top-tier market for 24/7 carbon-free power. TerraPower’s Natrium SMR Clears a Key Federal Milestone On Oct. 23, the Nuclear Regulatory Commission issued the final environmental impact statement (FEIS) for TerraPower’s Natrium small modular reactor in Kemmerer, Wyoming. While not a construction permit, FEIS completion removes a major element of federal environmental risk and keeps the project on track for the next phase of NRC review. TerraPower and its subsidiary, US SFR Owner, LLC, originally submitted the construction permit application on March 28, 2024. Natrium is a sodium-cooled fast reactor producing roughly 345 MW of electric output, paired with a molten-salt thermal-storage system capable of boosting generation to about 500 MW during peak periods. The design combines firm baseload power with flexible, dispatchable capability, an attractive profile for hyperscalers evaluating 24/7 clean energy options in the western U.S. 
The project is part of the DOE’s Advanced Reactor Demonstration Program, intended to replace retiring coal capacity in PacifiCorp’s service territory while showcasing advanced fission technology. For operators planning multi-GW
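The Constellation figures above are internally consistent, which a quick arithmetic sketch confirms (the ~1.9 GW baseline is derived here from the stated 10% uprate and ~190 MW gain; it is not quoted directly in the announcement):

```python
# Quick consistency check of the Constellation capacity figures cited above.
battery_mw, gas_mw = 800, 700
fast_track_mw = battery_mw + gas_mw
print(fast_track_mw)  # 1500 -> matches the stated 1.5 GW of fast-tracked projects

# A 10% uprate adding ~190 MW implies the existing two-unit output is about:
uprate_mw, uprate_percent = 190, 10
implied_baseline_mw = uprate_mw * 100 / uprate_percent
print(implied_baseline_mw)  # 1900.0 -> ~1.9 GW across Calvert Cliffs' two units
```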

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Microsoft President Brad Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas, returning to CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that builds agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »