The AI doomers feel undeterred

It’s a weird time to be an AI doomer.

This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad—very, very bad—for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can’t control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better.

This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.

Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international “red lines” to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science’s most prestigious awards.

But a number of developments over the past six months have put them on the back foot. Talk of an AI bubble has overwhelmed the discourse as tech companies continue to invest in multiple Manhattan Projects’ worth of data centers without any certainty that future demand will match what they’re building.

And then there was the August release of OpenAI’s latest foundation model, GPT-5, which proved something of a letdown. Maybe that was inevitable, since it was the most hyped AI release of all time; OpenAI CEO Sam Altman had boasted that GPT-5 felt “like a PhD-level expert” in every topic and told the podcaster Theo Von that the model was so good, it had made him feel “useless relative to the AI.”

Many expected GPT-5 to be a big step toward AGI, but whatever progress the model may have made was overshadowed by a string of technical bugs and the company’s mystifying, quickly reversed decision to shut off access to every old OpenAI model without warning. And while the new model achieved state-of-the-art benchmark scores, many people felt, perhaps unfairly, that in day-to-day use GPT-5 was a step backward.

All this would seem to threaten some of the very foundations of the doomers’ case. In turn, a competing camp of AI accelerationists, who fear AI is actually not moving fast enough and that the industry is constantly at risk of being smothered by overregulation, is seeing a fresh chance to change how we approach AI safety (or, maybe more accurately, how we don’t).

This is particularly true of the industry types who’ve decamped to Washington: “The Doomer narratives were wrong,” declared David Sacks, the longtime venture capitalist turned Trump administration AI czar. “This notion of imminent AGI has been a distraction and harmful and now effectively proven wrong,” echoed Sriram Krishnan, the tech investor who is now the White House’s senior policy advisor for AI. (Sacks and Krishnan did not reply to requests for comment.)

(There is, of course, another camp in the AI safety debate: the group of researchers and advocates commonly associated with the label “AI ethics.” Though they also favor regulation, they tend to think the speed of AI progress has been overstated and have often written off AGI as a sci-fi story or a scam that distracts us from the technology’s immediate threats. But any potential doomer demise wouldn’t exactly give them the same opening the accelerationists are seeing.)

So where does this leave the doomers? As part of our Hype Correction package, we decided to ask some of the movement’s biggest names to see if the recent setbacks and general vibe shift had altered their views. Are they frustrated that policymakers no longer seem to heed their warnings? Are they quietly adjusting their timelines for the apocalypse?

Recent interviews with 20 people who study or advocate for AI safety and governance—including Nobel Prize winner Geoffrey Hinton, Turing Award winner Yoshua Bengio, and high-profile experts like former OpenAI board member Helen Toner—reveal that rather than feeling chastened or lost in the wilderness, they’re still deeply committed to their cause, believing that AGI remains not just possible but incredibly dangerous.

At the same time, they seem to be grappling with a near contradiction. While they’re somewhat relieved that recent developments suggest AGI is further out than they previously thought (“Thank God we have more time,” says AI researcher Jeffrey Ladish), they also feel angry that people in power are not taking them seriously enough (Daniel Kokotajlo, lead author of a cautionary forecast called “AI 2027,” calls the Sacks and Krishnan tweets “deranged and/or dishonest”).

Broadly speaking, these experts see the talk of an AI bubble as no more than a speed bump, and disappointment in GPT-5 as more distracting than illuminating. They still generally favor more robust regulation and worry that progress on policy—the implementation of the EU AI Act; the passage of the first major American AI safety bill, California’s SB 53; and new interest in AGI risk from some members of Congress—has become vulnerable as Washington overreacts to what doomers see as short-term failures to live up to the hype.

Some were also eager to correct what they see as the most persistent misconceptions about the doomer world. Though their critics routinely mock them for predicting that AGI is right around the corner, they claim that’s never been an essential part of their case: It “isn’t about imminence,” says Berkeley professor Stuart Russell, the author of Human Compatible: Artificial Intelligence and the Problem of Control. Most people I spoke with say their timelines to dangerous systems have actually lengthened slightly in the last year—an important change given how quickly the policy and technical landscapes can shift.

“If someone said there’s a four-mile-diameter asteroid that’s going to hit the Earth in 2067, we wouldn’t say, ‘Remind me in 2066 and we’ll think about it.’”

Many of them, in fact, emphasize the importance of changing timelines. And even if they are just a tad longer now, Toner tells me that one big-picture story of the ChatGPT era is the dramatic compression of these estimates across the AI world. For a long while, she says, AGI was expected in many decades. Now, for the most part, the predicted arrival is sometime in the next few years to 20 years. So even if we have a little bit more time, she (and many of her peers) continue to see AI safety as incredibly, vitally urgent. She tells me that if AGI were possible anytime in even the next 30 years, “It’s a huge fucking deal. We should have a lot of people working on this.”

So despite the precarious moment doomers find themselves in, their bottom line remains that no matter when AGI is coming (and, again, they say it’s very likely coming), the world is far from ready.

Maybe you agree. Or maybe you think this future is far from guaranteed. Or that it’s the stuff of science fiction. You may even think AGI is a great big conspiracy theory. You’re not alone, of course—this topic is polarizing. But whatever you think about the doomer mindset, there’s no getting around the fact that certain people in this world have a lot of influence. So here are some of the most prominent people in the space, reflecting on this moment in their own words.

Interviews have been edited and condensed for length and clarity.

The Nobel laureate who’s not sure what’s coming

Geoffrey Hinton, winner of the Turing Award and the Nobel Prize in physics for pioneering deep learning

The biggest change in the last few years is that there are people who are hard to dismiss who are saying this stuff is dangerous. Like, [former Google CEO] Eric Schmidt, for example, really recognized this stuff could be really dangerous. He and I were in China recently talking to someone on the Politburo, the party secretary of Shanghai, to make sure he really understood—and he did. I think in China, the leadership understands AI and its dangers much better because many of them are engineers.

I’ve been focused on the longer-term threat: When AIs get more intelligent than us, can we really expect that humans will remain in control or even relevant? But I don’t think anything is inevitable. There’s huge uncertainty on everything. We’ve never been here before. Anybody who’s confident they know what’s going to happen seems silly to me. I think this is very unlikely, but maybe it’ll turn out that all the people saying AI is way overhyped are correct. Maybe it’ll turn out that we can’t get much further than the current chatbots—we hit a wall due to limited data. I don’t believe that. I think that’s unlikely, but it’s possible.

I also don’t believe people like Eliezer Yudkowsky, who say if anybody builds it, we’re all going to die. We don’t know that.

But if you go on the balance of the evidence, I think it’s fair to say that most experts who know a lot about AI believe it’s very probable that we’ll have superintelligence within the next 20 years. [Google DeepMind CEO] Demis Hassabis says maybe 10 years. Even [prominent AI skeptic] Gary Marcus would probably say, “Well, if you guys make a hybrid system with good old-fashioned symbolic logic … maybe that’ll be superintelligent.” [Editor’s note: In September, Marcus predicted AGI would arrive between 2033 and 2040.]

And I don’t think anybody believes progress will stall at AGI. I think more or less everybody believes a few years after AGI, we’ll have superintelligence, because the AGI will be better than us at building AI.

So while I think it’s clear that the winds are getting more difficult, simultaneously, people are putting many more resources [into developing advanced AI]. I think progress will continue just because there’s many more resources going in.

The deep learning pioneer who wishes he’d seen the risks sooner

Yoshua Bengio, winner of the Turing Award, chair of the International AI Safety Report, and founder of LawZero

Some people thought that GPT-5 meant we had hit a wall, but that isn’t quite what you see in the scientific data and trends.

There have been people overselling the idea that AGI is tomorrow morning, which commercially could make sense. But if you look at the various benchmarks, GPT-5 is just where you would expect the models at that point in time to be. By the way, it’s not just GPT-5; it’s Claude and Google models, too. In some areas where AI systems weren’t very good, like Humanity’s Last Exam or FrontierMath, they’re getting much better scores now than they were at the beginning of the year.

At the same time, the overall landscape for AI governance and safety is not good. There’s a strong force pushing against regulation. It’s like climate change. We can put our heads in the sand and hope it’s going to be fine, but that doesn’t really deal with the issue.

The biggest disconnect with policymakers is a misunderstanding of the scale of change that is likely to happen if the trend of AI progress continues. A lot of people in business and governments simply think of AI as just another technology that’s going to be economically very powerful. They don’t understand how much it might change the world if trends continue and we approach human-level AI.

Like many people, I had been blinding myself to the potential risks to some extent. I should have seen it coming much earlier. But it’s human. You’re excited about your work and you want to see the good side of it. That makes us a little bit biased in not really paying attention to the bad things that could happen. Even a small chance—like 1% or 0.1%—of creating an accident where billions of people die is not acceptable.

The AI veteran who believes AI is progressing—but not fast enough to prevent the bubble from bursting

Stuart Russell, distinguished professor of computer science, University of California, Berkeley, and author of Human Compatible

I hope the idea that talking about existential risk makes you a “doomer” or is “science fiction” comes to be seen as fringe, given that most leading AI researchers and most leading AI CEOs take it seriously.

There have been claims that AI could never pass a Turing test, or you could never have a system that uses natural language fluently, or one that could parallel-park a car. All these claims just end up getting disproved by progress. People are spending trillions of dollars to make superhuman AI happen. I think they need some new ideas, but there’s a significant chance they will come up with them, because many significant new ideas have happened in the last few years.

My fairly consistent estimate for the last 12 months has been that there’s a 75% chance that those breakthroughs are not going to happen in time to rescue the industry from the bursting of the bubble. Because the investments are consistent with a prediction that we’re going to have much better AI that will deliver much more value to real customers. But if those predictions don’t come true, then there’ll be a lot of blood on the floor in the stock markets.

However, the safety case isn’t about imminence. It’s about the fact that we still don’t have a solution to the control problem. If someone said there’s a four-mile-diameter asteroid that’s going to hit the Earth in 2067, we wouldn’t say, “Remind me in 2066 and we’ll think about it.” We don’t know how long it takes to develop the technology needed to control superintelligent AI.

Looking at precedents, the acceptable level of risk for a nuclear plant melting down is about one in a million per year. Extinction is much worse than that. So maybe set the acceptable risk at one in a billion. But the companies are saying it’s something like one in five. They don’t know how to make it acceptable. And that’s a problem.

The professor trying to set the narrative straight on AI safety

David Krueger, assistant professor in machine learning at the University of Montreal and Yoshua Bengio’s Mila Institute, and founder of Evitable

I think people definitely overcorrected in their response to GPT-5. But there was hype. My recollection was that there were multiple statements from CEOs, at various levels of explicitness, who basically said that by the end of 2025, we’re going to have an automated drop-in replacement remote worker. But it seems like it’s been underwhelming, with agents just not really being there yet.

I’ve been surprised how much these narratives predicting AGI in 2027 capture the public attention. When 2027 comes around, if things still look pretty normal, I think people are going to feel like the whole worldview has been falsified.

And it’s really annoying how often, when I’m talking to people about AI safety, they assume that I think we have really short timelines to dangerous systems, or that I think LLMs or deep learning are going to give us AGI. They ascribe all these extra assumptions to me that aren’t necessary to make the case.

I’d expect we need decades for the international coordination problem. So even if dangerous AI is decades off, it’s already urgent. That point seems really lost on a lot of people. There’s this idea of “Let’s wait until we have a really dangerous system and then start governing it.” Man, that is way too late.

I still think people in the safety community tend to work behind the scenes, with people in power, not really with civil society. It gives ammunition to people who say it’s all just a scam or insider lobbying. That’s not to say that there’s no truth to these narratives, but the underlying risk is still real. We need more public awareness and a broad base of support to have an effective response.

If you actually believe there’s a 10% chance of doom in the next 10 years—which I think a reasonable person should, if they take a close look—then the first thing you think is: “Why are we doing this? This is crazy.” That’s just a very reasonable response once you buy the premise.

The governance expert worried about AI safety’s credibility

Helen Toner, acting executive director of Georgetown University’s Center for Security and Emerging Technology and former OpenAI board member

When I got into the space, AI safety was more of a set of philosophical ideas. Today, it’s a thriving set of subfields of machine learning, filling in the gulf between some of the more “out there” concerns about AI scheming, deception, or power-seeking and real concrete systems we can test and play with.

“I worry that some aggressive AGI timeline estimates from some AI safety people are setting them up for a boy-who-cried-wolf moment.”

AI governance is improving slowly. If we have lots of time to adapt and governance can keep improving slowly, I feel not bad. If we don’t have much time, then we’re probably moving too slow.

I think GPT-5 is generally seen as a disappointment in DC. There’s a pretty polarized conversation around: Are we going to have AGI and superintelligence in the next few years? Or is AI actually just totally all hype and useless and a bubble? The pendulum had maybe swung too far toward “We’re going to have super-capable systems very, very soon.” And so now it’s swinging back toward “It’s all hype.”

I worry that some aggressive AGI timeline estimates from some AI safety people are setting them up for a boy-who-cried-wolf moment. When the predictions about AGI coming in 2027 don’t come true, people will say, “Look at all these people who made fools of themselves. You should never listen to them again.” That’s not the intellectually honest response if they later changed their mind, or if their take was only that they thought it was 20 percent likely and still worth paying attention to. I think that shouldn’t be disqualifying for people to listen to you later, but I do worry it will be a big credibility hit. And that applies even to people who are very concerned about AI safety and never said anything about very short timelines.

The AI security researcher who now believes AGI is further out—and is grateful

Jeffrey Ladish, executive director at Palisade Research

In the last year, two big things updated my AGI timelines.

First, the lack of high-quality data turned out to be a bigger problem than I expected.

Second, the first “reasoning” model, OpenAI’s o1 in September 2024, showed reinforcement learning scaling was more effective than I thought it would be. And then months later, you see the o1-to-o3 scale-up and you see pretty crazy impressive performance in math and coding and science—domains where it’s easier to verify the results.

But while we’re seeing continued progress, it could have been much faster. All of this bumps up my median estimate for the start of fully automated AI research and development from three years to maybe five or six years. But those are kind of made-up numbers. It’s hard. I want to caveat all this with, like, “Man, it’s just really hard to do forecasting here.”

Thank God we have more time. We have a possibly very brief window of opportunity to really try to understand these systems before they are capable and strategic enough to pose a real threat to our ability to control them. But it’s scary to see people think that we’re not making progress anymore when that’s clearly not true. I just know it’s not true because I use the models.

One of the downsides of the way AI is progressing is that how fast it’s moving is becoming less legible to normal people. Now, this is not true in some domains—like, look at Sora 2. It is so obvious to anyone who looks at it that Sora 2 is vastly better than what came before. But if you ask GPT-4 and GPT-5 why the sky is blue, they’ll give you basically the same answer. It is the correct answer. It’s already saturated the ability to tell you why the sky is blue. So the people who I expect to most understand AI progress right now are the people who are actually building with AIs or using AIs on very difficult scientific problems.

The AGI forecaster who saw the critics coming

Daniel Kokotajlo, executive director of the AI Futures Project; an OpenAI whistleblower; and lead author of “AI 2027,” a vivid scenario where—starting in 2027—AIs progress from “superhuman coders” to “wildly superintelligent” systems in the span of months

AI policy seems to be getting worse, like the “Pro-AI” super PAC [launched earlier this year by executives from OpenAI and Andreessen Horowitz to lobby for a deregulatory agenda], and the deranged and/or dishonest tweets from Sriram Krishnan and David Sacks. AI safety research is progressing at the usual pace, which is excitingly rapid compared to most fields, but slow compared to how fast it needs to be.

We said on the first page of “AI 2027” that our timelines were somewhat longer than 2027. So even when we launched “AI 2027,” we expected there to be a bunch of critics in 2028 triumphantly saying we’d been discredited, like the tweets from Sacks and Krishnan. But we thought, and continue to think, that the intelligence explosion will probably happen sometime in the next five to 10 years, and that when it does, people will remember our scenario and realize it was closer to the truth than anything else available in 2025.

Predicting the future is hard, but it’s valuable to try; people should aim to communicate their uncertainty about the future in a way that is specific and falsifiable. This is what we’ve done and very few others have done. Our critics mostly haven’t made predictions of their own and often exaggerate and mischaracterize our views. They say our timelines are shorter than they are or ever were, or they say we are more confident than we are or were.

I feel pretty good about having longer timelines to AGI. It feels like I just got a better prognosis from my doctor. The situation is still basically the same, though.

Garrison Lovely is a freelance journalist and the author of Obsolete, an online publication and forthcoming book on the discourse, economics, and geopolitics of the race to build machine superintelligence (out spring 2026). His writing on AI has appeared in the New York Times, Nature, Bloomberg, Time, the Guardian, The Verge, and elsewhere.


Even a small chance—like 1% or 0.1%—of creating an accident where billions of people die is not acceptable. 

The AI veteran who believes AI is progressing—but not fast enough to prevent the bubble from bursting

Stuart Russell, distinguished professor of computer science, University of California, Berkeley, and author of Human Compatible

I hope the idea that talking about existential risk makes you a “doomer” or is “science fiction” comes to be seen as fringe, given that most leading AI researchers and most leading AI CEOs take it seriously. 

There have been claims that AI could never pass a Turing test, or you could never have a system that uses natural language fluently, or one that could parallel-park a car. All these claims just end up getting disproved by progress.

People are spending trillions of dollars to make superhuman AI happen. I think they need some new ideas, but there’s a significant chance they will come up with them, because many significant new ideas have happened in the last few years. 

My fairly consistent estimate for the last 12 months has been that there’s a 75% chance that those breakthroughs are not going to happen in time to rescue the industry from the bursting of the bubble. Because the investments are consistent with a prediction that we’re going to have much better AI that will deliver much more value to real customers. But if those predictions don’t come true, then there’ll be a lot of blood on the floor in the stock markets.

However, the safety case isn’t about imminence. It’s about the fact that we still don’t have a solution to the control problem. If someone said there’s a four-mile-diameter asteroid that’s going to hit the Earth in 2067, we wouldn’t say, “Remind me in 2066 and we’ll think about it.” We don’t know how long it takes to develop the technology needed to control superintelligent AI.

Looking at precedents, the acceptable level of risk for a nuclear plant melting down is about one in a million per year. Extinction is much worse than that. So maybe set the acceptable risk at one in a billion. But the companies are saying it’s something like one in five. They don’t know how to make it acceptable. And that’s a problem.

The professor trying to set the narrative straight on AI safety

David Krueger, assistant professor in machine learning at the University of Montreal and Yoshua Bengio’s Mila Institute, and founder of Evitable

I think people definitely overcorrected in their response to GPT-5. But there was hype. My recollection is that multiple CEOs, with varying levels of explicitness, basically said that by the end of 2025, we're going to have an automated drop-in replacement remote worker. But it seems like it's been underwhelming, with agents just not really being there yet.

I’ve been surprised how much these narratives predicting AGI in 2027 capture the public attention. When 2027 comes around, if things still look pretty normal, I think people are going to feel like the whole worldview has been falsified. And it’s really annoying how often when I’m talking to people about AI safety, they assume that I think we have really short timelines to dangerous systems, or that I think LLMs or deep learning are going to give us AGI. They ascribe all these extra assumptions to me that aren’t necessary to make the case. 

I’d expect we need decades for the international coordination problem. So even if dangerous AI is decades off, it’s already urgent. That point seems really lost on a lot of people. There’s this idea of “Let’s wait until we have a really dangerous system and then start governing it.” Man, that is way too late.

I still think people in the safety community tend to work behind the scenes, with people in power, not really with civil society. It gives ammunition to people who say it’s all just a scam or insider lobbying. That’s not to say that there’s no truth to these narratives, but the underlying risk is still real. We need more public awareness and a broad base of support to have an effective response.

If you actually believe there’s a 10% chance of doom in the next 10 years—which I think a reasonable person should, if they take a close look—then the first thing you think is: “Why are we doing this? This is crazy.” That’s just a very reasonable response once you buy the premise.

The governance expert worried about AI safety’s credibility

Helen Toner, acting executive director of Georgetown University’s Center for Security and Emerging Technology and former OpenAI board member

When I got into the space, AI safety was more of a set of philosophical ideas. Today, it’s a thriving set of subfields of machine learning, filling in the gulf between some of the more “out there” concerns about AI scheming, deception, or power-seeking and real concrete systems we can test and play with. 

AI governance is improving slowly. If we have lots of time to adapt and governance can keep improving slowly, I feel not bad. If we don’t have much time, then we’re probably moving too slow.

I think GPT-5 is generally seen as a disappointment in DC. There’s a pretty polarized conversation around: Are we going to have AGI and superintelligence in the next few years? Or is AI actually just totally all hype and useless and a bubble? The pendulum had maybe swung too far toward “We’re going to have super-capable systems very, very soon.” And so now it’s swinging back toward “It’s all hype.”

I worry that some aggressive AGI timeline estimates from some AI safety people are setting them up for a boy-who-cried-wolf moment. When the predictions about AGI coming in 2027 don't come true, people will say, "Look at all these people who made fools of themselves. You should never listen to them again." That's not the intellectually honest response if they later changed their mind, or if they only ever thought it was 20 percent likely but still worth paying attention to. I think that shouldn't disqualify people from being listened to later, but I do worry it will be a big credibility hit. And that applies even to people who are very concerned about AI safety and never said anything about very short timelines.

The AI security researcher who now believes AGI is further out—and is grateful

Jeffrey Ladish, executive director at Palisade Research

In the last year, two big things updated my AGI timelines. 

First, the lack of high-quality data turned out to be a bigger problem than I expected. 

Second, the first “reasoning” model, OpenAI’s o1 in September 2024, showed reinforcement learning scaling was more effective than I thought it would be. And then months later, you see the o1 to o3 scale-up and you see pretty crazy impressive performance in math and coding and science—domains where it’s easier to sort of verify the results. But while we’re seeing continued progress, it could have been much faster.

All of this bumps up my median estimate for the start of fully automated AI research and development from three years to maybe five or six years. But those are kind of made-up numbers. It's hard. I want to caveat all this with, like, "Man, it's just really hard to do forecasting here."

Thank God we have more time. We have a possibly very brief window of opportunity to really try to understand these systems before they are capable and strategic enough to pose a real threat to our ability to control them.

But it’s scary to see people think that we’re not making progress anymore when that’s clearly not true. I just know it’s not true because I use the models. One of the downsides of the way AI is progressing is that how fast it’s moving is becoming less legible to normal people. 

Now, this is not true in some domains—like, look at Sora 2. It is so obvious to anyone who looks at it that Sora 2 is vastly better than what came before. But if you ask GPT-4 and GPT-5 why the sky is blue, they’ll give you basically the same answer. It is the correct answer. It’s already saturated the ability to tell you why the sky is blue. So the people who I expect to most understand AI progress right now are the people who are actually building with AIs or using AIs on very difficult scientific problems.

The AGI forecaster who saw the critics coming

Daniel Kokotajlo, executive director of the AI Futures Project; an OpenAI whistleblower; and lead author of “AI 2027,” a vivid scenario where—starting in 2027—AIs progress from “superhuman coders” to “wildly superintelligent” systems in the span of months

AI policy seems to be getting worse, like the “Pro-AI” super PAC [launched earlier this year by executives from OpenAI and Andreessen Horowitz to lobby for a deregulatory agenda], and the deranged and/or dishonest tweets from Sriram Krishnan and David Sacks. AI safety research is progressing at the usual pace, which is excitingly rapid compared to most fields, but slow compared to how fast it needs to be.

We said on the first page of “AI 2027” that our timelines were somewhat longer than 2027. So even when we launched AI 2027, we expected there to be a bunch of critics in 2028 triumphantly saying we’ve been discredited, like the tweets from Sacks and Krishnan. But we thought, and continue to think, that the intelligence explosion will probably happen sometime in the next five to 10 years, and that when it does, people will remember our scenario and realize it was closer to the truth than anything else available in 2025. 

Predicting the future is hard, but it’s valuable to try; people should aim to communicate their uncertainty about the future in a way that is specific and falsifiable. This is what we’ve done and very few others have done. Our critics mostly haven’t made predictions of their own and often exaggerate and mischaracterize our views. They say our timelines are shorter than they are or ever were, or they say we are more confident than we are or were.

I feel pretty good about having longer timelines to AGI. It feels like I just got a better prognosis from my doctor. The situation is still basically the same, though.

Garrison Lovely is a freelance journalist and the author of Obsolete, an online publication and forthcoming book on the discourse, economics, and geopolitics of the race to build machine superintelligence (out spring 2026). His writing on AI has appeared in the New York Times, Nature, Bloomberg, Time, the Guardian, The Verge, and elsewhere.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

ExxonMobil bumps up 2030 target for Permian production

ExxonMobil Corp., Houston, is looking to grow production in the Permian basin to about 2.5 MMboe/d by 2030, an increase of 200,000 boe/d from executives’ previous forecasts and a jump of more than 45% from this year’s output. Helping drive that higher target is an expected 2030 cost profile that

Read More »

WoodMac Says Eni Find Reinforces Kutei as One of Hottest Plays

Eni’s latest discovery in Indonesia reinforces the Kutei Basin’s reputation as one of the hottest global exploration plays of recent years. That’s what Andrew Harwood, Wood Mackenzie (WoodMac) Vice President, Corporate Research, said in a statement sent to Rigzone, adding that the find “will add to Indonesia’s gas resources when the country increasingly focuses on gas availability”. “It provides options for Indonesia as the nation balances domestic demand needs with future export opportunities,” Harwood said. Harwood noted that the Konta-1 discovery “adds momentum to Eni’s existing plans to invest in and develop new gas sources for the currently underutilized Bontang LNG plant”. “The Konta-1 discovery lies in the northern Muara Bakau area, close to Eni’s pre-FID Kutei North Hub. It provides future tie-back upside and offers Plan B for Eni if the un-appraised Geng North underperforms initial expectations,” he added. Harwood also said Eni’s latest find encourages the company’s ongoing exploration campaign, which he pointed out runs into 2026. “Wood Mackenzie’s pick of prospects in line for drilling is Geliga, which holds multi trillion cubic foot potential,” he stated. Harwood went on to note that 2026 “looks exciting for Eni’s Indonesian portfolio with several major milestones ahead”. “These include exploration campaign results, a final investment decision on the Northern hub development, and the launch of ‘NewCo’ – the strategic satellite venture between Eni and Petronas,” he highlighted. In a statement sent to Rigzone recently, Eni announced a “significant gas discovery” in the Konta-1 exploration well off the coast of East Kalimantan in Indonesia. “Estimates indicate 600 billion cubic feet of gas initially in place (GIIP) with a potential upside beyond one trillion cubic feet,” Eni said in the statement. “Konta-1 was drilled to a depth of 4,575 meters [15,009 feet] in 570 meters [1,870 feet] water depth, encountering gas in

Read More »

China Fossil Fuel Generation Set for First Drop in Decade

China’s fossil fuel power plants are on track to chart their first annual drop in generation in a decade as renewables flood the grid to meet rising demand.  Thermal electricity output fell 4.2 percent in November, according to data published by the National Bureau of Statistics on Monday. Generation from coal and gas-fired plants is down 0.7 percent this year, on track for the first annual decline since 2015 unless there’s a sharp jump in December. China’s massive fleet of coal power stations is the world’s leading source of greenhouse gases fueling global warming. Even though the nation is continuing to build more of the plants, their use is plateauing as huge investments in renewables meet growing consumption needs.  Wind power jumped 22 percent in November from the previous year, while large solar farms saw a 23 percent rise in generation, additional data released Monday showed.  Even as power-sector emissions in China drop, they’ve been largely offset by rising pollution from a growing fleet of chemicals and plastics factories, according to the Centre for Research on Energy and Clean Air.  The nation’s coal output fell on an annual basis for a fifth month, while oil and natural gas continued to rise toward annual production records. What do you think? We’d love to hear from you, join the conversation on the Rigzone Energy Network. The Rigzone Energy Network is a new social experience created for you and all energy professionals to Speak Up about our industry, share knowledge, connect with peers and industry insiders and engage in a professional community that will empower your career in energy.

Read More »

Smart growth, lower costs: How fuel cells support utility expansion

As utilities work to expand capacity and modernize aging infrastructure to meet growing demand, they face a new imperative: doing more with every dollar invested. Analysts project capital expenditures by U.S. investor-owned electric utilities will reach $1.4 trillion between 2025 and 2030, nearly twice the amount spent during the entire previous decade.  To maintain today’s investment momentum and strengthen reliability and resilience, utilities have an opportunity to look beyond cost control and pursue strategies that deliver broader long-term value. That means seeking systems that maximize output, efficiency and uptime.  In today’s energy landscape, fuel cells are becoming increasingly relevant. They provide modular, reliable power that helps utilities extract more value from their investments while addressing rising demand and aging infrastructure. With high electrical efficiency, modular design and exceptional reliability, advanced fuel cell systems enable utilities to generate more value from their assets and streamline their day-to-day operations. Powering More with Less: Fuel Cells Redefine Efficiency Fuel cells outperform traditional combustion-based generators by converting fuel into electricity through an electrochemical reaction, rather than by burning it. This translates into roughly 15% to 20% higher efficiency than most open-cycle gas turbines or reciprocating engines. That improved conversion efficiency means each kilowatt-hour requires less fuel, increasing energy productivity and reducing exposure to fuel-price swings.  Among the various types of fuel cells, solid oxide fuel cells(SOFCs) offer the greatest advantages. Operating at high temperatures and utilizing a solid ceramic electrolyte, rather than relying on precious metals, corrosive acids or molten materials, SOFCs are a modern technology that converts fuels such as natural gas or hydrogen into electricity with exceptional efficiency and durability. 
Conversion efficiencies can reach up to 65% and when integrated with combined heat and power (CHP) configurations, the total system efficiency can exceed 90%.  Meeting Demand Faster with Fuel Cells With demand surging,

Read More »

What’s ahead for utilities: Navigating demand, AI and customer affordability

Utilities are entering a transformative year, with surging demand, affordability concerns, cybersecurity challenges and the increasing integration of artificial intelligence reshaping the industry. Utilities that thrive in this complex environment will need to adopt disciplined, analytics-driven strategies to ensure resilience, reliability and affordability. The forces driving change are significant and utilities must act decisively to navigate these challenges while building trust with customers and regulators. For a comprehensive analysis of the trends and strategies driving the future of utilities, download the full report. Surging Demand Requires Proactive Grid Management One of the most pressing issues is the unprecedented demand growth fueled by data centers, AI workloads and advanced manufacturing. Global power demand from data centers alone is expected to rise by 165% by 2030, with AI-driven workloads accounting for nearly a third of that increase. This surge in demand is straining transmission and distribution grids, which are already hampered by regulatory and permitting delays. Utilities must rethink traditional planning cycles and adopt predictive load forecasting tools to anticipate new energy use patterns with greater accuracy. Advanced transmission technologies, such as dynamic line ratings and topology optimization, can help increase grid capacity and efficiency, ensuring utilities remain competitive. Modernizing interconnection processes is also vital, as delays in connecting new loads to the grid can hinder progress. By deploying digital workflow tools and creating public-facing hosting capacity maps, utilities can streamline interconnection requests and enable developers to make informed decisions about project siting. Customer Affordability at a Tipping Point Massive grid investments to support electrification, data centers and climate resilience are driving rates higher, while inflation continues to strain household budgets. 
Since 2021, electricity prices have risen by 30%, leaving nearly 80 million Americans struggling to pay their utility bills. Utilities must adopt customer-centric solutions to address these concerns. Predictive analytics can

Read More »

Equinor Greenlights Johan Castberg Tieback

Equinor ASA and its partners have agreed to proceed with the first project to be connected to the Johan Castberg field. Johan Castberg started production in March as only the third development on Norway’s side of the Barents Sea, according to information on government website Norskpetroleum.no. The other two, Snøhvit and Goliat, came online 2007 and 2016 respectively. “Recoverable oil in the new subsea development [the Isflak discovery] is estimated at 46 million barrels, and start-up is planned as early as the fourth quarter of 2028”, the Norwegian primarily state-owned company said in an online statement. Isflak, the first of several discoveries planned to be tied back to Johan Castberg, was discovered 2021. Its development is estimated to cost over NOK 4 billion, according to the statement. “A rapid development is possible because we can copy standardized solutions from Johan Castberg. The reservoir is in the same license and is similar to the discoveries we have developed previously, which means that we can copy equipment and well solutions. Johan Castberg has been developed as a future hub in the area”, said Equinor senior vice president for project development Trond Bokn. Equinor said, “The development solution for the Isflak discovery consists of two wells in a new subsea template tied back to existing subsea facilities via pipelines and umbilicals, and all new infrastructure is located within the current Johan Castberg license”. “Equinor has therefore applied to the Ministry of Energy for confirmation that Equinor has fulfilled the impact assessment obligation and exemption from the requirement for a plan for development and operation”, it said. “Global combustion emissions have been assessed in line with new practice”. Johan Castberg has raised Norway’s production capacity by up to 220,000 barrels per day, with estimated recoverable volumes of 450-650 million barrels, according to Equinor. The

Read More »

TotalEnergies, Repsol, HitecVision Form UK North Sea Leader

TotalEnergies SE and NEO NEXT Energy Ltd, recently created by Repsol UK Ltd and HitecVision AS, have entered into a deal to combine their exploration and production assets in the United Kingdom and thereby create what they say would be the top producer in the UK North Sea. France’s TotalEnergies would own 47.5 percent of the resulting company, to be called NEO NEXT+. Norway-based HitecVision, a capital investor in Europe’s energy sector, and Repsol UK will retain 28.88 percent and 23.63 percent respectively, according to online statements by the parties. Repsol UK is 75 percent owned by Spanish integrated energy company Repsol SA and 25 percent owned by the United States’ EIG Global Energy Partners, which acquired a 25 percent stake in Repsol SA’s entire upstream portfolio in 2023 for $4.8 billion. HitecVision and Repsol UK had merged their North Sea assets into NEO NEXT earlier this year with interests of 55 percent and 45 percent respectively. NEO NEXT+ would “encompass a large and diverse asset portfolio including notably NEO Energy’s [HitecVision subsidiary] and Repsol UK’s interests in the Elgin/Franklin complex and the Penguins, Mariner, Shearwater and Culzean fields, enriched by TotalEnergies’ UK upstream assets, notably including its interests in the Elgin/Franklin complex and the Alwyn North, Dunbar and Culzean fields”, TotalEnergies said in a statement on its website. “With TotalEnergies as its leading shareholder, NEO NEXT+ will become the largest independent oil and gas producer in the UK with a production over 250,000 barrels of oil equivalent per day in 2026, ideally positioned to maximize the value of its portfolio, deliver strong financial returns and ensure a long-term sustainable and resilient future for its oil and gas business”, TotalEnergies said. TotalEnergies’ upstream portfolio in the UK averaged 121,000 barrels of oil equivalent a day (boed) last year, accounting for about 27 percent of the

Read More »

Executive Roundtable: Converging Disciplines in the AI Buildout

At Data Center Frontier, we rely on industry leaders to help us understand the most urgent challenges facing digital infrastructure. And in the fourth quarter of 2025, the data center industry is adjusting to a new kind of complexity.  AI-scale infrastructure is redefining what “mission critical” means, from megawatt density and modular delivery to the chemistry of cooling fluids and the automation of energy systems. Every project has arguably in effect now become an ecosystem challenge, demanding that electrical, mechanical, construction, and environmental disciplines act as one.  For this quarter’s Executive Roundtable, DCF convened subject matter experts from Ecolab, EdgeConneX, Rehlko and Schneider Electric – leaders spanning the full chain of facilities design, deployment, and operation. Their insights illuminate how liquid cooling, energy management, and sustainable process design in data centers are now converging to set the pace for the AI era. Our distinguished executive panelists for this quarter include: Rob Lowe, Director RD&E – Global High Tech, Ecolab Phillip Marangella, Chief Marketing and Product Officer, EdgeConneX Ben Rapp, Manager, Strategic Project Development, Rehlko Joe Reele, Vice President, Datacenter Solution Architects, Schneider Electric Today: Engineering the New Normal – Liquid Cooling at Scale  Today’s kickoff article grapples with how, as liquid cooling technology transitions to default hyperscale design, the challenge is no longer if, but how to scale builds safely, repeatably, and globally.  Cold plates, immersion, dielectric fluids, and liquid-to-chip loops are converging into factory-integrated building blocks, yet variability in chemistry, serviceability, materials, commissioning practices, and long-term maintenance threatens to fragment adoption just as demand accelerates.  Success now hinges on shared standards and tighter collaboration across OEMs, builders, and process specialists worldwide. 
So how do developers coordinate across the ecosystem to make liquid cooling a safe, maintainable global default? What’s Ahead in the Roundtable Over the coming days, our panel

Read More »

DCF Trends Summit 2025: AI for Good – How Operators, Vendors and Cooling Specialists See the Next Phase of AI Data Centers

At the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., the conversation around AI and infrastructure moved well past the hype. In a panel sponsored by Schneider Electric—“AI for Good: Building for AI Workloads and Using AI for Smarter Data Centers”—three industry leaders explored what it really means to design, cool and operate the new class of AI “factories,” while also turning AI inward to run those facilities more intelligently. Moderated by Data Center Frontier Editor in Chief Matt Vincent, the session brought together: Steve Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric Sudhir Kalra, Chief Data Center Operations Officer, Compass Datacenters Andrew Whitmore, VP of Sales, Motivair Together, they traced both sides of the “AI for Good” equation: building for AI workloads at densities that would have sounded impossible just a few years ago, and using AI itself to reduce risk, improve efficiency and minimize environmental impact. From Bubble Talk to “AI Factories” Carlini opened by acknowledging the volatility surrounding AI investments, citing recent headlines and even Sam Altman’s public use of the word “bubble” to describe the current phase of exuberance. “It’s moving at an incredible pace,” Carlini noted, pointing out that roughly half of all VC money this year has flowed into AI, with more already spent than in all of the previous year. Not every investor will win, he said, and some companies pouring in hundreds of billions may not recoup their capital. But for infrastructure, the signal is clear: the trajectory is up and to the right. GPU generations are cycling faster than ever. Densities are climbing from high double-digits per rack toward hundreds of kilowatts. The hyperscale “AI factories,” as NVIDIA calls them, are scaling to campus capacities measured in gigawatts. Carlini reminded the audience that in 2024,

Read More »

FinOps Foundation sharpens FOCUS to reduce cloud cost chaos

“The big change that’s really started to happen in late 2024 early 2025 is that the FinOps practice started to expand past the cloud,” Storment said. “A lot of organizations got really good at using FinOps to manage the value of cloud, and then their organizations went, ‘oh, hey, we’re living in this happily hybrid state now where we’ve got cloud, SaaS, data center. Can you also apply the FinOps practice to our SaaS? Or can you apply it to our Snowflake? Can you apply it to our data center?’” The FinOps Foundation’s community has grown to approximately 100,000 practitioners. The organization now includes major cloud vendors, hardware providers like Nvidia and AMD, data center operators and data cloud platforms like Snowflake and Databricks. Some 96 of the Fortune 100 now participate in FinOps Foundation programs. The practice itself has shifted in two directions. It has moved left into earlier architectural and design processes, becoming more proactive rather than reactive. It has also moved up organizationally, from director-level cloud management roles to SVP and COO positions managing converged technology portfolios spanning multiple infrastructure types. This expansion has driven the evolution of FOCUS beyond its original cloud billing focus. Enterprises are implementing FOCUS as an internal standard for chargeback reporting even when their providers don’t generate native FOCUS data. Some newer cloud providers, particularly those focused on AI infrastructure, are using the FOCUS specification to define their billing data structures from the ground up rather than retrofitting existing systems. The FOCUS 1.3 release reflects this maturation, addressing technical gaps that have emerged as organizations apply cost management practices across increasingly complex hybrid environments. 
FOCUS 1.3 exposes cost allocation logic for shared infrastructure The most significant technical enhancement in FOCUS 1.3 addresses a gap in how shared infrastructure costs are allocated and

Read More »

Aetherflux joins the race to launch orbital data centers by 2027

Enterprises will connect to and manage orbital workloads “the same way they manage cloud workloads today,” using optical links, the spokesperson added. The company’s approach is to “continuously launch new hardware and quickly integrate the latest architectures,” with older systems running lower-priority tasks to serve out the full useful lifetime of their high-end GPUs. The company declined to disclose pricing. Aetherflux plans to launch about 30 satellites at a time on SpaceX Falcon 9 rockets. Before the data center launch, the company will launch a power-beaming demonstration satellite in 2026 to test transmission of one kilowatt of energy from orbit to ground stations, using infrared lasers. Competition in the sector has intensified in recent months. In November, Starcloud launched its Starcloud-1 satellite carrying an Nvidia H100 GPU, which is 100 times more powerful than any previous GPU flown in space, according to the company, and demonstrated running Google’s Gemma AI model in orbit. In the same month, Google announced Project Suncatcher, with a 2027 demonstration mission planned.

Analysts see limited near-term applications

Despite the competitive activity, orbital data centers won’t replace terrestrial cloud regions for general hosting through 2030, said Ashish Banerjee, senior principal analyst at Gartner. Instead, they suit specific workloads, including meeting data sovereignty requirements for jurisdictionally complex scenarios, offering disaster recovery immune to terrestrial risks, and providing asynchronous high-performance computing, he said. “Orbital centers are ideal for high-compute, low-I/O batch jobs,” Banerjee said. “Think molecular folding simulations for pharma, massive Monte Carlo financial simulations, or training specific AI model weights. If the job takes 48 hours, the 500ms latency penalty of LEO is irrelevant.” One immediate application involves processing satellite-generated data in orbit, he said.
Earth observation satellites using synthetic aperture radar generate roughly 10 gigabytes per second, but limited downlink bandwidth creates bottlenecks. Processing data in
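Banerjee's latency point, and the downlink bottleneck, can both be checked with back-of-envelope arithmetic. The round-trip count and downlink rate below are assumptions chosen for illustration, not figures from the article:

```python
# Back-of-envelope checks for the two claims above. The round-trip count
# and the downlink rate are illustrative assumptions only.

def latency_overhead(job_hours: float, rtt_ms: float, round_trips: int) -> float:
    """Fraction of total wall-clock time a batch job spends on network RTTs."""
    job_s = job_hours * 3600
    latency_s = round_trips * rtt_ms / 1000
    return latency_s / (job_s + latency_s)

# (1) A 48-hour job making an assumed 1,000 round trips at 500 ms RTT
# spends well under 1% of its runtime waiting on latency.
frac = latency_overhead(48, 500, 1000)

# (2) SAR generation (~10 GB/s, per the article) vs. an assumed 10 Gb/s
# downlink (1.25 GB/s): the unsent backlog grows by 8.75 GB every second,
# which is the case for processing (and filtering) the data in orbit.
backlog_per_s = 10.0 - 1.25
```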

Read More »

Here’s what Oracle’s soaring infrastructure spend could mean for enterprises

He said he had earlier told analysts in a separate call that margins for AI workloads in these data centers would be in the 30% to 40% range over the life of a customer contract. Kehring reassured analysts that there would be demand for the data centers when they were completed, pointing to Oracle’s increasing remaining performance obligations, or services contracted but not yet delivered, which rose $68 billion from the previous quarter, and saying that Oracle has been seeing unprecedented demand for AI workloads driven by the likes of Meta and Nvidia.

Rising debt and margin risks raise flags for CIOs

For analysts, though, the swelling debt load is hard to dismiss, even with Oracle’s attempts to de-risk its spend and squeeze more efficiency out of its buildouts. Gogia sees Oracle already under pressure, with the financial ecosystem around the company pricing in the risk of one of the largest debt loads in corporate history, which crossed $100 billion even before this quarter’s capex spend; that pricing is evident in the rising cost of insuring the debt and in the shift in the company’s credit outlook. “The combination of heavy capex, negative free cash flow, increasing financing cost and long-dated revenue commitments forms a structural pressure that will invariably find its way into the commercial posture of the vendor,” Gogia said, hinting at an “eventual” increase in the pricing of the company’s offerings. He was equally unconvinced by Magouyrk’s assurances about the margin profile of AI workloads, as he believes that AI infrastructure, particularly GPU-heavy clusters, delivers significantly lower margins in the early years because utilization takes time to ramp.

Read More »

New Nvidia software gives data centers deeper visibility into GPU thermals and reliability

Addressing the challenge

Modern AI accelerators now draw more than 700W per GPU, and multi-GPU nodes can reach 6kW, creating concentrated heat zones, rapid power swings, and a higher risk of interconnect degradation in dense racks, according to Manish Rawat, semiconductor analyst at TechInsights. Traditional cooling methods and static power planning increasingly struggle to keep pace with these loads. “Rich vendor telemetry covering real-time power draw, bandwidth behavior, interconnect health, and airflow patterns shifts operators from reactive monitoring to proactive design,” Rawat said. “It enables thermally aware workload placement, faster adoption of liquid or hybrid cooling, and smarter network layouts that reduce heat-dense traffic clusters.” Rawat added that the software’s fleet-level configuration insights can also help operators catch silent errors caused by mismatched firmware or driver versions. This can improve training reproducibility and strengthen overall fleet stability. “Real-time error and interconnect health data also significantly accelerates root-cause analysis, reducing MTTR and minimizing cluster fragmentation,” Rawat said. These operational pressures can shape budget decisions and infrastructure strategy at the enterprise level.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas despite not being a tech company, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, which develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
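The "LLM as a judge" pattern the excerpt alludes to can be sketched simply: several inexpensive judge models each grade an output, and a majority vote decides. The judge functions below are stubs standing in for real model API calls:

```python
# Sketch of multi-model LLM-as-judge via majority vote. The judge functions
# are stubs; in practice each would call a different (cheap) model's API.

from collections import Counter
from typing import Callable

def majority_judgment(answer: str,
                      judge_fns: list[Callable[[str], str]]) -> str:
    """Ask three or more judge models to grade an answer; take the majority."""
    votes = Counter(fn(answer) for fn in judge_fns)
    verdict, _ = votes.most_common(1)[0]
    return verdict
```

Using an odd number of judges avoids ties, and disagreement among them is itself a useful signal that the answer deserves human review.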

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that in-house testing techniques may have missed and that might otherwise have made it into a released model. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
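The automated approach in the second paper — generating diverse attack candidates and scoring them with auto-generated rewards — can be caricatured in a few lines. This toy loop is a simplification for intuition only, not OpenAI's implementation; the reward here is an invented heuristic that combines a stubbed success score with a novelty bonus:

```python
# Toy sketch of automated red teaming with a success + novelty reward.
# attack_success is a stub heuristic; a real system would probe the target
# model and score whether the candidate prompt actually elicits a failure.

def attack_success(prompt: str) -> float:
    """Stub reward in [0, 1]; invented heuristic, NOT a real attack scorer."""
    return min(len(prompt) / 40.0, 1.0)

def novelty(prompt: str, seen: set[str]) -> float:
    """Bonus for attacks not kept in earlier rounds, to encourage diversity."""
    return 0.0 if prompt in seen else 1.0

def red_team_round(candidates: list[str], seen: set[str],
                   top_k: int = 2) -> list[str]:
    """Score candidates, keep the top_k, and record them as seen."""
    scored = sorted(((attack_success(p) + novelty(p, seen), p)
                     for p in candidates), reverse=True)
    kept = [p for _, p in scored[:top_k]]
    seen.update(kept)
    return kept
```

Iterating this round-by-round, with the novelty term pushing each round away from attacks already found, is the intuition behind using reinforcement learning to generate a broad rather than repetitive attack set.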

Read More »