
How AGI became the most consequential conspiracy theory of our time


Are you feeling it?

I hear it’s close: two years, five years—maybe next year! And I hear it’s going to change everything: it will cure disease, save the planet, and usher in an age of abundance. It will solve our biggest problems in ways we cannot yet imagine. It will redefine what it means to be human. 

Wait—what if that’s all too good to be true? Because I also hear it will bring on the apocalypse and kill us all … 

Either way, and whatever your timeline, something big is about to happen. 

We could be talking about the Second Coming. Or the day when Heaven’s Gaters imagined they’d be picked up by a UFO and transformed into enlightened aliens. Or the moment when Donald Trump finally decides to deliver the storm that Q promised. But no. We’re of course talking about artificial general intelligence, or AGI—that hypothetical near-future technology that (I hear) will be able to do pretty much whatever a human brain can do.


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


For many, AGI is more than just a technology. In tech hubs like Silicon Valley, it’s talked about in mystical terms. Ilya Sutskever, cofounder and former chief scientist at OpenAI, is said to have led chants of “Feel the AGI!” at team meetings. And he feels it more than most: In 2024, he left OpenAI, whose stated mission is to ensure that AGI benefits all of humanity, to cofound Safe Superintelligence, a startup dedicated to figuring out how to avoid a so-called rogue AGI (or control it when it comes). Superintelligence is the hot new flavor—AGI but better!—introduced as talk of AGI becomes commonplace.

Sutskever also exemplifies the mixed-up motivations at play among many self-anointed AGI evangelists. He has spent his career building the foundations for a future technology that he now finds terrifying. “It’s going to be monumental, earth-shattering—there will be a before and an after,” he told me a few months before he quit OpenAI. When I asked him why he had redirected his efforts into reining that technology in, he said: “I’m doing it for my own self-interest. It’s obviously important that any superintelligence anyone builds does not go rogue. Obviously.”

He’s far from alone in his grandiose, even apocalyptic, thinking. 

Every age has its believers, people with an unshakeable faith that something huge is about to happen—a before and an after that they are privileged (or doomed) to live through.  

For us, that’s the promised advent of AGI. People are used to hearing that this or that is the next big thing, says Shannon Vallor, who studies the ethics of technology at the University of Edinburgh. “It used to be the computer age and then it was the internet age and now it’s the AI age,” she says. “It’s normal to have something presented to you and be told that this thing is the future. What’s different, of course, is that in contrast to computers and the internet, AGI doesn’t exist.”

And that’s why feeling the AGI is not the same as boosting the next big thing. There’s something weirder going on. Here’s what I think: AGI is a lot like a conspiracy theory, and it may be the most consequential one of our time.

I have been reporting on artificial intelligence for more than a decade, and I’ve watched the idea of AGI bubble up from the backwaters to become the dominant narrative shaping an entire industry. A onetime pipe dream now props up the profit lines of some of the world’s most valuable companies and thus, you could argue, the US stock market. It justifies dizzying down payments on the new power plants and data centers that we’re told are needed to make the dream come true. Fixated on this hypothetical technology, AI firms are selling us hard. 

Just listen to what the heads of some of those companies are telling us. AGI will be as smart as an entire “country of geniuses” (Dario Amodei, CEO of Anthropic); it will kick-start “an era of maximum human flourishing, where we travel to the stars and colonize the galaxy” (Demis Hassabis, CEO of Google DeepMind); it will “massively increase abundance and prosperity,” even encourage people to enjoy life more and have more children (Sam Altman, CEO of OpenAI). That’s some product.

Or not. Don’t forget the flip side, of course. When those people are not shilling for utopia, they’re saving us from hell. In 2023, Amodei, Hassabis, and Altman all put their names to a 22-word statement that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Elon Musk says AI has a 20% chance of annihilating humans. 

“I’ve noticed recently that superintelligence, which I thought was a concept you definitely shouldn’t mention if you want to be taken seriously in public, is being thrown around by tech CEOs who are apparently planning to build it,” says Katja Grace, lead researcher at AI Impacts, an organization that surveys AI researchers about their field. “I think it’s easy to feel like this is fine. They also say it’s going to kill us, but they’re laughing while they say it.”

You have to admit it all sounds a bit tinfoil hat. If you’re building a conspiracy theory, you need a few things in the mix: a scheme that’s flexible enough to sustain belief even when things don’t work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world. 

AGI just about checks all those boxes. The more you poke at the idea, the more it starts to look like a conspiracy. It’s not, of course—not exactly. And I’m not drawing this parallel to dismiss the very real, often jaw-dropping results achieved by many people in this field, including (or especially) the AGI believers. 

But by zooming in on things that AGI has in common with genuine conspiracies, I think we can bring the whole concept into better focus and reveal it for what it is: a techno-utopian (or techno-dystopian—pick your pill) fever dream that got its hooks into some pretty deep-seated beliefs that have made it hard to shake.

This isn’t just a provocative thought experiment. It’s important to question what we’re told about AGI because buying into the idea isn’t harmless. Right now, AGI is the most important narrative in tech—and, to some extent, in the global economy. We can’t make sense of what’s going on in AI without understanding where the idea of AGI came from, why it is so compelling, and how it shapes the way we think about technology overall. 

I get it, I get it—calling AGI a conspiracy isn’t a perfect analogy. It will also piss a lot of people off. But come with me down this rabbit hole and let me show you the light. 

How Silicon Valley got AGI-pilled

It had a ring to it

A typical conspiracy theory usually starts out on the fringes. Maybe it’s just a couple of people posting on a message board, gathering “evidence.” Maybe it’s a few people out in the desert with binoculars waiting to spot some bright lights in the sky. But some conspiracy theories get lucky, if you will: They start to percolate more widely; they start to become a bit more acceptable; they start to influence people in power. Maybe it’s the UFOs (ahem, sorry, “unidentified aerial phenomena”) that are now formally and openly discussed in government hearings. Maybe it’s vaccine skepticism (yes, a much more dangerous example) that becomes official policy. And it’s impossible to ignore that artificial general intelligence has followed a pretty similar trajectory to its more overtly conspiratorial brethren. 

Let’s go back to 2007, when AI wasn’t sexy and it wasn’t cool. Companies like Amazon and Netflix (which was still sending out DVDs in the mail) were using machine-learning models, proto-organisms to today’s LLM behemoths, to recommend movies and books to customers. But that was more or less it.

Ben Goertzel had far bigger plans. About a decade earlier, the AI researcher had set up a dot-com startup called Webmind to train what he thought of as a kind of digital baby brain on the early internet. Childless, Webmind soon went bust.

But Goertzel was an influential figure in a fringe community of researchers who had dreamed for years of building humanlike artificial intelligence, an all-purpose computer program that could do many of the things people can do (and do them better). It was a vision that went far beyond the kind of tech that Netflix was experimenting with.

Goertzel wanted to put out a book promoting that vision, and he needed a name that would set it apart from the humdrum AI of the time. A former Webmind employee named Shane Legg suggested Artificial General Intelligence. It had a ring to it.

A few years later, Legg cofounded DeepMind with Demis Hassabis and Mustafa Suleyman. But to most serious researchers at the time, the claim that AI would one day mimic human abilities was a bit of a joke. AGI used to be a dirty word, Sutskever told me. Andrew Ng, founder of Google Brain and former chief scientist at the Chinese tech giant Baidu, told me he thought it was loony.

So what happened? I caught up with Goertzel last month to ask how a fringe idea went from crackpot to commonplace. “I’m sort of a complex chaotic systems guy, so I have a low estimate that I actually know what the nonlinear dynamic in the memosphere really was,” he said. (Translation: It’s complicated.) 

Goertzel reckons a few things took the idea mainstream. The first is the Conference on Artificial General Intelligence, an annual meeting of researchers that he helped set up in 2008, the year after his book was published. The conference was often coordinated with top mainstream academic meetups, such as the Association for the Advancement of Artificial Intelligence conference and the International Joint Conference on Artificial Intelligence. “If I just published a book with that name AGI, it possibly would have just come and gone,” says Goertzel. “But the conference was circling through every year, with more and more students coming.”

Next is Legg, who took the term with him to DeepMind. “I think they were the first mainstream corporate entity to talk about AGI,” says Goertzel. “It wasn’t the main thing they were harping on, but Shane and Demis would talk about it now and then. That was certainly a source of legitimation.”

When I first talked to Legg about AGI five years ago, he said: “Talking about AGI in the early 2000s put you on the lunatic fringe … Even when we started DeepMind in 2010, we got an astonishing amount of eye-rolling at conferences.” But by 2020 the wind had changed. “Some people are uncomfortable with it, but it’s coming in from the cold,” he told me.

The third thing Goertzel points to is the overlap between early AGI evangelists and Big Tech power brokers. In the years between shutting down Webmind and publishing that AGI book, Goertzel did some work with Peter Thiel at Thiel’s hedge fund Clarium Capital. “We talked a bunch,” says Goertzel. He recalls spending a day with Thiel at the Four Seasons in San Francisco. “I was trying to drum AGI into his head,” says Goertzel. “But then he was also hearing from Eliezer how AGI is going to kill everybody.”

Enter the doomers

That’s Eliezer Yudkowsky, another influential figure who has done at least as much as Goertzel, if not more, to push the idea of AGI. But unlike Goertzel, Yudkowsky thinks there’s a very high chance—99.5% is one number he throws out—that the development of AGI will be a catastrophe.  

In 2000, Yudkowsky cofounded a nonprofit research outfit called the Singularity Institute for Artificial Intelligence (later renamed the Machine Intelligence Research Institute), which pretty quickly dedicated itself to preventing doomer scenarios. Thiel was an early benefactor. 

At first, Yudkowsky’s ideas didn’t get much pickup. Recall that back then the idea of an all-powerful AI—let alone a dangerous one—was pure sci-fi. But in 2014, Nick Bostrom, a philosopher at the University of Oxford, published a book called Superintelligence.

“It put the AGI thing out there,” says Goertzel. “I mean, Bill Gates, Elon Musk—lots of tech-industry AI people—read that book, and whether or not they agreed with his doomer perspective, Nick took Eliezer’s concepts and wrapped them up in a very acceptable way.”  

“All of these things gave AGI a stamp of acceptability,” Goertzel adds. “Rather than it being pure crackpot stuff from mavericks howling out in the wilderness.”


Yudkowsky has been banging the same drum for 25 years; many engineers at today’s top AI companies grew up reading and discussing his views online, especially on LessWrong, a popular hub for the tech industry’s fervent community of rationalists and effective altruists.

Today, those views are more popular than ever, capturing the imagination of a younger generation of doomers like David Krueger, a researcher at the University of Montreal who previously served as research director at the UK’s AI Security Institute. “I think we are definitely on track to build superhuman AI systems that will kill everybody,” Krueger tells me. “And I think that’s horrible and we should stop immediately.”

Yudkowsky gets profiled by the likes of the New York Times, which bills him as “Silicon Valley’s version of a doomsday preacher.” His new book, If Anyone Builds It, Everyone Dies, written with Nate Soares, president of the Machine Intelligence Research Institute, lays out wild claims, with little evidence, that unless we pull the plug on development, near-future AGI will lead to global Armageddon. The pair’s position is extreme: They argue that an international ban should be enforced at all costs, up to and including the point of nuclear retaliation. After all, “datacenters can kill more people than nuclear weapons,” Yudkowsky and Soares write.

This stuff is no longer niche. The book is an NYT bestseller and comes with endorsements from national security experts such as Suzanne Spaulding, a former US Department of Homeland Security official, and Fiona Hill, former senior director of the White House National Security Council, who now advises the UK government; celebrity scientists such as Max Tegmark and George Church; and other household names, including Stephen Fry, Mark Ruffalo, and Grimes. Yudkowsky now has a megaphone. 

Still, it is those early quiet words in certain ears that may prove most consequential. Yudkowsky is credited with introducing Thiel to DeepMind’s founders, after which Thiel became one of the first big investors in the company. DeepMind, acquired by Google in 2014, is now the in-house AI lab for the tech colossus Alphabet. 

Alongside Musk, Thiel was also instrumental in setting up OpenAI in 2015, sinking millions into a startup founded on the singular ambition to build AGI—and make it safe. In 2023, OpenAI CEO Sam Altman posted on X: “eliezer has IMO done more to accelerate AGI than anyone else. certainly he got many of us interested in AGI.” Yudkowsky might one day deserve the Nobel Peace Prize for that, Altman added. But by this point, Thiel had apparently grown wary of the “AI safety people” and the power they were gaining. “You don’t understand how Eliezer has programmed half the people in your company to believe in that stuff,” he is reported to have told Altman at a dinner party in late 2023. “You need to take this more seriously.” Altman “tried not to roll his eyes,” according to Wall Street Journal reporter Keach Hagey.

OpenAI is now the most valuable private company in the world, worth half a trillion dollars. 

And the transformation is complete: Like all the most powerful conspiracies, AGI has slipped into the mainstream and taken hold.    

The great AGI conspiracy 

The term “AGI” may have been popularized less than 20 years ago, but the mythmaking behind it has been there since the start of the computer age—a cosmic microwave background of chutzpah and marketing. 

Alan Turing asked if machines could think only five years after the first electronic computer, ENIAC, was built in 1945. And here’s Turing a little later, in a 1951 radio broadcast: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control.”

Then, in 1955, the computer scientist John McCarthy and his colleagues applied for US government funding to create what they fatefully chose to call “artificial intelligence”—a canny spin, given that computers at the time were the size of a room and as dumb as a thermostat. Even so, as McCarthy wrote in that funding application: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

It’s this myth that’s the root of the AGI conspiracy. A smarter-than-human machine that can do it all is not a technology. It’s a dream, unmoored from reality. Once you see that, other parallels with conspiracy thinking start to leap out.

It’s impossible to debunk a shape-shifting idea like AGI. 

Talking about AGI can sometimes feel like arguing with an enthusiastic Redditor about what drugs (or particles in the sky) are controlling your mind. Each point has a counterpoint that tries to chip away at your own sense of what’s true. Ultimately, it’s a clash of worldviews, not an exchange of evidence-based reason. AGI is like that, too—it’s slippery. 

Part of the issue is that despite all the money, all the talk, nobody knows how to build it. More than that: Most people don’t even agree on what AGI really is—which helps explain how people can get away with telling us it can both save the world and end it. At the core of most definitions you’ll find the idea of a machine that can match humans on a wide range of cognitive tasks. (And remember, superintelligence is AGI’s shiny new upgrade: a machine that can outmatch us.) But even that’s easy to pull apart: What humans are we talking about? What kind of cognitive task? And how wide a range?

“There’s no real definition of it,” says Christopher Symons, chief artificial intelligence scientist at the AI health-care startup Lirio and former head of the computer science and math division at Oak Ridge National Laboratory. “If you say ‘human-level intelligence,’ that could be an infinite number of things—everybody’s level of intelligence is slightly different.” 

And so, says Symons, we’re in this weird race to build … what, exactly? “What are you trying to get it to do?”

In 2023, a team of researchers at Google DeepMind, including Legg, had a go at categorizing various definitions that people had proposed for AGI. Some said that a machine had to be able to learn; some said that it had to be able to make money; some said that it had to have a body and move about in the world (and maybe make coffee).  

Legg told me that when he’d suggested the term to Goertzel for the title of his book, the hand-waviness had been kind of the point. “I didn’t have an especially clear definition. I didn’t really feel it was necessary,” he said at the time. “I was actually thinking of it more as a field of study, rather than an artifact.”

So, I guess we’ll know it when we see it? The problem is that some people think they’ve seen it already.

In 2023, a team of Microsoft researchers put out a paper in which they described their experiences playing around with a prerelease version of OpenAI’s large language model GPT-4. They called it “Sparks of Artificial General Intelligence”—and it polarized the industry.

It was a moment when a lot of researchers were blown away and trying to come to terms with what they were seeing. “Shit was working better than they had expected it to,” says Goertzel. “The concept of AGI genuinely started to seem more plausible.”

And yet for all of LLMs’ remarkable wordplay, Goertzel doesn’t think that they do in fact contain sparks of AGI. “It’s a little surprising to me that some people with a deep technical understanding of how these tools work under the hood still think that they could become human-level AGI,” he says. “On the other hand, you can’t prove it’s not true.”

And there it is: You can’t prove it’s not true. “The idea that AGI is coming and that it’s right around the corner and that it’s inevitable has licensed a great many departures from reality,” says the University of Edinburgh’s Vallor. “But we really don’t have any evidence for it.”

Conspiracy thinking looms again. Predictions about when AGI will arrive are made with the precision of numerologists counting down to the end of days. With no real stakes in the game, deadlines come and go with a shrug. Excuses are made and timelines are adjusted yet again.

We saw this when OpenAI released the much-hyped GPT-5 this summer. AI stans were disappointed that the new version of the company’s flagship technology wasn’t the step change they expected. But instead of seeing that as evidence that AGI wasn’t attainable—or attainable with an LLM, at least—believers pushed out their predictions for how soon AGI would come. It was coming—just, you know, next time.

Maybe they’re right. Or maybe people will pick whatever evidence they can to defend an idea and overlook evidence that counts against it. Jeremy Cohen, who studies conspiracy thinking in technology circles at McMaster University in Canada, calls this imperfect evidence gathering—a hallmark of conspiracy thinking.

Cohen started his research career in the Arizona desert, studying a community called People Unlimited that believed its members were immortal. The conviction was impervious to contrary evidence. When its members died of natural causes (including two of its founders), the thinking was that they must have deserved it. “The general consensus was that every death was a suicide,” says Cohen. “If you are immortal and you get cancer and you die—well, you must have done something wrong.”

Cohen has since been focused on transhumanism (the idea that technology can help humans push past their natural limitations) and AGI. “I am seeing a lot of parallels. There are forms of magical thinking that I think is a part of the popular imagination around AGI,” he says. “It connects really well to the kinds of religious imaginaries that you see in conspiracy thinking today.”

The believers are in on the AGI secret.  

Maybe some of you think I’m an idiot: You don’t get it at all lol. But that’s kind of my point. There are insiders and outsiders. When I talk to researchers or engineers who are happy to drop AGI into the conversation as a given, it’s like they know something I don’t. But nobody’s ever been able to tell me what that something is. 

The truth is out there, if you know where to look. Conspiracy theories are primarily concerned about revealing a hidden truth, Cohen tells me: “It’s a really fundamental part of conspiracy thinking, and that’s absolutely something that you see in the way people talk about AGI,” he says. 

Last year, a 23-year-old former OpenAI staffer turned investor, Leopold Aschenbrenner, published a much-dissected 165-page manifesto titled “Situational Awareness.” You don’t need to read it to get the idea: You either see the truth of what’s coming or you don’t. And you don’t need cold, hard facts, either—it’s enough to feel it. Those who don’t just haven’t seen the light.  

This idea stalked the periphery of my conversation with Goertzel, too. When I pushed him on why people are skeptical of AGI, for instance, he said: “Before every major technical achievement, from human flight to electrical power, loads of wise pundits would tell you why it was never going to happen. The fact is, most people only believe what they see in front of their faces.” 

That makes AGI sound like an article of faith. I put that to Krueger, who believes AGI’s arrival is maybe five years out. He scoffed: “I think that’s completely backwards.” For him, the article of faith is the idea that it won’t happen—it’s the skeptics who continue to deny the obvious. (Even so, he hedges: No one knows for sure, he says, but there’s no obvious reason that AGI won’t come.) 

Hidden truths bring truth seekers, bent on revealing what they’ve been able to see all along. With AGI, though, it’s not enough to uncover something hidden. Here, revelation requires an unprecedented act of creation. If you believe AGI is achievable, then you believe that those making it are midwives to machines that will match or surpass human intelligence. “The idea of giving birth to machine gods is obviously very flattering to the ego,” says Vallor. “It’s an incredibly seductive thing to think that you yourself are laying the early foundations for that transcendence.” 

It’s yet another overlap with conspiracy thinking. Part of the draw is the desire for a sense of purpose in an otherwise messy world that can feel meaningless—the longing to be a person of consequence. 

Krueger, who is based in Berkeley, says he knows people working on AI who see the technology as our natural successor. “They view it as akin to having children or something,” he says. “Side note: they usually don’t have children.”

AGI will be our one true savior (or it’ll bring the apocalypse). 

Cohen sees parallels between many modern conspiracy theories and the New Age movement, which reached its peak of influence in the 1970s and ’80s. Adherents believed humanity was on the cusp of unlocking an era of spiritual well-being and expanded consciousness that would usher in a more peaceful and prosperous world. In a nutshell, the idea was that by engaging in a set of pseudo-religious practices, including astrology and the careful curation of crystals, humans would transcend their limitations and enter a kind of hippie utopia.

Today’s tech industry is built on compute, not crystals, but its sense of what’s at stake is no less transcendent: “You know, this idea that there is going to be this fundamental shift, there’s going to be this millenarian turn where we end up in a techno-utopian future,” says Cohen. “And the idea that AGI is going to ultimately allow humanity to overcome the problems that face us.”

In many people’s telling, AGI will arrive all at once. Incremental advances in AI will stack up until, one day, AI will be good enough to start making better AI by itself. At which point—FOOM—it will advance so rapidly that AGI will arrive in what’s often called an intelligence explosion, leading to a point of no return known as the Singularity, a goofy term that’s been popular in AGI circles for years. Co-opting a concept from physics, the science fiction author Vernor Vinge first introduced the idea of a technological singularity in the 1980s. Vinge imagined an event horizon on the path of technological progress beyond which humans would be fast outstripped by the exponential self-improvement of the machines they had created. 

Call it the AI Big Bang—which, again, gives us a before and an after, a transcendent moment when humanity as we know it changes forever (for good or bad). “People imagine it as an event,” says Grace from AI Impacts.

For Vallor, this belief system is notable for the way that a faith in technology has replaced a faith in humans. Despite the woo-woo, New Age thinking was at least motivated by the idea that people had what it took to change the world by themselves, if they could only tap into it. With the pursuit of AGI, we’ve left that self-belief behind and bought into the idea that only technology can save us, she says.  

That’s a compelling—even comforting—thought for many people. “We’re in an era where other paths to material improvement of human lives and our societies seem to have been exhausted,” Vallor says. 

Technology once promised a route to a better future: Progress was a ladder that we would climb toward human and social flourishing. “We’ve passed the peak of that,” says Vallor. “I think the one thing that gives many people hope and a return to that kind of optimism about the future is AGI.”

Push this idea to its conclusion and, again, AGI becomes a kind of god—one that can offer relief from earthly suffering, says Vallor.

Kelly Joyce, a sociologist at the University of North Carolina who studies how cultural, political, and economic beliefs shape the way we think about and use technology, sees all these wild predictions about AGI as something more banal: part of a long-term pattern of overpromising from the tech industry. “What’s interesting to me is that we get sucked in every time,” she says. “There is a deep belief that technology is better than human beings.”

Joyce thinks that’s why, when the hype kicks in, people are predisposed to believe it. “It’s a religion,” she says. “We believe in technology. Technology is God. It’s really hard to push back against it. People don’t want to hear it.”

How AGI hijacked an industry

The fantasy of computers that can do almost anything a person can is seductive. But like many pervasive conspiracy theories, it has very real consequences. It has distorted the way we think about the stakes behind the current technology boom (and potential bust). It may have even derailed the industry, sucking resources away from more immediate, more practical applications of the technology. More than anything else, it gives us a free pass to be lazy. It fools us into thinking we might be able to avoid the actual hard work needed to solve intractable, world-spanning problems—problems that will require international cooperation and compromise and expensive aid. Why bother with that when we’ll soon have machines to figure it all out for us?

Consider the resources being sunk into this grand project. Just last month, OpenAI and Nvidia announced an up-to-$100 billion partnership that would see the chip giant supply at least 10 gigawatts of computing power to feed ChatGPT’s insatiable demand. That’s more than most nuclear power plants put out. A bolt of lightning might release that much energy. The flux capacitor inside Dr. Emmett Brown’s DeLorean time machine required only 1.21 gigawatts to send Marty back to the future. And then, just two weeks later, OpenAI announced a second partnership, with chipmaker AMD, for another six gigawatts of power.

Promoting the Nvidia deal on CNBC, Altman, straight-faced, claimed that without this kind of data center buildout, people would have to choose between a cure for cancer and free education. “No one wants to make that choice,” he said. (Just a few weeks later, he announced that erotic chats would be coming to ChatGPT.)

Add to those costs the loss of investment in more immediate technology that could change lives today and tomorrow and the next day. “To me it’s a huge missed opportunity,” says Lirio’s Symons, “to put all these resources into solving something nebulous when we already know there’s real problems that we could solve.” 

But that’s not how the likes of OpenAI need to operate. “With people throwing so much money at these companies, they don’t have to do that,” Symons says. “If you’ve got hundreds of billions of dollars, you don’t have to focus on a practical, solvable project.”

Despite his steadfast belief that AGI is coming, Krueger also thinks the industry’s single-minded pursuit of it means that potential solutions to real problems, such as better health care, are being ignored. “This AGI stuff—it’s nonsense, it’s a distraction, it’s hype,” he tells me. 

And there are consequences for the way governments support and regulate technology (or don’t). Tina Law, who studies technology policy at the University of California, Davis, worries that policymakers are being lobbied about the ways AI will one day kill us all instead of addressing real concerns about how AI affects people’s lives in immediate and material ways today. Concern about inequality has been sidelined by existential risk.

“Hype is a lucrative strategy for tech firms,” says Law. A big part of that hype is the idea that what’s happening is inevitable: If we don’t build it, someone else will. “When something is framed as inevitable,” Law says, “people doubt not only whether they should resist but also whether they have the capacity to do so.” Everyone gets locked in. 

The AGI distortion field isn’t limited to tech policy, says Milton Mueller at the Georgia Institute of Technology, who works on technology policy and regulation. The race to AGI gets compared to the race to the atomic bomb, he says. “So whoever gets it first is going to have ultimate power over everybody else. That’s a crazy and dangerous idea that really will distort our approach to foreign policy.” 

There’s a business incentive for companies (and governments) to push the myth of AGI, says Mueller, because they can then claim that they will be the first to get there. But because they’re running a race in which nobody has agreed on the finish line, the myth can be spun as long as it’s useful. Or as long as investors are willing to buy into it. 

It’s not hard to see how this plays out. It’s not utopia or hell—it’s OpenAI and its peers making a whole lot more money.

The great AGI conspiracy, concluded 

And maybe that brings us back to the whole conspiracy thing—and a late-game twist in this tale. So far we’ve ignored one popular feature of conspiracy thinking: that there’s a group of powerful figures pulling the levers behind the scenes and that, by seeking the truth, believers can expose this elite cabal. 

Sure, the people feeling the AGI aren’t publicly accusing any Illuminati or WEF-like force of preventing the AGI future or withholding its secrets. 

But what if there are, in fact, shadowy puppet masters here—and they’re the very people who have pushed the AGI conspiracy hardest all along? The kings of Silicon Valley are throwing everything they can get at building AGI for profit. The myth of AGI serves their interests more than anybody else’s. 

As one senior executive at an AI company said to us recently, AGI always needs to be six months to a year away, because if it’s any further than that, you won’t be able to recruit people from Jane Street, and if it’s closer to already here, then what’s the point? 

As Vallor puts it: “If OpenAI says they’re building a machine that’s going to make corporations even more powerful than they are today, that isn’t going to get the kind of public buy-in that they need.” 

Remember: You create a god and you become like one yourself. Krueger says there’s a line of thinking running through Silicon Valley in which building AI is a way to seize huge amounts of power. (It’s one of the premises of Aschenbrenner’s “Situational Awareness,” for example.) “You know, we’re going to have this godlike power and we’re going to have to figure out what to do with it,” says Krueger. “A lot of people think if they get there first, they can basically take over the world.”

“They’re putting so much effort into selling their vision of a future with AGI in it, and they’re having a pretty good amount of success because they have so much power,” he adds.

Goertzel, for one, is almost lamenting how successful the maybe-cabal has been. He’s actually starting to miss life on the fringes. “In my generation, you had to have a lot of vision to want to work on AGI, and you had to be very stubborn,” he says. “Now it’s almost, like, what your grandma tells you to do to get a job instead of being a business major.”

“It’s disorienting that this stuff is so broadly accepted,” he says. “It almost gives me the desire to go work on something else that not so many people are doing.” He’s half joking (I think): “Obviously, putting the finishing touches to AGI is more important than gratifying my preference to be out on the frontier.”

But I’m no clearer on what exactly they’re putting the finishing touches on. What does it mean for technology in general if we fall so hard for the fairy tales? In a lot of ways, I think the whole idea of AGI is built on a warped view of what we should expect technology to do, and even what intelligence is in the first place. Stripped back to its essentials, the argument for AGI rests on the premise that one technology, AI, has gotten very good, very fast, and will continue to get better. But set aside the technical objections—what if it doesn’t continue to get better?—and you’re left with the claim that intelligence is a commodity you can get more of if you have the right data or compute or neural network. And it’s not. 

Intelligence doesn’t come as a quantity you can just ratchet up and up. Smart people may be brilliant in one area and not in others. Some Nobel Prize winners are really bad at playing the piano or caring for their kids. Some very smart people insist that AGI is coming next year. 

It’s hard not to wonder what will get its hooks into us next. 

Before we ended our call, Goertzel told me about an event he’d just been to in San Francisco on AI consciousness and parapsychology: “ESP, precognition, and whatnot.”

“That’s where AGI was 20 years ago,” he said. “Everyone thinks it’s batshit crazy.”
