Inside OpenAI’s empire: A conversation with Karen Hao

Niall Firth: Hello, everyone, and welcome to this special edition of Roundtables. These are our subscriber-only events where you get to listen in to conversations between editors and reporters. Now, I’m delighted to say we’ve got an absolute cracker of an event today. I’m very happy to have our prodigal daughter, Karen Hao, a fabulous AI journalist, here with us to talk about her new book. Hello, Karen, how are you doing?

Karen Hao: Good. Thank you so much for having me back, Niall. 

Niall Firth: Lovely to have you. So I’m sure you all know Karen and that’s why you’re here. But to give you a quick, quick synopsis, Karen has a degree in mechanical engineering from MIT. She was MIT Technology Review’s senior editor for AI and has won countless awards, been cited in Congress, written for the Wall Street Journal and The Atlantic, and set up a series at the Pulitzer Center to teach journalists how to cover AI. 

But most important of all, she’s here to discuss her new book, which I’ve got a copy of here, Empire of AI. The UK version is subtitled “Inside the reckless race for total domination,” and the US one, I believe, is “Dreams and nightmares in Sam Altman’s OpenAI.”

It’s been an absolute sensation, a New York Times chart topper. An incredible feat of reporting—like 300 interviews, including 90 with people inside OpenAI. And it’s a brilliant look at not just OpenAI’s rise, and the character of Sam Altman, which is very interesting in its own right, but also a really astute look at what kind of AI we’re building and who holds the keys. 

Karen, the core of the book, the rise and rise of OpenAI, was one of your first big features at MIT Technology Review. It’s a brilliant story that lifted the lid for the first time on what was going on at OpenAI … and they really hated it, right?

Karen Hao: Yes, and first of all, thank you to everyone for being here. It’s always great to be home. I do still consider MIT Tech Review to be my journalistic home, and that story was—I only did it because Niall assigned it after I said, “Hey, it seems like OpenAI is kind of an interesting thing,” and he was like, you should profile them. And I had never written a profile about a company before, and I didn’t think that I would have it in me, and Niall believed that I would be able to do it. So it really didn’t happen other than because of you.

I went into the piece with an open mind about—let me understand what OpenAI is. Let me take what they say at face value. They were founded as a nonprofit. They have this mission to ensure artificial general intelligence benefits all of humanity. What do they mean by that? How are they trying to achieve that ultimately? How are they striking this balance between mission-driven AI development and the need to raise money and capital? 

And through the course of embedding within the company for three days, and then interviewing dozens of people outside the company or around the company … I came to realize that there was a fundamental disconnect between what they were publicly espousing (and accumulating a lot of goodwill from) and how they were operating. And that is what I ended up focusing my profile on, and that is why they were not very pleased.

Niall Firth: And how have you seen OpenAI change even since you did the profile? That sort of misalignment feels like it’s got messier and more confusing in the years since.

Karen Hao: Absolutely. I mean, it’s kind of remarkable that OpenAI, you could argue that they are now one of the most capitalistic corporations in Silicon Valley. They just raised $40 billion, in the largest-ever private fundraising round in tech industry history. They’re valued at $300 billion. And yet they still say that they are first and foremost a nonprofit. 

I think this really gets to the heart of how much OpenAI has tried to position and reposition itself throughout its decade-long history, to ultimately play into the narratives that they think are going to do best with the public and with policymakers, in spite of what they might actually be doing in terms of developing their technologies and commercializing them.

Niall Firth: You cite Sam Altman saying, you know, the race for AGI is what motivated a lot of this, and I’ll come back to that a bit before the end. But he talks about it as like the Manhattan Project for AI. You cite him quoting Oppenheimer (of course, you know, there’s no self-aggrandizing there): “Technology happens because it’s possible,” he says in the book. 

And it feels to me like this is one of the themes of the book: the idea that technology doesn’t just happen because it comes along. It comes because of choices that people make. It’s not an inevitability that things are the way they are and that people are who they are. What they think is important—that influences the direction of travel. So what does this mean, in practice, if that’s the case?

Karen Hao: With OpenAI in particular, they made a very key decision early on in their history that led to all of the AI technologies that we see dominating the marketplace and dominating headlines today. And that was a decision to try and advance AI progress through scaling the existing techniques that were available to them. At the time when OpenAI started, at the end of 2015, and then when they made that decision, around 2017, this was a very unpopular perspective within the broader AI research field.

There were kind of two competing ideas about how to advance AI progress, or rather a spectrum of ideas, bookended by two extremes. One extreme being, we have all the techniques we need, and we should just aggressively scale. And the other one being that we don’t actually have the techniques we need. We need to continue innovating and doing fundamental AI research to get more breakthroughs. And largely the field assumed that this side of the spectrum [focusing on fundamental AI research] was the most likely approach for getting advancements, but OpenAI was anomalously committed to the other extreme—this idea that we can just take neural networks and pump ever more data, and train on ever larger supercomputers, larger than have ever been built in history.

The reason why they made that decision was because they were competing against Google, which had a dominant monopoly on AI talent. And OpenAI knew that they didn’t necessarily have the ability to beat Google simply by trying to get research breakthroughs. That’s a very hard path. When you’re doing fundamental research, you never really know when the breakthrough might appear. It’s not a very linear line of progress, but scaling is sort of linear. As long as you just pump more data and more compute, you can get gains. And so they thought, we can just do this faster than anyone else. And that’s the way that we’re going to leap ahead of Google. And it particularly aligned with Sam Altman’s skillset, as well, because he is a once-in-a-generation fundraising talent, and when you’re going for scale to advance AI models, the primary bottleneck is capital.

And so it was kind of a great fit for what he had to offer, which is, he knows how to accumulate capital, and he knows how to accumulate it very quickly. So that is ultimately how you can see that technology is a product of human choices and human perspectives. And they’re the specific skills and strengths that that team had at the time for how they wanted to move forward.

Niall Firth: And to be fair, I mean, it works, right? It was amazing, fabulous. You know the breakthroughs that happened, GPT-2 to GPT-3, just from scale and data and compute, kind of were mind-blowing really, as we look back on it now.

Karen Hao: Yeah, it is remarkable how much it did work, because there was a lot of skepticism about the idea that scale could lead to the kind of technical progress that we’ve seen. But one of my biggest critiques of this particular approach is that there’s also an extraordinary amount of costs that come with this particular pathway to getting more advancements. And there are many different pathways to advancing AI, so we could have actually gotten all of these benefits, and moving forward, we could continue to get more benefits from AI, without actually engaging in a hugely consumptive, hugely costly approach to its development.

Niall Firth: Yeah, so in terms of consumptive, that’s something we’ve touched on here quite recently at MIT Technology Review, like the energy costs of AI. The data center costs are absolutely extraordinary, right? Like the data behind it is incredible. And it’s only gonna get worse in the next few years if we continue down this path, right? 

Karen Hao: Yeah … so first of all, everyone should read the series that Tech Review put out, if you haven’t already, on the energy question, because it really does break down everything from what is the energy consumption of the smallest unit of interacting with these models, all the way up until the highest level. 

The number that I have seen a lot, and that I’ve been repeating, comes from a McKinsey report which projected that, if data centers and supercomputers continue to be built and scaled at their current pace, in the next five years we would have to add two to six times the amount of energy consumed by California onto the grid. And most of that will have to be serviced by fossil fuels, because these data centers and supercomputers have to run 24/7, so we cannot rely solely on renewable energy. We do not have enough nuclear power capacity to power these colossal pieces of infrastructure. And so we’re already accelerating the climate crisis.

And we’re also accelerating a public-health crisis, the pumping of thousands of tons of air pollutants into the air from coal plants that are having their lives extended and methane gas turbines that are being built in service of powering these data centers. And in addition to that, there’s also an acceleration of the freshwater crisis, because these pieces of infrastructure have to be cooled with freshwater resources. It has to be fresh water, because if it’s any other type of water, it corrodes the equipment, it leads to bacterial growth.

And Bloomberg recently had a story that showed that two-thirds of these data centers are actually going into water-scarce areas, into places where the communities already do not have enough fresh water at their disposal. So that is one dimension of many that I refer to when I say, the extraordinary costs of this particular pathway for AI development.

Niall Firth: So in terms of costs and the extractive process of making AI, I wanted to give you the chance to talk about the other theme of the book, apart from just OpenAI’s explosion. It’s the colonial way of looking at the way AI is made: the empire. I’m saying this obviously because we’re here, but this is an idea that came out of reporting you started at MIT Technology Review and then continued into the book. Tell us about how this framing helps us understand how AI is made now.

Karen Hao: Yeah, so this was a framing that I started thinking a lot about when I was working on the AI Colonialism series for Tech Review. It was a series of stories that looked at the way that, pre-ChatGPT, the commercialization of AI and its deployment into the world was already leading to entrenchment of historical inequities into the present day.

And one example was a story that was about how facial recognition companies were swarming into South Africa to try and harvest more data from South Africa during a time when they were getting criticized for the fact that their technologies did not accurately recognize black faces. And the deployment of those facial recognition technologies into South Africa, into the streets of Johannesburg, was leading to what South African scholars were calling a recreation of a digital apartheid—the controlling of black bodies, movement of black people.

And this idea really haunted me for a really long time. Through my reporting in that series, there were so many examples that I kept hitting upon of this thesis, that the AI industry was perpetuating. It felt like it was becoming this neocolonial force. And then, when ChatGPT came out, it became clear that this was just accelerating. 

When you accelerate the scale of these technologies, and you start training them on the entirety of the Internet, and you start using these supercomputers that are the size of dozens—if not hundreds—of football fields, then you really start talking about an extraordinary global level of extraction and exploitation that is happening to produce these technologies. And then the historical power imbalances become even more obvious.

And so there are four parallels that I draw in my book between what I have now termed empires of AI versus empires of old. The first one is that empires lay claim to resources that are not their own. So these companies are scraping all this data that is not their own, taking all the intellectual property that is not their own.

The second is that empires exploit a lot of labor. So we see them moving to countries in the Global South or other economically vulnerable communities to contract workers to do some of the worst work in the development pipeline for producing these technologies—and also producing technologies that then inherently are labor-automating and engage in labor exploitation in and of themselves. 

And the third feature is that the empires monopolize knowledge production. So, in the last 10 years, we’ve seen the AI industry monopolize more and more of the AI researchers in the world. So AI researchers are no longer contributing to open science, working in universities or independent institutions, and the effect on the research is what you would imagine would happen if most of the climate scientists in the world were being bankrolled by oil and gas companies. You would not be getting a clear picture, and we are not getting a clear picture, of the limitations of these technologies, or if there are better ways to develop these technologies.

And the fourth and final feature is that empires always engage in this aggressive race rhetoric, where there are good empires and evil empires. And they, the good empire, have to be strong enough to beat back the evil empire, and that is why they should have unfettered license to consume all of these resources and exploit all of this labor. And if the evil empire gets the technology first, humanity goes to hell. But if the good empire gets the technology first, they’ll civilize the world, and humanity gets to go to heaven. So on many different levels, like the empire theme, I felt like it was the most comprehensive way to name exactly how these companies operate, and exactly what their impacts are on the world.

Niall Firth: Yeah, brilliant. I mean, you talk about the evil empire. What happens if the evil empire gets it first? And what I mentioned at the top is AGI. For me, it’s almost like the extra character in the book all the way through. It’s sort of looming over everything, like the ghost at the feast, sort of saying like, this is the thing that motivates everything at OpenAI. This is the thing we’ve got to get to before anyone else gets to it. 

There’s a bit in the book about how they’re talking internally at OpenAI, like, we’ve got to make sure that AGI is in US hands where it’s safe versus like anywhere else. And some of the international staff are openly like—that’s kind of a weird way to frame it, isn’t it? Why is the US version of AGI better than others? 

So tell us a bit about how it drives what they do. And AGI isn’t an inevitable fact that’s just happening anyway, is it? It’s not even a thing yet.

Karen Hao: There’s not even consensus around whether or not it’s even possible or what it even is. There was recently a New York Times story by Cade Metz that was citing a survey of long-standing AI researchers in the field, and 75% of them still think that we don’t have the techniques yet for reaching AGI, whatever that means. And the most classic definition or understanding of what AGI is, is being able to fully recreate human intelligence in software. But the problem is, we also don’t have scientific consensus around what human intelligence is. And so one of the aspects that I talk about a lot in the book is that, when there is a vacuum of shared meaning around this term (what it would look like, when we would have arrived at it, what capabilities we should be evaluating these systems on to determine that we’ve gotten there), it can basically just be whatever OpenAI wants.

So it’s kind of just this ever-present goalpost that keeps shifting, depending on where the company wants to go. You know, they have a full range, a variety of different definitions that they’ve used throughout the years. In fact, they even have a joke internally: If you ask 13 OpenAI researchers what AGI is, you’ll get 15 definitions. So they are kind of self-aware that this is not really a real term and it doesn’t really have that much meaning. 

But it does serve this purpose of creating a kind of quasi-religious fervor around what they’re doing, where people think that they have to keep driving towards this horizon, and that one day when they get there, it’s going to have a civilizationally transformative impact. And therefore, what else should you be working on in your life, but this? And who else should be working on it, but you? 

And so it is their justification not just for continuing to push and scale and consume all these resources—because none of that consumption, none of that harm matters anymore if you end up hitting this destination. But they also use it as a way to develop their technologies in a very deeply anti-democratic way, where they say, we are the only people that have the expertise, that have the right to carefully control the development of this technology and usher it into the world. And we cannot let anyone else participate because it’s just too powerful of a technology.

Niall Firth: You talk about the factions, particularly the religious framing. AGI has been around as a concept for a while—it was very niche, very kind of nerdy fun, really, to talk about—only to suddenly become extremely mainstream. And they have the boomers versus doomers dichotomy. Where are you on that spectrum?

Karen Hao: So the boomers are people who think that AGI is going to bring us to utopia, and the doomers think AGI is going to devastate all of humanity. And to me these are actually two sides of the same coin. They both believe that AGI is possible, and it’s imminent, and it’s going to change everything. 

And I am not on this spectrum. I’m in a third space, which is the AI accountability space, which is rooted in the observation that these companies have accumulated an extraordinary amount of power, both economic and political power, to go back to the empire analogy. 

Ultimately, the thing that we need to do in order to not return to an age of empire and erode a lot of democratic norms is to hold these companies accountable with all the tools at our disposal, and to recognize all the harms that they are already perpetuating through a misguided approach to AI development.

Niall Firth: I’ve got a couple of questions from readers. I’m gonna try to pull them together a little bit because Abbas asks, what would post-imperial AI look like? And there was a question from Liam basically along the same lines. How do you make a more ethical version of AI that is not within this framework? 

Karen Hao: We sort of already touched a little bit upon this idea. But there are so many different ways to develop AI. There are myriad techniques throughout the history of AI development, which spans decades. There have been various shifts in the winds of which techniques ultimately rise and fall. And it isn’t based solely on the scientific or technical merit of any particular technique. Oftentimes certain techniques become more popular because of business reasons or because of funders’ ideologies. And that’s sort of what we’re seeing today with the complete indexing of AI development on large-scale AI model development.

And ultimately, these large-scale models … We talked about how it’s a remarkable technical leap, but in terms of social progress or economic progress, the benefits of these models have been kind of middling. And the way that I see us shifting to AI models that are going to be A) more beneficial and B) not so imperial is to refocus on task-specific AI systems that tackle well-scoped challenges, the kinds of problems that are inherently computational optimization problems and so lend themselves to the strengths of AI systems.

So I’m talking about things like using AI to integrate more renewable energy into the grid. This is something that we definitely need. We need to more quickly accelerate our electrification of the grid, and one of the challenges of using more renewable energy is the unpredictability of it. And this is a key strength of AI technologies, being able to have predictive capabilities and optimization capabilities where you can match the energy generation of different renewables with the energy demands of different people that are drawing from the grid.

Niall Firth: Quite a few people have been asking, in the chat, different versions of the same question. If you were an early-career AI scientist, or if you were involved in AI, what can you do yourself to bring about a more ethical version of AI? Do you have any power left, or is it too late? 

Karen Hao: No, I don’t think it’s too late at all. I mean, as I’ve been talking with a lot of people just in the lay public, one of the biggest challenges that they have is they don’t have any alternatives for AI. They want the benefits of AI, but they also do not want to participate in a supply chain that is really harmful. And so the first question is, always, is there an alternative? Which tools do I shift to? And unfortunately, there just aren’t that many alternatives right now. 

And so the first thing that I would say to early-career AI researchers and entrepreneurs is to build those alternatives, because there are plenty of people that are actually really excited about the possibility of switching to more ethical alternatives. And one of the analogies I often use is that we kind of need to do with the AI industry what happened with the fashion industry. There was also a lot of environmental exploitation, labor exploitation in the fashion industry, and there was enough consumer demand that it created new markets for ethical and sustainably sourced fashion. And so we kind of need to see just more options occupying that space.

Niall Firth: Do you feel optimistic about the future? Or where do you sit? You know, things aren’t great as you spell them out now. Where’s the hope for us?

Karen Hao: I am. I’m super optimistic. Part of the reason why I’m optimistic is because you know, a few years ago, when I started writing about AI at Tech Review, I remember people would say, wow, that’s a really niche beat. Do you have enough to write about? 

And now, I mean, everyone is talking about AI, and I think that’s the first step to actually getting to a better place with AI development. The amount of public awareness and attention and scrutiny that is now going into how we develop these technologies, how we use these technologies, is really, really important. Like, we need to be having this public debate and that in and of itself is a significant step change from what we had before. 

But the next step, and part of the reason why I wrote this book, is we need to convert the awareness into action, and people should take an active role. Every single person should feel that they have an active role in shaping the future of AI development, if you think about all of the different ways that you interface with the AI development supply chain and deployment supply chain—like you give your data or withhold your data.

There are probably data centers that are being built around you right now. If you’re a parent, there’s some kind of AI policy being crafted at [your kid’s] school. There’s some kind of AI policy being crafted at your workplace. These are all what I consider sites of democratic contestation, where you can use those opportunities to assert your voice about how you want AI to be developed and deployed. If you do not want these companies to use certain kinds of data, push back when they just take the data. 

I closed all of my personal social media accounts because I just did not like the fact that they were scraping my personal photos to train their generative AI models. I’ve seen parents and students and teachers start forming committees within schools to talk about what their AI policy should be and to draft it collectively as a community. Same with businesses. They’re doing the same thing. If we all kind of step up to play that active role, I am super optimistic that we’ll get to a better place.

Niall Firth: Mark, in the chat, mentions the Māori story from New Zealand towards the end of your book, and that’s an example of sort of community-led AI in action, isn’t it?

Karen Hao: Yeah. There was a community in New Zealand that really wanted to help revitalize the Māori language by building a speech recognition tool that could recognize Māori, and therefore be able to transcribe a rich repository of archival audio of their ancestors speaking Māori. And the first thing that they did when engaging in that project was they asked the community, do you want this AI tool? 

Niall Firth: Imagine that.

Karen Hao: I know! It’s such a radical concept, this idea of consent at every stage. But they first asked that; the community wholeheartedly said yes. They then engaged in a public education campaign to explain to people, okay, what does it take to develop an AI tool? Well, we are going to need data. We’re going to need audio transcription pairs to train this AI model. So then they ran a public contest in which they were able to get dozens, if not hundreds, of people in their community to donate data to this project. And then they made sure that when they developed the model, they actively explained to the community at every step how their data was being used, how it would be stored, how it would continue to be protected. And any other project that would use the data has to get permission and consent from the community first. 

And so it was a completely democratic process, for whether they wanted the tool, how to develop the tool, and how the tool should continue to be used, and how their data should continue to be used over time.

Niall Firth: Great. I know we’ve gone a bit over time. I’ve got two more things I’m going to ask you, basically putting together lots of questions people have asked in the chat about your view on what role regulations should play. What are your thoughts on that?

Karen Hao: Yeah, I mean, in an ideal world where we actually had a functioning government, regulation should absolutely play a huge role. And it shouldn’t just be thinking about how to regulate an AI model once it is built, but about the full supply chain of AI development: regulating the data and what’s allowed to be trained into these models, regulating the land use and which pieces of land are allowed to host data centers, and how much energy and water the data centers are allowed to consume. And also regulating the transparency. We don’t know what data is in these training data sets, and we don’t know the environmental costs of training these models. We don’t know how much water these data centers consume, and that is all information that these companies actively withhold to prevent democratic processes from happening. So if there were one major intervention that regulators could make, it should be to dramatically increase the amount of transparency along the supply chain.

Niall Firth: Okay, great. So just to bring it back around to OpenAI and Sam Altman to finish with. He famously sent an email around after your original Tech Review story, didn’t he, saying: this is not great, we don’t like this. And he didn’t want to speak to you for your book, either, did he?

Karen Hao: No, he did not.

Niall Firth: No. But imagine Sam Altman is in the chat here. He’s subscribed to Technology Review and is watching this Roundtables because he wants to know what you’re saying about him. If you could talk to him directly, what would you like to ask him? 

Karen Hao: What degree of harm do you need to see in order to realize that you should take a different path? 

Niall Firth: Nice, blunt, to the point. All right, Karen, thank you so much for your time. 

Karen Hao: Thank you so much, everyone.

MIT Technology Review Roundtables is a subscriber-only online event series where experts discuss the latest developments and what’s next in emerging technologies. Sign up to get notified about upcoming sessions.

