
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
It’s well documented that AI is a power-hungry technology. But there has been far less reporting on the extent of that hunger, how much its appetite is set to grow in the coming years, where that power will come from, and who will pay for it.
For the past six months, MIT Technology Review’s team of reporters and editors has worked to answer those questions. The result is an unprecedented look at the state of AI’s energy and resource usage: where it is now, where it is headed in the years to come, and why we have to get it right.
The centerpiece of this package is an entirely novel line of reporting into the demands of inference—the way human beings interact with AI when we make text queries or ask AI to come up with new images or create videos. Experts say inference is set to eclipse the already massive amount of energy required to train new AI models. Here’s everything we found out.
Here’s what you can expect from the rest of the package, including:
+ We were so startled by what we learned reporting this story that we also put together a brief on everything you need to know about estimating AI’s energy and emissions burden.
+ We went out into the world to see the effects of this energy hunger—starting in the deserts of Nevada, where data centers in an industrial park the size of Detroit demand ever more water to keep their processors cool and running.
+ In Louisiana, where Meta plans its largest-ever data center, we expose the dirty secret that will fuel its AI ambitions—along with those of many others.
+ Why the promise of powering AI data centers with clean nuclear energy will long remain elusive.
+ But it’s not all doom and gloom. Check out the reasons to be optimistic, and examine why future AI systems could be far less energy intensive than today’s.
AI can do a better job of persuading people than we do
The news: Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language models (LLMs) might do a better job, especially when they’re given the ability to adapt their arguments using personal information about individuals. The finding suggests that AI could become a powerful tool for persuading people, for better or worse.
The big picture: The findings are the latest in a growing body of research demonstrating LLMs’ powers of persuasion. The authors warn they show how AI tools can craft sophisticated, persuasive arguments if they have even minimal information about the humans they’re interacting with. Read the full story.
—Rhiannon Williams
How AI is introducing errors into courtrooms
It’s been quite a couple of weeks for stories about AI in the courtroom. You might have heard about the deceased victim of a road rage incident whose family created an AI avatar of him to show as an impact statement (possibly the first time this has been done in the US).
But there’s a bigger, far more consequential controversy brewing, legal experts say. AI hallucinations are cropping up more and more in legal filings. And it’s starting to infuriate judges. Just consider these three cases, each of which gives a glimpse into what we can expect to see more of as lawyers embrace AI. Read the full story.
—James O’Donnell
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Donald Trump has signed the Take It Down Act into US law
It criminalizes the distribution of non-consensual intimate images, including deepfakes. (The Verge)
+ Tech platforms will be forced to remove such material within 48 hours of being notified. (CNN)
+ It’s only the sixth bill he’s signed into law during his second term. (NBC News)
2 There’s now a buyer for 23andMe
Pharma firm Regeneron has swooped in and offered to help it keep operating. (WSJ $)
+ The worth of your genetic data? $17. (404 Media)
+ Regeneron promised to prioritize security and ethical use of that data. (TechCrunch)
3 Microsoft is adding Elon Musk’s AI models to its cloud platform
Err, is that a good idea? (Bloomberg $)
+ Musk wants to sell Grok to other businesses. (The Information $)
4 Autonomous cars trained to react like humans cause fewer road injuries
A study found they were more cautious around cyclists, pedestrians and motorcyclists. (FT $)
+ Waymo is expanding its robotaxi operations out of San Francisco. (Reuters)
+ How Wayve’s driverless cars will meet one of their biggest challenges yet. (MIT Technology Review)
5 Hurricane season is on its way
DOGE cuts mean we’re less prepared. (The Atlantic $)
+ COP30 may be in crisis before it’s even begun. (New Scientist $)
6 Telegram handed over data from more than 20,000 users
In the first three months of 2025 alone. (404 Media)
7 GM has stopped exporting cars to China
Trump’s tariffs have put an end to its export plans. (NYT $)
8 Blended meats are on the rise
Plants account for up to 70% of these new meats—and consumers love them. (WP $)
+ Alternative meat could help the climate. Will anyone eat it? (MIT Technology Review)
9 SAG-AFTRA isn’t happy about Fortnite’s AI-voiced Darth Vader
It’s slapped Fortnite’s creators with an unfair labor practice charge. (Ars Technica)
+ How Meta and AI companies recruited striking actors to train AI. (MIT Technology Review)
10 This AI model can swiftly build Lego structures
Thanks to nothing more than a prompt. (Fast Company $)
Quote of the day
“Platforms have no incentive or requirement to make sure what comes through the system is non-consensual intimate imagery.”
—Becca Branum, deputy director of the Center for Democracy and Technology, says the new Take It Down Act could fuel censorship, Wired reports.
One more thing
Are friends electric?
Thankfully, the difference between humans and machines in the real world is easy to discern, at least for now. While machines tend to excel at things adults find difficult—playing world-champion-level chess, say, or multiplying really big numbers—they find it hard to accomplish stuff a five-year-old can do with ease, such as catching a ball or walking around a room without bumping into things.
This fundamental tension—what is hard for humans is easy for machines, and what’s hard for machines is easy for humans—is at the heart of three new books delving into our complex and often fraught relationship with robots, AI, and automation. They force us to reimagine the nature of everything from friendship and love to work, health care, and home life. Read the full story.
—Bryan Gardiner
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Congratulations to William Goodge, who ran across Australia in just 35 days!
+ A British horticulturist has created a garden at this year’s Chelsea Flower Show just for dogs.
+ The Netherlands just loves a sidewalk garden.
+ Did you know the T. rex is a North American hero? Me neither 🦖