Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Shape
Discover What Matter Most to You

Featured Articles

Energy Department Approves Additional Time to Commence Exports of LNG from Energía Costa Azul

WASHINGTON — U.S. Secretary of Energy Chris Wright today signed an amendment order granting additional time to commence exports of U.S.-sourced natural gas as liquefied natural gas (LNG) from Sempra Energy’s Energía Costa Azul (ECA) Mid-Scale Project that is currently under construction in Baja California, Mexico. Today’s order grants ECA Liquefaction, S. de R.L. de C.V. approximately six additional months to commence exports of U.S.-sourced natural gas as LNG to non-free trade agreement (FTA) countries. Construction at the ECA Mid-Scale Project is nearly complete and ECA expects to commence exports in the near future. The ECA Mid-Scale Project began construction in 2020 and, once completed and operational, will be able to export up to 0.44 billion cubic feet per day (Bcf/d) of natural gas as LNG. A second phase of the project is authorized to export up to 1.74 Bcf/d and is pending a final investment decision. “I am pleased that the Department of Energy can take this action to provide this project the time it needs to complete construction and get U.S.-sourced LNG on the water, particularly given its strategic location on the West Coast of North America,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter. Since the President ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. With recent final investment decisions, U.S. LNG exports are set to more than double from current levels by the early 2030s. 

Read More »

Energy Department Announces Partnership to Ensure Affordable Energy and Power America’s AI Future

WASHINGTON—The U.S. Department of Energy (DOE), alongside the U.S. Department of Commerce (DOC), today announced a unique public-private partnership with SoftBank and AEP Ohio to redevelop DOE land, modernize energy infrastructure, and develop advanced computing in Southern Ohio. As part of the partnership, SB Energy, a SoftBank Group company, is planning to build 10 gigawatts (GW) of new power generation—including 9.2 GW of natural gas generation—that will connect to the local grid and provide power to a new 10 GW data center development at the Portsmouth Site in Pike County, Ohio at no cost to American families. These collective efforts will deliver lower electricity costs across the region, create thousands of American jobs, and strengthen America’s national security.  Portions of this announcement were previously announced as part of President Trump’s U.S.-Japan Strategic Trade and Investment Agreement. This includes the $33.3 billion in Japanese funding for 9.2 GW of new natural gas generation. “Thanks to President Trump, the U.S. government is leveraging its assets—like our federal lands—to add power generation, create jobs, and ensure the United States wins the AI race,” said U.S. Energy Secretary Chris Wright. “I’m pleased to be working with our partners at SoftBank and AEP Ohio on this important project. By bringing new power online and upgrading our existing infrastructure, this investment supports the AI boom and cutting-edge technologies while strengthening our energy system and helping keep costs down for the American people.” “Our Japanese partnership is a direct result of President Trump’s America First trade policies,” said U.S. Commerce Secretary Howard Lutnick. “Japan has committed to invest $550 billion across America. With this historic trade deal we are reindustrializing the country through critical projects like this $33 billion dollar power project in Portsmouth, Ohio. Yesterday we announced additional mega projects in Alabama, Pennsylvania, Tennessee and Texas.”

Read More »

Nvidia overhauls the data center for OpenClaw era

Nvidia’s products for data centers now encompass a full stack with all the pieces, said Sandeep Gupta, executive managing director and head of global strategic alliances at NTT Data. “From a customer perspective, if they believe in an integrated stack, it makes things simple,” Gupta said. The integrated data center cuts complexity and improves efficiency across cooling, networking and storage. “It is driven by the sentiment of an enterprise on how dependent they want to be on one provider versus mix and match,” Gupta said. AI complexity has gone up manifold with multi-agent systems and technologies like OpenClaw, which Huang said is as big a deal as HTML and Linux. Those technologies will generate tokens at an unprecedented pace and strain network, memory and storage simultaneously. AI data also has context, and moving it inefficiently wastes power and cost. A new networking and storage layer is needed to move data intelligently and efficiently. A technology called KV Cache holds the contextual memory necessary for processing agentic AI systems. “It’s going to pound on memory really hard… It’s going to be pounding on the storage system really really hard, which is the reason why we reinvented the storage system,” Huang said. Nvidia’s blueprint turns data centers into one giant AI GPU. It is spearheaded by the GPU known as Rubin and CPU called Vera, which were announced at GTC. Nvidia also slipped in a new inference chip; the Groq LPU has significantly more memory bandwidth than GPUs and is designed for low-latency token generation.

Read More »

The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

Plus: OpenAI is also creating a “super app.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. OpenAI is throwing everything into building a fully automated researcher  OpenAI has a new grand challenge: building an AI researcher—a fully automated agent-based system capable of tackling large, complex problems by itself. The San Francisco firm said the new goal will be its “north star” for the next few years.   By September, the company plans to build “an autonomous AI research intern” that can take on a small number of specific research problems. The intern will be the precursor to the fully automated multi-agent system, which is slated to debut in 2028.  In an exclusive interview this week, OpenAI’s chief scientist, Jakub Pachocki, talked me through the plans. Find out what I discovered. 
—Will Douglas Heaven  Mind-altering substances are (still) falling short in clinical trials  Over the last decade, we’ve seen scientific interest in psychedelic drugs explode. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. But two studies out earlier this week demonstrate just how difficult it is to study these drugs.  
For me, they show just how overhyped these substances have become. Find out why here.  —Jessica Hamzelou  This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Wednesday.  Read more: What do psychedelic drugs do to our brains? AI could help us find out  The must-reads  I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 OpenAI is building a “super app”  It’s merging ChatGPT, a web browser, and a coding tool into a single app. (The Verge) + It’s also buying coding startup Astral to enhance its Codex model. (Ars Technica) + The moves come amid a cutback on side projects. (WSJ $) + OpenAI has lost ground to Anthropic in the enterprise market. (Axios)  2 The US has charged Super Micro’s co-founder with smuggling AI tech to China  Super Micro is third on Fortune’s list of the fastest-growing companies. (Reuters)  + GenAI is learning to spy for the US military. (MIT Technology Review) + The compute competition is shaping the China-US rivalry. (Politico) 

3 The DoJ has taken down botnets behind the largest-ever DDoS attack They had infected more than 3 million devices. (Wired $) + The DoJ has also seized domains tied to Iranian “hacktivists.” (Axios)  4 The Pentagon says Anthropic’s foreign workers are a security risk It cited Chinese employees as a particular concern. (Axios) + Anthropic’s moral boundaries have incensed the DoD. (MIT Technology Review)  5 High oil prices could wreck the AI boom, the WTO has warned Fears are growing of a prolonged energy shock. (The Guardian) + We did the math on AI’s energy footprint. (MIT Technology Review)  6 Jeff Bezos is trying to raise $100 billion to use AI in manufacturing The funds would buy manufacturing firms and infuse them with AI. (WSJ $) + Here’s how to fine-tune AI for prosperity. (MIT Technology Review)  7 Signal’s creator is helping to encrypt Meta’s AI  Moxie Marlinspike is integrating his encrypted chatbot, Confer. (Wired $) + Meta is also ditching human moderators for AI again. (CNBC) + AI is making online crimes easier. (MIT Technology Review)  8 Prediction market Kalshi has raised $1 billion at a $22 billion valuation That’s double its valuation from December. (Bloomberg $) + Arizona’s AG has charged the company with “illegal gambling.” (NPR)  9 Meta isn’t killing Horizon Worlds for VR after all It’s canceled plans to dump the metaverse app (for now). (CNBC)  10 A US startup is recruiting an “AI bully”  The successful candidate must test the patience of leading chatbots. (The Guardian) 
Quote of the day  “Imagine a sports bar… but just for situation monitoring — live X feeds, flight radar, Bloomberg terminals, and Polymarket screens.”  —Kalshi rival Polymarket unveils its hellish vision for a new bar. 
One More Thing  SELMAN DESIGN How gamification took over the world  It’s a thought that occurs to every video-game player at some point: what if the weird, hyper-focused state I enter in virtual worlds could somehow be applied to the real one?  For a handful of consultants, startup gurus, and game designers in the late 2000s, this state of “blissful productivity” became the key to unlocking our true human potential. Their vision became the global phenomenon of gamification—but it didn’t live up to the hype.  Instead of liberating us, gamification became a tool for coercion, distraction, and control. Find out why we fell for it—and how we can recover.  —Bryan Gardiner  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + In a landmark legal win for trolling, Afroman has won his diss track case against the police. + This LEGO artist remixes standard sets into completely different iconic objects. + Ease your search for aliens with these interactive estimates of advanced civilizations.  + A rare superbloom in Death Valley has been caught on camera. 

Read More »

OpenAI is throwing everything into building a fully automated researcher

EXECUTIVE SUMMARY OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. ​​OpenAI says that the new goal will be its “north star” for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability. There’s even a timeline. OpenAI plans to build “an autonomous AI research intern”—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. This AI researcher (OpenAI says) will be able to tackle problems that are too large or complex for humans to cope with. Those tasks might be related to math and physics—such as coming up with new proofs or conjectures—or life sciences like biology and chemistry, or even business and policy dilemmas. In theory, you would throw such a tool any kind of problem that can be formulated in text, code or whiteboard scribbles—which covers a lot. OpenAI has been setting the agenda for the AI industry for years. Its early dominance with large language models shaped the technology that hundreds of millions of people use every day. But it now faces fierce competition from rival model makers like Anthropic and Google DeepMind. What OpenAI decides to build next matters—for itself and for the future of AI.   
A big part of that decision falls to Jakub Pachocki, OpenAI’s chief scientist. Alongside chief research officer Mark Chen, Pachocki is one of two people responsible for setting the company’s long-term research goals. Pachocki played key roles in the development of both GPT-4, a game-changing LLM released in 2023, and so-called reasoning models, a technology that first appeared in 2024 and now underpins all major chatbots and agent-based systems.  In an exclusive interview this week, Pachocki talked me through OpenAI’s new grand challenge. “I think we are getting close to a point where we’ll have models capable of working indefinitely in a coherent way just like people do,” he says. “Of course, you still want people in charge and setting the goals. But I think we will get to a point where you kind of have a whole research lab in a data center.”
Such big claims aren’t new. Saving the world by solving its hardest problems is the stated mission of all the top AI firms. Demis Hassabis told me back in 2022 that it was why he started DeepMind. Anthropic CEO Dario Amodei says he is building the equivalent of a country of geniuses in a data center. Pachocki’s boss, Sam Altman, wants to cure cancer. But Pachocki says OpenAI now has most of what it needs to get there. In January, OpenAI released Codex, an agent-based app that can spin up code on the fly to carry out tasks on your computer. It can analyze documents, generate charts, make you a daily digest of your inbox and social media, and much more. OpenAI claims that most of its technical staff now use Codex in their work. You can look at Codex as a very early version of the AI researcher, says Pachocki: “I expect Codex to get fundamentally better.” The key is to make a system that can run for longer periods of time, with less human guidance. “What we’re really looking at for an automated research intern is a system that you can delegate tasks that would take a person a few days,” says Pachocki. “There are a lot of people excited about building systems that can do more long-running scientific research,” says Doug Downey, a research scientist at the Allen Institute for AI, who is not connected to OpenAI. “I think it’s largely driven by the success of these coding agents. The fact that you can delegate quite substantial coding tasks to tools like Codex is incredibly useful and incredibly impressive. And it raises the question: Can we do similar things outside coding, in broader areas of science?” For Pachocki, that’s a clear Yes. In fact, he thinks it’s just a matter of pushing ahead on the path we’re already on. A simple boost in all-round capability also leads to models working for longer without help, he says. He points to the leap from 2020’s GPT-3 to 2023’s GPT-4, two of OpenAI’s previous models. GPT-4 was able to work on a problem for far longer than its predecessor, even without specialized training, he says.  So-called reasoning models brought another bump. Training LLMs to work through problems step by step, backtracking when they make a mistake or hit a dead end, has also made models better at working for longer periods of time. And Pachocki is convinced that OpenAI’s reasoning models will continue to get better. But OpenAI is also training its systems to work by themselves for longer by feeding them specific samples of complex tasks, such as hard puzzles taken from math and coding contests, which force models to learn how to do things like keep track of very large chunks of text and split problems up into (and then manage) multiple subtasks. The aim isn’t to build models that just win math competitions. “That lets you prove that the technology works before you connect it to the real world,” says Pachocki. “If we really wanted to, we could build an amazing automated mathematician, we have all the tools, and I think it would be relatively easy. But it’s not something we’re going to prioritize now because, you know, at the point where you believe you can do it, there’s much more urgent things to do.”

“We are much more focused now on research that’s relevant in the real world,” he adds. Right now that means taking what Codex (and tools like it) can do with coding and trying to apply that to problem-solving in general. “There’s a big change happening, especially in programming,” he says. “Our jobs are now totally different than they were even a year ago. Nobody really edits code all the time anymore. Instead, you manage a group of Codex agents.” If Codex can solve coding problems (the argument goes), it can solve any problem. The line always goes up It’s true that OpenAI has had a handful of remarkable successes in the last few months. Researchers have used GPT-5 (the LLM that powers Codex) to discover new solutions to a number of unsolved math problems and punch through apparent dead ends in a handful of biology, chemistry and physics puzzles.    “Just looking at these models coming up with ideas that would take most PhD weeks, at least, makes me expect that we’ll see much more acceleration coming from this technology in the near future,” Pachocki says. But Pachocki admits that it’s not a done deal. He also understands why some people still have doubts about how much of a game-changer the technology really is. He thinks it depends on how people like to work and what they need to do. “I can believe some people don’t find it very useful yet,” he says. He tells me that he didn’t even use autocomplete—the most basic version of generative coding tech—a year ago himself. “I’m very pedantic about my code,” he says. “I like to type it all manually in vim if I can help it.” (Vim is a text editor favored by many hardcore programmers that you interact with via dozens of keyboard shortcuts instead of a mouse.) But that changed when he saw what the latest models could do. He still wouldn’t hand over complex design tasks, but it’s a time saver when he just wants to try out a few ideas. “I can have it run experiments in a weekend that previously would have taken me like a week to code,” he says. “I don’t think it is at the level where I would just let it take the reins and design the whole thing,” he adds. “But once you see it do something that would take a week to do, I mean that’s hard to argue with.”
Pachocki’s game plan is to supercharge the existing problem-solving abilities that tools like Codex have now and apply them across the sciences.   Downey agrees that the idea of an automated researcher is very cool: “It would be exciting if we could come back tomorrow morning and the agent’s done a bunch of work and there’s new results we can examine,” he says.
But he cautions that building such a system could be harder than Pachocki makes out. Last summer, Downey and his colleagues tested several top-tier LLMs on a range of scientific tasks. OpenAI’s latest model, GPT-5, came out on top but still made lots of errors. “If you have to chain tasks together then the odds that you get several of them right in succession tend to go down,” he says. Downey admits that things move fast and he has not tested the latest versions of GPT-5 (OpenAI released GPT-5.4 two weeks ago). “So those results might already be stale,” he says.  Serious unanswered questions I ask Pachocki about the risks that may come with a system that can solve large, complex problems by itself with little human oversight. Pachocki says people at OpenAI talk about those risks all the time. “If you believe that AI is about to substantially accelerate research, including AI research, that’s a big change in the world, that’s a big thing,” he says. “And it comes with some serious unanswered questions. If it’s so smart and capable, if it can run an entire research program, what if it does something bad?” The way Pachocki sees it, that could happen in a number of ways. The system could go off the rails. It could get hacked. Or it could simply misunderstand its instructions. The best technique OpenAI has right now to address these concerns is to train its reasoning models to share details about what they are doing as they work. This approach to keeping tabs on LLMs is known as chain-of-thought monitoring.
In short, LLMs are trained to jot down notes about what they are doing in a kind of scratchpad as they step through tasks. Researchers can then use those notes to make sure a model is behaving as expected. Yesterday OpenAI published new details on how it is using chain-of-thought monitoring in-house to study Codex.  “Once we get to systems working mostly autonomously for a long time in a big data center, I think this will be something that we’re really going to depend on,” says Pachocki. The idea would be to monitor an AI researcher’s scratchpads using other LLMs and catch unwanted behavior before it’s a problem, rather than stop that bad behavior from happening in the first place. LLMs are not understood well enough to control them fully. “I think it’s going to be a long time before we can really be like, okay, this problem is solved,” he says. “Until you can really trust the systems, you definitely want to have restrictions in place.” Pachocki thinks that very powerful models should be deployed in sandboxes cut off from anything they could break or use to cause harm. 
AI tools have already been used to come up with novel cyberattacks. Some worry that they will be used to design synthetic pathogens that could be used as bioweapons. You can insert any number of evil-scientist scare stories here. “I definitely think there are worrying scenarios that we can imagine,” says Pachocki.  “It’s going to be a very weird thing, it’s extremely concentrated power that’s in some ways unprecedented,” says Pachocki. “Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organisations, would now be done by a couple of people.” “I think this is a big challenge for governments to figure out,” he adds. And yet some people would say governments were part of the problem. The US government wants to use AI on the battlefield, for example. The recent showdown between Anthropic and the Pentagon revealed that there is little agreement across society about where we draw red lines for how this technology should and should not be used—let alone who should draw them. In the immediate aftermath of that dispute, OpenAI stepped up to sign a deal with the Pentagon instead of its rival. The situation remains murky. I push Pachocki on this. Does he really trust other people to figure it out or does he, as a key architect of the future, feel personal responsibility? “I do feel personal responsibility,” he says. “But I don’t think this can be resolved by OpenAI alone, pushing its technology in a particular way or designing its products in a particular way. We’ll definitely need a lot of involvement from policy makers.”

Read More »

Mind-altering substances are (still) falling short in clinical trials

This week I want to look at where we are with psychedelics, the mind-altering substances that have somehow made the leap from counterculture to major focus of clinical research. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. Over the last decade, we’ve seen scientific interest in these drugs explode. But most clinical trials of psychedelics have been small and plagued by challenges. And a lot of the trial results have been underwhelming or inconclusive. Two studies out earlier this week demonstrate just how difficult it is to study these drugs. And to my mind, they also show just how overhyped these substances have become. To some in the field, the hype is not necessarily a bad thing. Let me explain.
The two new studies both focus on the effectiveness of psilocybin in treating depression. And they both attempt to account for one of the biggest challenges in trialing psychedelics: what scientists call “blinding.” The best way to test the effectiveness of a new drug is to perform a randomized controlled trial. In these studies, some volunteers receive the drug while others get a placebo. For a fair comparison, the volunteers shouldn’t know whether they’re getting the drug or placebo.
That is almost impossible to do with psychedelics. Almost anyone can tell whether they’ve taken a dose of psilocybin or a dummy pill. The hallucinations are a dead giveaway. Still, the authors behind the two new studies have tried to overcome this challenge. In one, a team based in Germany gave 144 volunteers with treatment-resistant depression either a high or low dose of psilocybin or an “active” placebo, which has its own physical (but not hallucinatory) effects, along with psychotherapy. In their trial, neither the volunteers nor the investigators knew who was getting the drug. The volunteers who got psilocybin did show some improvement—but it was not significantly any better than the improvement experienced by those who took the placebo. And while those who took psilocybin did have a bigger reduction in their symptoms six weeks later, “the divergence between [the two results] renders the findings inconclusive,” the authors write. Not great news so far. The authors of the second study took a different approach. Balázs Szigeti at UCSF and his colleagues instead looked at what are known as “open label” studies of both psychedelics and traditional antidepressants. In those studies, the volunteers knew when they were getting a psychedelic—but they also knew when they were getting an antidepressant. The team assessed 24 such trials to find that … psychedelics were no more effective than traditional antidepressants. Sad trombone. “When I set up the study, I wanted to be a really cool psychedelic scientist to show that even if you consider this blinding problem, psychedelics are so much better than traditional antidepressants,” says Szigeti. “But unfortunately, the data came out the other way around.” His study highlights another problem, too.

In trials of traditional antidepressant drugs, the placebo effect is pretty strong. Depressive symptoms are often measured using a scale, and in trials, antidepressant drugs typically lower symptoms by around 10 points on that scale. Placebos can lower symptoms by around eight points. When a drug regulator looks at those results, the takeaway is that the antidepressant drug lowers symptoms by an additional two points on the scale, relative to a placebo. But with psychedelics, the difference between active drug and placebo is much greater. That’s partly because people who get the psychedelic drug know they’re getting it and are expecting the drug to improve their symptoms, says David Owens, emeritus professor of clinical psychiatry at the University of Edinburgh, UK. But it’s also partly because of the effect on those who know they’re not getting it. It’s pretty obvious when you’re getting a placebo, says Szigeti, and it can be disappointing. Scientists have long recognized the “nocebo” effect as placebo’s “evil twin”—essentially, when you expect to feel worse, you will. The disappointment of getting a placebo is slightly different, and Szigeti calls it the “knowcebo effect.” “It’s kind of like a negative psychedelic effect, because you have figured out that you’re taking the placebo,” he says. This phenomenon can distort the results of psychedelic drug trials. While a placebo in a traditional antidepressant drug trial improves symptoms by eight points, placebos in psychedelic trials improve symptoms by a mere four points, says Szigeti. If the active drug similarly improves symptoms by around 10 points, that makes it look as though the psychedelic is improving symptoms by around six points compared with a placebo. It “gives the illusion” of a huge effect, says Szigeti. So why have those smaller trials of the past received so much attention? Many have been published in high-end journals, accompanied by breathless press releases and media coverage. Even the inconclusive ones. I’ve often thought that those studies might not have seen the light of day if they’d been investigating any other drug.
“Yeah, nobody would care,” Szigeti agrees. It’s partly because people who work in mental health are so desperate for new treatments, says Owens. There has been little innovation in the last 40 years or so, since the advent of selective serotonin reuptake inhibitors. “Psychiatry is hemmed in with old theories … and we don’t need another SSRI for depression,” he says. But it’s also because psychedelics are inherently fascinating, says Szigeti. “Psychedelics are cool,” he says. “Culturally, they are exciting.”
I’ve often worried that psychedelics are overhyped—that people might get the mistaken impression they are cure-alls for mental-health disorders. I’ve worried that vulnerable people might be harmed by self-experimentation. Szigeti takes a different view. Given how effective we know the placebo effect can be, maybe hype isn’t a totally bad thing, he says. “The placebo response is the expectation of a benefit,” he says. “The better response patients are expecting, the better they’re going to get.” Tempering the hype might end up making those drugs less effective, he says. “At the end of the day, the goal of medicine is to help patients,” he says. “I think most [mental health] patients don’t care whether they feel better because of some expectancy and placebo effects or because of an active drug effect.” Either way, we need to know exactly what these drugs are doing. Maybe they will be able to help some people with depression. Maybe they won’t. Research that acknowledges the pitfalls associated with psychedelic drug trials is essential. “These are potentially exciting times,” says Owens. “But it’s really important we do this [research] well. And that means with eyes wide open.” This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

Energy Department Approves Additional Time to Commence Exports of LNG from Energía Costa Azul

WASHINGTON — U.S. Secretary of Energy Chris Wright today signed an amendment order granting additional time to commence exports of U.S.-sourced natural gas as liquefied natural gas (LNG) from Sempra Energy’s Energía Costa Azul (ECA) Mid-Scale Project that is currently under construction in Baja California, Mexico. Today’s order grants ECA Liquefaction, S. de R.L. de C.V. approximately six additional months to commence exports of U.S.-sourced natural gas as LNG to non-free trade agreement (FTA) countries. Construction at the ECA Mid-Scale Project is nearly complete and ECA expects to commence exports in the near future. The ECA Mid-Scale Project began construction in 2020 and, once completed and operational, will be able to export up to 0.44 billion cubic feet per day (Bcf/d) of natural gas as LNG. A second phase of the project is authorized to export up to 1.74 Bcf/d and is pending a final investment decision. “I am pleased that the Department of Energy can take this action to provide this project the time it needs to complete construction and get U.S.-sourced LNG on the water, particularly given its strategic location on the West Coast of North America,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter. Since the President ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. With recent final investment decisions, U.S. LNG exports are set to more than double from current levels by the early 2030s. 

Read More »

Energy Department Announces Partnership to Ensure Affordable Energy and Power America’s AI Future

WASHINGTON—The U.S. Department of Energy (DOE), alongside the U.S. Department of Commerce (DOC), today announced a unique public-private partnership with SoftBank and AEP Ohio to redevelop DOE land, modernize energy infrastructure, and develop advanced computing in Southern Ohio. As part of the partnership, SB Energy, a SoftBank Group company, is planning to build 10 gigawatts (GW) of new power generation—including 9.2 GW of natural gas generation—that will connect to the local grid and provide power to a new 10 GW data center development at the Portsmouth Site in Pike County, Ohio at no cost to American families. These collective efforts will deliver lower electricity costs across the region, create thousands of American jobs, and strengthen America’s national security.  Portions of this announcement were previously announced as part of President Trump’s U.S.-Japan Strategic Trade and Investment Agreement. This includes the $33.3 billion in Japanese funding for 9.2 GW of new natural gas generation. “Thanks to President Trump, the U.S. government is leveraging its assets—like our federal lands—to add power generation, create jobs, and ensure the United States wins the AI race,” said U.S. Energy Secretary Chris Wright. “I’m pleased to be working with our partners at SoftBank and AEP Ohio on this important project. By bringing new power online and upgrading our existing infrastructure, this investment supports the AI boom and cutting-edge technologies while strengthening our energy system and helping keep costs down for the American people.” “Our Japanese partnership is a direct result of President Trump’s America First trade policies,” said U.S. Commerce Secretary Howard Lutnick. “Japan has committed to invest $550 billion across America. With this historic trade deal we are reindustrializing the country through critical projects like this $33 billion dollar power project in Portsmouth, Ohio. Yesterday we announced additional mega projects in Alabama, Pennsylvania, Tennessee and Texas.”

Read More »

Nvidia overhauls the data center for OpenClaw era

Nvidia’s products for data centers now encompass a full stack with all the pieces, said Sandeep Gupta, executive managing director and head of global strategic alliances at NTT Data. “From a customer perspective, if they believe in an integrated stack, it makes things simple,” Gupta said. The integrated data center cuts complexity and improves efficiency across cooling, networking and storage. “It is driven by the sentiment of an enterprise on how dependent they want to be on one provider versus mix and match,” Gupta said. AI complexity has gone up manifold with multi-agent systems and technologies like OpenClaw, which Huang said is as big a deal as HTML and Linux. Those technologies will generate tokens at an unprecedented pace and strain network, memory and storage simultaneously. AI data also has context, and moving it inefficiently wastes power and cost. A new networking and storage layer is needed to move data intelligently and efficiently. A technology called KV Cache holds the contextual memory necessary for processing agentic AI systems. “It’s going to pound on memory really hard… It’s going to be pounding on the storage system really really hard, which is the reason why we reinvented the storage system,” Huang said. Nvidia’s blueprint turns data centers into one giant AI GPU. It is spearheaded by the GPU known as Rubin and CPU called Vera, which were announced at GTC. Nvidia also slipped in a new inference chip; the Groq LPU has significantly more memory bandwidth than GPUs and is designed for low-latency token generation.

Read More »

The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

Plus: OpenAI is also creating a “super app.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. OpenAI is throwing everything into building a fully automated researcher  OpenAI has a new grand challenge: building an AI researcher—a fully automated agent-based system capable of tackling large, complex problems by itself. The San Francisco firm said the new goal will be its “north star” for the next few years.   By September, the company plans to build “an autonomous AI research intern” that can take on a small number of specific research problems. The intern will be the precursor to the fully automated multi-agent system, which is slated to debut in 2028.  In an exclusive interview this week, OpenAI’s chief scientist, Jakub Pachocki, talked me through the plans. Find out what I discovered. 
—Will Douglas Heaven  Mind-altering substances are (still) falling short in clinical trials  Over the last decade, we’ve seen scientific interest in psychedelic drugs explode. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. But two studies out earlier this week demonstrate just how difficult it is to study these drugs.  
For me, they show just how overhyped these substances have become. Find out why here.  —Jessica Hamzelou  This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Wednesday.  Read more: What do psychedelic drugs do to our brains? AI could help us find out  The must-reads  I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 OpenAI is building a “super app”  It’s merging ChatGPT, a web browser, and a coding tool into a single app. (The Verge) + It’s also buying coding startup Astral to enhance its Codex model. (Ars Technica) + The moves come amid a cutback on side projects. (WSJ $) + OpenAI has lost ground to Anthropic in the enterprise market. (Axios)  2 The US has charged Super Micro’s co-founder with smuggling AI tech to China  Super Micro is third on Fortune’s list of the fastest-growing companies. (Reuters)  + GenAI is learning to spy for the US military. (MIT Technology Review) + The compute competition is shaping the China-US rivalry. (Politico) 

3 The DoJ has taken down botnets behind the largest-ever DDoS attack They had infected more than 3 million devices. (Wired $) + The DoJ has also seized domains tied to Iranian “hacktivists.” (Axios)  4 The Pentagon says Anthropic’s foreign workers are a security risk It cited Chinese employees as a particular concern. (Axios) + Anthropic’s moral boundaries have incensed the DoD. (MIT Technology Review)  5 High oil prices could wreck the AI boom, the WTO has warned Fears are growing of a prolonged energy shock. (The Guardian) + We did the math on AI’s energy footprint. (MIT Technology Review)  6 Jeff Bezos is trying to raise $100 billion to use AI in manufacturing The funds would buy manufacturing firms and infuse them with AI. (WSJ $) + Here’s how to fine-tune AI for prosperity. (MIT Technology Review)  7 Signal’s creator is helping to encrypt Meta’s AI  Moxie Marlinspike is integrating his encrypted chatbot, Confer. (Wired $) + Meta is also ditching human moderators for AI again. (CNBC) + AI is making online crimes easier. (MIT Technology Review)  8 Prediction market Kalshi has raised $1 billion at a $22 billion valuation That’s double its valuation from December. (Bloomberg $) + Arizona’s AG has charged the company with “illegal gambling.” (NPR)  9 Meta isn’t killing Horizon Worlds for VR after all It’s canceled plans to dump the metaverse app (for now). (CNBC)  10 A US startup is recruiting an “AI bully”  The successful candidate must test the patience of leading chatbots. (The Guardian) 
Quote of the day  “Imagine a sports bar… but just for situation monitoring — live X feeds, flight radar, Bloomberg terminals, and Polymarket screens.”  —Kalshi rival Polymarket unveils its hellish vision for a new bar. 
One More Thing  SELMAN DESIGN How gamification took over the world  It’s a thought that occurs to every video-game player at some point: what if the weird, hyper-focused state I enter in virtual worlds could somehow be applied to the real one?  For a handful of consultants, startup gurus, and game designers in the late 2000s, this state of “blissful productivity” became the key to unlocking our true human potential. Their vision became the global phenomenon of gamification—but it didn’t live up to the hype.  Instead of liberating us, gamification became a tool for coercion, distraction, and control. Find out why we fell for it—and how we can recover.  —Bryan Gardiner  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + In a landmark legal win for trolling, Afroman has won his diss track case against the police. + This LEGO artist remixes standard sets into completely different iconic objects. + Ease your search for aliens with these interactive estimates of advanced civilizations.  + A rare superbloom in Death Valley has been caught on camera. 

Read More »

OpenAI is throwing everything into building a fully automated researcher

EXECUTIVE SUMMARY OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. ​​OpenAI says that the new goal will be its “north star” for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability. There’s even a timeline. OpenAI plans to build “an autonomous AI research intern”—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. This AI researcher (OpenAI says) will be able to tackle problems that are too large or complex for humans to cope with. Those tasks might be related to math and physics—such as coming up with new proofs or conjectures—or life sciences like biology and chemistry, or even business and policy dilemmas. In theory, you would throw such a tool any kind of problem that can be formulated in text, code or whiteboard scribbles—which covers a lot. OpenAI has been setting the agenda for the AI industry for years. Its early dominance with large language models shaped the technology that hundreds of millions of people use every day. But it now faces fierce competition from rival model makers like Anthropic and Google DeepMind. What OpenAI decides to build next matters—for itself and for the future of AI.   
A big part of that decision falls to Jakub Pachocki, OpenAI’s chief scientist. Alongside chief research officer Mark Chen, Pachocki is one of two people responsible for setting the company’s long-term research goals. Pachocki played key roles in the development of both GPT-4, a game-changing LLM released in 2023, and so-called reasoning models, a technology that first appeared in 2024 and now underpins all major chatbots and agent-based systems.  In an exclusive interview this week, Pachocki talked me through OpenAI’s new grand challenge. “I think we are getting close to a point where we’ll have models capable of working indefinitely in a coherent way just like people do,” he says. “Of course, you still want people in charge and setting the goals. But I think we will get to a point where you kind of have a whole research lab in a data center.”
Such big claims aren’t new. Saving the world by solving its hardest problems is the stated mission of all the top AI firms. Demis Hassabis told me back in 2022 that it was why he started DeepMind. Anthropic CEO Dario Amodei says he is building the equivalent of a country of geniuses in a data center. Pachocki’s boss, Sam Altman, wants to cure cancer. But Pachocki says OpenAI now has most of what it needs to get there. In January, OpenAI released Codex, an agent-based app that can spin up code on the fly to carry out tasks on your computer. It can analyze documents, generate charts, make you a daily digest of your inbox and social media, and much more. OpenAI claims that most of its technical staff now use Codex in their work. You can look at Codex as a very early version of the AI researcher, says Pachocki: “I expect Codex to get fundamentally better.” The key is to make a system that can run for longer periods of time, with less human guidance. “What we’re really looking at for an automated research intern is a system that you can delegate tasks that would take a person a few days,” says Pachocki. “There are a lot of people excited about building systems that can do more long-running scientific research,” says Doug Downey, a research scientist at the Allen Institute for AI, who is not connected to OpenAI. “I think it’s largely driven by the success of these coding agents. The fact that you can delegate quite substantial coding tasks to tools like Codex is incredibly useful and incredibly impressive. And it raises the question: Can we do similar things outside coding, in broader areas of science?” For Pachocki, that’s a clear Yes. In fact, he thinks it’s just a matter of pushing ahead on the path we’re already on. A simple boost in all-round capability also leads to models working for longer without help, he says. He points to the leap from 2020’s GPT-3 to 2023’s GPT-4, two of OpenAI’s previous models. GPT-4 was able to work on a problem for far longer than its predecessor, even without specialized training, he says.  So-called reasoning models brought another bump. Training LLMs to work through problems step by step, backtracking when they make a mistake or hit a dead end, has also made models better at working for longer periods of time. And Pachocki is convinced that OpenAI’s reasoning models will continue to get better. But OpenAI is also training its systems to work by themselves for longer by feeding them specific samples of complex tasks, such as hard puzzles taken from math and coding contests, which force models to learn how to do things like keep track of very large chunks of text and split problems up into (and then manage) multiple subtasks. The aim isn’t to build models that just win math competitions. “That lets you prove that the technology works before you connect it to the real world,” says Pachocki. “If we really wanted to, we could build an amazing automated mathematician, we have all the tools, and I think it would be relatively easy. But it’s not something we’re going to prioritize now because, you know, at the point where you believe you can do it, there’s much more urgent things to do.”

“We are much more focused now on research that’s relevant in the real world,” he adds. Right now that means taking what Codex (and tools like it) can do with coding and trying to apply that to problem-solving in general. “There’s a big change happening, especially in programming,” he says. “Our jobs are now totally different than they were even a year ago. Nobody really edits code all the time anymore. Instead, you manage a group of Codex agents.” If Codex can solve coding problems (the argument goes), it can solve any problem. The line always goes up It’s true that OpenAI has had a handful of remarkable successes in the last few months. Researchers have used GPT-5 (the LLM that powers Codex) to discover new solutions to a number of unsolved math problems and punch through apparent dead ends in a handful of biology, chemistry and physics puzzles.    “Just looking at these models coming up with ideas that would take most PhD weeks, at least, makes me expect that we’ll see much more acceleration coming from this technology in the near future,” Pachocki says. But Pachocki admits that it’s not a done deal. He also understands why some people still have doubts about how much of a game-changer the technology really is. He thinks it depends on how people like to work and what they need to do. “I can believe some people don’t find it very useful yet,” he says. He tells me that he didn’t even use autocomplete—the most basic version of generative coding tech—a year ago himself. “I’m very pedantic about my code,” he says. “I like to type it all manually in vim if I can help it.” (Vim is a text editor favored by many hardcore programmers that you interact with via dozens of keyboard shortcuts instead of a mouse.) But that changed when he saw what the latest models could do. He still wouldn’t hand over complex design tasks, but it’s a time saver when he just wants to try out a few ideas. “I can have it run experiments in a weekend that previously would have taken me like a week to code,” he says. “I don’t think it is at the level where I would just let it take the reins and design the whole thing,” he adds. “But once you see it do something that would take a week to do, I mean that’s hard to argue with.”
Pachocki’s game plan is to supercharge the existing problem-solving abilities that tools like Codex have now and apply them across the sciences.   Downey agrees that the idea of an automated researcher is very cool: “It would be exciting if we could come back tomorrow morning and the agent’s done a bunch of work and there’s new results we can examine,” he says.
But he cautions that building such a system could be harder than Pachocki makes out. Last summer, Downey and his colleagues tested several top-tier LLMs on a range of scientific tasks. OpenAI’s latest model, GPT-5, came out on top but still made lots of errors. “If you have to chain tasks together then the odds that you get several of them right in succession tend to go down,” he says. Downey admits that things move fast and he has not tested the latest versions of GPT-5 (OpenAI released GPT-5.4 two weeks ago). “So those results might already be stale,” he says.  Serious unanswered questions I ask Pachocki about the risks that may come with a system that can solve large, complex problems by itself with little human oversight. Pachocki says people at OpenAI talk about those risks all the time. “If you believe that AI is about to substantially accelerate research, including AI research, that’s a big change in the world, that’s a big thing,” he says. “And it comes with some serious unanswered questions. If it’s so smart and capable, if it can run an entire research program, what if it does something bad?” The way Pachocki sees it, that could happen in a number of ways. The system could go off the rails. It could get hacked. Or it could simply misunderstand its instructions. The best technique OpenAI has right now to address these concerns is to train its reasoning models to share details about what they are doing as they work. This approach to keeping tabs on LLMs is known as chain-of-thought monitoring.
In short, LLMs are trained to jot down notes about what they are doing in a kind of scratchpad as they step through tasks. Researchers can then use those notes to make sure a model is behaving as expected. Yesterday OpenAI published new details on how it is using chain-of-thought monitoring in-house to study Codex.  “Once we get to systems working mostly autonomously for a long time in a big data center, I think this will be something that we’re really going to depend on,” says Pachocki. The idea would be to monitor an AI researcher’s scratchpads using other LLMs and catch unwanted behavior before it’s a problem, rather than stop that bad behavior from happening in the first place. LLMs are not understood well enough to control them fully. “I think it’s going to be a long time before we can really be like, okay, this problem is solved,” he says. “Until you can really trust the systems, you definitely want to have restrictions in place.” Pachocki thinks that very powerful models should be deployed in sandboxes cut off from anything they could break or use to cause harm. 
AI tools have already been used to come up with novel cyberattacks. Some worry that they will be used to design synthetic pathogens that could be used as bioweapons. You can insert any number of evil-scientist scare stories here. “I definitely think there are worrying scenarios that we can imagine,” says Pachocki.  “It’s going to be a very weird thing, it’s extremely concentrated power that’s in some ways unprecedented,” says Pachocki. “Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organisations, would now be done by a couple of people.” “I think this is a big challenge for governments to figure out,” he adds. And yet some people would say governments were part of the problem. The US government wants to use AI on the battlefield, for example. The recent showdown between Anthropic and the Pentagon revealed that there is little agreement across society about where we draw red lines for how this technology should and should not be used—let alone who should draw them. In the immediate aftermath of that dispute, OpenAI stepped up to sign a deal with the Pentagon instead of its rival. The situation remains murky. I push Pachocki on this. Does he really trust other people to figure it out or does he, as a key architect of the future, feel personal responsibility? “I do feel personal responsibility,” he says. “But I don’t think this can be resolved by OpenAI alone, pushing its technology in a particular way or designing its products in a particular way. We’ll definitely need a lot of involvement from policy makers.”

Read More »

Mind-altering substances are (still) falling short in clinical trials

This week I want to look at where we are with psychedelics, the mind-altering substances that have somehow made the leap from counterculture to major focus of clinical research. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. Over the last decade, we’ve seen scientific interest in these drugs explode. But most clinical trials of psychedelics have been small and plagued by challenges. And a lot of the trial results have been underwhelming or inconclusive. Two studies out earlier this week demonstrate just how difficult it is to study these drugs. And to my mind, they also show just how overhyped these substances have become. To some in the field, the hype is not necessarily a bad thing. Let me explain.
The two new studies both focus on the effectiveness of psilocybin in treating depression. And they both attempt to account for one of the biggest challenges in trialing psychedelics: what scientists call “blinding.” The best way to test the effectiveness of a new drug is to perform a randomized controlled trial. In these studies, some volunteers receive the drug while others get a placebo. For a fair comparison, the volunteers shouldn’t know whether they’re getting the drug or placebo.
That is almost impossible to do with psychedelics. Almost anyone can tell whether they’ve taken a dose of psilocybin or a dummy pill. The hallucinations are a dead giveaway. Still, the authors behind the two new studies have tried to overcome this challenge. In one, a team based in Germany gave 144 volunteers with treatment-resistant depression either a high or low dose of psilocybin or an “active” placebo, which has its own physical (but not hallucinatory) effects, along with psychotherapy. In their trial, neither the volunteers nor the investigators knew who was getting the drug. The volunteers who got psilocybin did show some improvement—but it was not significantly any better than the improvement experienced by those who took the placebo. And while those who took psilocybin did have a bigger reduction in their symptoms six weeks later, “the divergence between [the two results] renders the findings inconclusive,” the authors write. Not great news so far. The authors of the second study took a different approach. Balázs Szigeti at UCSF and his colleagues instead looked at what are known as “open label” studies of both psychedelics and traditional antidepressants. In those studies, the volunteers knew when they were getting a psychedelic—but they also knew when they were getting an antidepressant. The team assessed 24 such trials to find that … psychedelics were no more effective than traditional antidepressants. Sad trombone. “When I set up the study, I wanted to be a really cool psychedelic scientist to show that even if you consider this blinding problem, psychedelics are so much better than traditional antidepressants,” says Szigeti. “But unfortunately, the data came out the other way around.” His study highlights another problem, too.

In trials of traditional antidepressant drugs, the placebo effect is pretty strong. Depressive symptoms are often measured using a scale, and in trials, antidepressant drugs typically lower symptoms by around 10 points on that scale. Placebos can lower symptoms by around eight points. When a drug regulator looks at those results, the takeaway is that the antidepressant drug lowers symptoms by an additional two points on the scale, relative to a placebo. But with psychedelics, the difference between active drug and placebo is much greater. That’s partly because people who get the psychedelic drug know they’re getting it and are expecting the drug to improve their symptoms, says David Owens, emeritus professor of clinical psychiatry at the University of Edinburgh, UK. But it’s also partly because of the effect on those who know they’re not getting it. It’s pretty obvious when you’re getting a placebo, says Szigeti, and it can be disappointing. Scientists have long recognized the “nocebo” effect as placebo’s “evil twin”—essentially, when you expect to feel worse, you will. The disappointment of getting a placebo is slightly different, and Szigeti calls it the “knowcebo effect.” “It’s kind of like a negative psychedelic effect, because you have figured out that you’re taking the placebo,” he says. This phenomenon can distort the results of psychedelic drug trials. While a placebo in a traditional antidepressant drug trial improves symptoms by eight points, placebos in psychedelic trials improve symptoms by a mere four points, says Szigeti. If the active drug similarly improves symptoms by around 10 points, that makes it look as though the psychedelic is improving symptoms by around six points compared with a placebo. It “gives the illusion” of a huge effect, says Szigeti. So why have those smaller trials of the past received so much attention? Many have been published in high-end journals, accompanied by breathless press releases and media coverage. Even the inconclusive ones. I’ve often thought that those studies might not have seen the light of day if they’d been investigating any other drug.
“Yeah, nobody would care,” Szigeti agrees. It’s partly because people who work in mental health are so desperate for new treatments, says Owens. There has been little innovation in the last 40 years or so, since the advent of selective serotonin reuptake inhibitors. “Psychiatry is hemmed in with old theories … and we don’t need another SSRI for depression,” he says. But it’s also because psychedelics are inherently fascinating, says Szigeti. “Psychedelics are cool,” he says. “Culturally, they are exciting.”
I’ve often worried that psychedelics are overhyped—that people might get the mistaken impression they are cure-alls for mental-health disorders. I’ve worried that vulnerable people might be harmed by self-experimentation. Szigeti takes a different view. Given how effective we know the placebo effect can be, maybe hype isn’t a totally bad thing, he says. “The placebo response is the expectation of a benefit,” he says. “The better response patients are expecting, the better they’re going to get.” Tempering the hype might end up making those drugs less effective, he says. “At the end of the day, the goal of medicine is to help patients,” he says. “I think most [mental health] patients don’t care whether they feel better because of some expectancy and placebo effects or because of an active drug effect.” Either way, we need to know exactly what these drugs are doing. Maybe they will be able to help some people with depression. Maybe they won’t. Research that acknowledges the pitfalls associated with psychedelic drug trials is essential. “These are potentially exciting times,” says Owens. “But it’s really important we do this [research] well. And that means with eyes wide open.” This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

Energy Department Approves Additional Time to Commence Exports of LNG from Energía Costa Azul

WASHINGTON — U.S. Secretary of Energy Chris Wright today signed an amendment order granting additional time to commence exports of U.S.-sourced natural gas as liquefied natural gas (LNG) from Sempra Energy’s Energía Costa Azul (ECA) Mid-Scale Project that is currently under construction in Baja California, Mexico. Today’s order grants ECA Liquefaction, S. de R.L. de C.V. approximately six additional months to commence exports of U.S.-sourced natural gas as LNG to non-free trade agreement (FTA) countries. Construction at the ECA Mid-Scale Project is nearly complete and ECA expects to commence exports in the near future. The ECA Mid-Scale Project began construction in 2020 and, once completed and operational, will be able to export up to 0.44 billion cubic feet per day (Bcf/d) of natural gas as LNG. A second phase of the project is authorized to export up to 1.74 Bcf/d and is pending a final investment decision. “I am pleased that the Department of Energy can take this action to provide this project the time it needs to complete construction and get U.S.-sourced LNG on the water, particularly given its strategic location on the West Coast of North America,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter. Since the President ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. With recent final investment decisions, U.S. LNG exports are set to more than double from current levels by the early 2030s. 

Read More »

Energy Department Announces Partnership to Ensure Affordable Energy and Power America’s AI Future

WASHINGTON—The U.S. Department of Energy (DOE), alongside the U.S. Department of Commerce (DOC), today announced a unique public-private partnership with SoftBank and AEP Ohio to redevelop DOE land, modernize energy infrastructure, and develop advanced computing in Southern Ohio. As part of the partnership, SB Energy, a SoftBank Group company, is planning to build 10 gigawatts (GW) of new power generation—including 9.2 GW of natural gas generation—that will connect to the local grid and provide power to a new 10 GW data center development at the Portsmouth Site in Pike County, Ohio at no cost to American families. These collective efforts will deliver lower electricity costs across the region, create thousands of American jobs, and strengthen America’s national security.  Portions of this announcement were previously announced as part of President Trump’s U.S.-Japan Strategic Trade and Investment Agreement. This includes the $33.3 billion in Japanese funding for 9.2 GW of new natural gas generation. “Thanks to President Trump, the U.S. government is leveraging its assets—like our federal lands—to add power generation, create jobs, and ensure the United States wins the AI race,” said U.S. Energy Secretary Chris Wright. “I’m pleased to be working with our partners at SoftBank and AEP Ohio on this important project. By bringing new power online and upgrading our existing infrastructure, this investment supports the AI boom and cutting-edge technologies while strengthening our energy system and helping keep costs down for the American people.” “Our Japanese partnership is a direct result of President Trump’s America First trade policies,” said U.S. Commerce Secretary Howard Lutnick. “Japan has committed to invest $550 billion across America. With this historic trade deal we are reindustrializing the country through critical projects like this $33 billion dollar power project in Portsmouth, Ohio. Yesterday we announced additional mega projects in Alabama, Pennsylvania, Tennessee and Texas.”

Read More »

Energy Department Announces $500 Million to Strengthen Domestic Critical Materials Processing and Manufacturing

 Funding will expand domestic manufacturing of battery supply chains for defense, grid resilience, transportation, manufacturing and other industries WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Critical Minerals and Energy Innovation (CMEI) today announced a Notice of Funding Opportunity (NOFO) for up to $500 million to expand U.S. critical mineral and materials processing and derivative battery manufacturing and recycling. Assistant Secretary of Energy (EERE) Audrey Robertson is currently in Japan meeting with regional allies at the Indo-Pacific Energy Security Ministerial and Business Forum (IPEM) to advance shared efforts on supply chain resilience and energy security issues. Her engagements at IPEM underscore the importance of close cooperation with partners as the United States strengthens its supply chain through this NOFO. “For too long, the United States has relied on hostile foreign actors to supply and process the critical materials that are essential in battery manufacturing and materials processing,” said U.S. Energy Secretary Chris Wright. “Thanks to President Trump’s leadership, the Department of Energy is playing a leading role in strengthening these domestic industries that will position the U.S. to win the AI race, meeting rising energy demand, and achieve energy dominance.” “I am delighted to be in Japan meeting with our allies, underscoring the important connection between critical materials and energy security,” said Assistant Secretary of Energy (EERE) Audrey Robertson. “Critical minerals processing is a vital component of our nation’s critical minerals supply base. Boosting domestic production, including through recycling, will bolster national security and ensure the United States and our partners are prepared to meet the energy challenges of the 21st century.” Funding awarded through this NOFO will support demonstration and/or commercial facilities for processing, recycling, or utilizing for manufacturing of critical materials which may include traditional battery minerals such as lithium, graphite, nickel, copper, aluminum, as well as other

Read More »

Energy Department Announces $293 Million in Funding to Support Genesis Mission National Science and Technology Challenges

WASHINGTON—The U.S. Department of Energy (DOE) today announced funding to advance the Genesis Mission’s efforts to tackle the nation’s most complex science and technology challenges. This includes a $293 million Request for Application (RFA),“The Genesis Mission: Transforming Science and Energy with AI.” Through this RFA, DOE invites interdisciplinary teams to leverage novel AI models and frameworks to address over 20 national challenges spanning advanced manufacturing, biotechnology, critical materials, nuclear energy, and quantum information science.    “The Genesis Mission has caught the imagination of our scientific and engineering communities to tackle national challenges in the age of AI,” said Under Secretary for Science Darío Gil and Genesis Mission Director. “With these investments we seek breakthrough ideas and novel collaborations leveraging the scientific prowess of our National Laboratories, the private sector, universities, and science philanthropies.”  The RFA is open to interdisciplinary teams from DOE National Laboratories, U.S. industry, and academia. Phase I awards will range from $500,000 to $750,000 and will support a nine month project period. Phase II awards will range from $6 million to $15 million over a three year project period. Teams may apply directly to either phase in FY 2026, and successful Phase I teams will be eligible to compete for larger Phase II awards in future cycles. Phase I applications and Phase II letters of intent are due April 28, 2026. Phase II applications are due May 19, 2026. DOE plans to hold an informational webinar about this RFA on March 26, 2026.  For full eligibility, application instructions, and challenge details, see the official NOFO: DE-FOA-0003612. Registration instructions and other details will be posted here.  ### 

Read More »

Trump Administration Keeps Coal Plant Open to Ensure Affordable, Reliable and Secure Power in the Northwest

Emergency order addresses critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access. WASHINGTON—U.S. Secretary of Energy Chris Wright today issued an emergency order to ensure Americans in the Northwestern region of the United States have access to affordable, reliable and secure electricity. The order directs TransAlta to keep Unit 2 of the Centralia Generating Station in Centralia, Washington available to operate. Unit 2 of the coal plant was scheduled to shut down at the end of 2025. The reliable supply of power from the Centralia plant is essential to maintaining grid stability across the Northwest, and this order ensures that the region avoids unnecessary blackout risks and costs. “The last administration’s energy subtraction policies had the United States on track to likely experience significantly more blackouts in the coming years — thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump administration will continue taking action to keep America’s coal plants running so we can stop the price spikes and ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” Thanks to President Trump’s leadership, coal plants across the country are reversing plans to shut down. On December 16, 2025, Secretary Wright issued an emergency order directing TransAlta to keep Unit 2 (729.9 MW) available to operate.According to DOE’s Resource Adequacy Report, blackouts were on track to potentially increase 100 times by 2030 if the U.S. continued to take reliable power offline as it did during the Biden administration. This order is in effect beginning on March 17, 2026, through June 14, 2026. ### 

Read More »

Brent retreats from highs after Trump signals Iran war nearing end

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } Oil futures eased from recent highs Tuesday as markets reacted to comments from US President Donald Trump suggesting the war with Iran may be nearing its conclusion, easing concerns about prolonged disruptions to Middle East crude supplies. Brent crude had climbed above $100/bbl amid escalating tensions in the region and fears that the war could prolong disruptions to shipments through the Strait of Hormuz—one of the world’s most critical energy chokepoints and a transit route for roughly one-fifth of global oil supply. Prices pulled back after Pres. Trump said the war was “almost done,” prompting traders to reassess the risk premium that had built into crude markets during the latest escalation. The earlier gains were driven by the fact that the war had disrupted tanker traffic in the Strait of Hormuz, raising concerns about wider supply disruptions from major Gulf oil producers. While the latest remarks helped calm markets, analysts note that geopolitical risks remain elevated and price volatility is likely to persist as traders monitor developments in the region. Any renewed escalation could quickly send crude prices higher again.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell the buildings as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London headquartered EEH Ventures was founded in 2013 and owns a number of residential, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. © Supplied by CBREThe Aberdeen headquarters of Taqa. Image: CBRE The North Sea headquarters of Middle-East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029 blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. © ShutterstockAberdeen city centre. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

Read More »

2025 ransomware predictions, trends, and how to prepare

Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025: ● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices; which sound more and more realistic by adopting local accents and dialects to enhance credibility and success rates. ● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks; preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny. ● Targeted Industries Under Siege: Manufacturing, healthcare, education, energy will remain primary targets, with no slowdown in attacks expected. ● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days. ● Ransomware Payouts Are on the Rise: In 2025 ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups, specializing in designated attack tactics, and collaboration by these groups that have entered a sophisticated profit sharing model using Ransomware-as-a-Service. To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies. ● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Read More »

The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

Plus: OpenAI is also creating a “super app.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. OpenAI is throwing everything into building a fully automated researcher  OpenAI has a new grand challenge: building an AI researcher—a fully automated agent-based system capable of tackling large, complex problems by itself. The San Francisco firm said the new goal will be its “north star” for the next few years.   By September, the company plans to build “an autonomous AI research intern” that can take on a small number of specific research problems. The intern will be the precursor to the fully automated multi-agent system, which is slated to debut in 2028.  In an exclusive interview this week, OpenAI’s chief scientist, Jakub Pachocki, talked me through the plans. Find out what I discovered. 
—Will Douglas Heaven  Mind-altering substances are (still) falling short in clinical trials  Over the last decade, we’ve seen scientific interest in psychedelic drugs explode. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. But two studies out earlier this week demonstrate just how difficult it is to study these drugs.  
For me, they show just how overhyped these substances have become. Find out why here.  —Jessica Hamzelou  This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Wednesday.  Read more: What do psychedelic drugs do to our brains? AI could help us find out  The must-reads  I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 OpenAI is building a “super app”  It’s merging ChatGPT, a web browser, and a coding tool into a single app. (The Verge) + It’s also buying coding startup Astral to enhance its Codex model. (Ars Technica) + The moves come amid a cutback on side projects. (WSJ $) + OpenAI has lost ground to Anthropic in the enterprise market. (Axios)  2 The US has charged Super Micro’s co-founder with smuggling AI tech to China  Super Micro is third on Fortune’s list of the fastest-growing companies. (Reuters)  + GenAI is learning to spy for the US military. (MIT Technology Review) + The compute competition is shaping the China-US rivalry. (Politico) 

3 The DoJ has taken down botnets behind the largest-ever DDoS attack They had infected more than 3 million devices. (Wired $) + The DoJ has also seized domains tied to Iranian “hacktivists.” (Axios)  4 The Pentagon says Anthropic’s foreign workers are a security risk It cited Chinese employees as a particular concern. (Axios) + Anthropic’s moral boundaries have incensed the DoD. (MIT Technology Review)  5 High oil prices could wreck the AI boom, the WTO has warned Fears are growing of a prolonged energy shock. (The Guardian) + We did the math on AI’s energy footprint. (MIT Technology Review)  6 Jeff Bezos is trying to raise $100 billion to use AI in manufacturing The funds would buy manufacturing firms and infuse them with AI. (WSJ $) + Here’s how to fine-tune AI for prosperity. (MIT Technology Review)  7 Signal’s creator is helping to encrypt Meta’s AI  Moxie Marlinspike is integrating his encrypted chatbot, Confer. (Wired $) + Meta is also ditching human moderators for AI again. (CNBC) + AI is making online crimes easier. (MIT Technology Review)  8 Prediction market Kalshi has raised $1 billion at a $22 billion valuation That’s double its valuation from December. (Bloomberg $) + Arizona’s AG has charged the company with “illegal gambling.” (NPR)  9 Meta isn’t killing Horizon Worlds for VR after all It’s canceled plans to dump the metaverse app (for now). (CNBC)  10 A US startup is recruiting an “AI bully”  The successful candidate must test the patience of leading chatbots. (The Guardian) 
Quote of the day  “Imagine a sports bar… but just for situation monitoring — live X feeds, flight radar, Bloomberg terminals, and Polymarket screens.”  —Kalshi rival Polymarket unveils its hellish vision for a new bar. 
One More Thing  SELMAN DESIGN How gamification took over the world  It’s a thought that occurs to every video-game player at some point: what if the weird, hyper-focused state I enter in virtual worlds could somehow be applied to the real one?  For a handful of consultants, startup gurus, and game designers in the late 2000s, this state of “blissful productivity” became the key to unlocking our true human potential. Their vision became the global phenomenon of gamification—but it didn’t live up to the hype.  Instead of liberating us, gamification became a tool for coercion, distraction, and control. Find out why we fell for it—and how we can recover.  —Bryan Gardiner  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + In a landmark legal win for trolling, Afroman has won his diss track case against the police. + This LEGO artist remixes standard sets into completely different iconic objects. + Ease your search for aliens with these interactive estimates of advanced civilizations.  + A rare superbloom in Death Valley has been caught on camera. 

Read More »

OpenAI is throwing everything into building a fully automated researcher

EXECUTIVE SUMMARY OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. ​​OpenAI says that the new goal will be its “north star” for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability. There’s even a timeline. OpenAI plans to build “an autonomous AI research intern”—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. This AI researcher (OpenAI says) will be able to tackle problems that are too large or complex for humans to cope with. Those tasks might be related to math and physics—such as coming up with new proofs or conjectures—or life sciences like biology and chemistry, or even business and policy dilemmas. In theory, you would throw such a tool any kind of problem that can be formulated in text, code or whiteboard scribbles—which covers a lot. OpenAI has been setting the agenda for the AI industry for years. Its early dominance with large language models shaped the technology that hundreds of millions of people use every day. But it now faces fierce competition from rival model makers like Anthropic and Google DeepMind. What OpenAI decides to build next matters—for itself and for the future of AI.   
A big part of that decision falls to Jakub Pachocki, OpenAI’s chief scientist. Alongside chief research officer Mark Chen, Pachocki is one of two people responsible for setting the company’s long-term research goals. Pachocki played key roles in the development of both GPT-4, a game-changing LLM released in 2023, and so-called reasoning models, a technology that first appeared in 2024 and now underpins all major chatbots and agent-based systems.  In an exclusive interview this week, Pachocki talked me through OpenAI’s new grand challenge. “I think we are getting close to a point where we’ll have models capable of working indefinitely in a coherent way just like people do,” he says. “Of course, you still want people in charge and setting the goals. But I think we will get to a point where you kind of have a whole research lab in a data center.”
Such big claims aren’t new. Saving the world by solving its hardest problems is the stated mission of all the top AI firms. Demis Hassabis told me back in 2022 that it was why he started DeepMind. Anthropic CEO Dario Amodei says he is building the equivalent of a country of geniuses in a data center. Pachocki’s boss, Sam Altman, wants to cure cancer. But Pachocki says OpenAI now has most of what it needs to get there. In January, OpenAI released Codex, an agent-based app that can spin up code on the fly to carry out tasks on your computer. It can analyze documents, generate charts, make you a daily digest of your inbox and social media, and much more. OpenAI claims that most of its technical staff now use Codex in their work. You can look at Codex as a very early version of the AI researcher, says Pachocki: “I expect Codex to get fundamentally better.” The key is to make a system that can run for longer periods of time, with less human guidance. “What we’re really looking at for an automated research intern is a system that you can delegate tasks that would take a person a few days,” says Pachocki. “There are a lot of people excited about building systems that can do more long-running scientific research,” says Doug Downey, a research scientist at the Allen Institute for AI, who is not connected to OpenAI. “I think it’s largely driven by the success of these coding agents. The fact that you can delegate quite substantial coding tasks to tools like Codex is incredibly useful and incredibly impressive. And it raises the question: Can we do similar things outside coding, in broader areas of science?” For Pachocki, that’s a clear Yes. In fact, he thinks it’s just a matter of pushing ahead on the path we’re already on. A simple boost in all-round capability also leads to models working for longer without help, he says. He points to the leap from 2020’s GPT-3 to 2023’s GPT-4, two of OpenAI’s previous models. GPT-4 was able to work on a problem for far longer than its predecessor, even without specialized training, he says.  So-called reasoning models brought another bump. Training LLMs to work through problems step by step, backtracking when they make a mistake or hit a dead end, has also made models better at working for longer periods of time. And Pachocki is convinced that OpenAI’s reasoning models will continue to get better. But OpenAI is also training its systems to work by themselves for longer by feeding them specific samples of complex tasks, such as hard puzzles taken from math and coding contests, which force models to learn how to do things like keep track of very large chunks of text and split problems up into (and then manage) multiple subtasks. The aim isn’t to build models that just win math competitions. “That lets you prove that the technology works before you connect it to the real world,” says Pachocki. “If we really wanted to, we could build an amazing automated mathematician, we have all the tools, and I think it would be relatively easy. But it’s not something we’re going to prioritize now because, you know, at the point where you believe you can do it, there’s much more urgent things to do.”

“We are much more focused now on research that’s relevant in the real world,” he adds. Right now that means taking what Codex (and tools like it) can do with coding and trying to apply that to problem-solving in general. “There’s a big change happening, especially in programming,” he says. “Our jobs are now totally different than they were even a year ago. Nobody really edits code all the time anymore. Instead, you manage a group of Codex agents.” If Codex can solve coding problems (the argument goes), it can solve any problem. The line always goes up It’s true that OpenAI has had a handful of remarkable successes in the last few months. Researchers have used GPT-5 (the LLM that powers Codex) to discover new solutions to a number of unsolved math problems and punch through apparent dead ends in a handful of biology, chemistry and physics puzzles.    “Just looking at these models coming up with ideas that would take most PhD weeks, at least, makes me expect that we’ll see much more acceleration coming from this technology in the near future,” Pachocki says. But Pachocki admits that it’s not a done deal. He also understands why some people still have doubts about how much of a game-changer the technology really is. He thinks it depends on how people like to work and what they need to do. “I can believe some people don’t find it very useful yet,” he says. He tells me that he didn’t even use autocomplete—the most basic version of generative coding tech—a year ago himself. “I’m very pedantic about my code,” he says. “I like to type it all manually in vim if I can help it.” (Vim is a text editor favored by many hardcore programmers that you interact with via dozens of keyboard shortcuts instead of a mouse.) But that changed when he saw what the latest models could do. He still wouldn’t hand over complex design tasks, but it’s a time saver when he just wants to try out a few ideas. “I can have it run experiments in a weekend that previously would have taken me like a week to code,” he says. “I don’t think it is at the level where I would just let it take the reins and design the whole thing,” he adds. “But once you see it do something that would take a week to do, I mean that’s hard to argue with.”
Pachocki’s game plan is to supercharge the existing problem-solving abilities that tools like Codex have now and apply them across the sciences.   Downey agrees that the idea of an automated researcher is very cool: “It would be exciting if we could come back tomorrow morning and the agent’s done a bunch of work and there’s new results we can examine,” he says.
But he cautions that building such a system could be harder than Pachocki makes out. Last summer, Downey and his colleagues tested several top-tier LLMs on a range of scientific tasks. OpenAI’s latest model, GPT-5, came out on top but still made lots of errors. “If you have to chain tasks together then the odds that you get several of them right in succession tend to go down,” he says. Downey admits that things move fast and he has not tested the latest versions of GPT-5 (OpenAI released GPT-5.4 two weeks ago). “So those results might already be stale,” he says.  Serious unanswered questions I ask Pachocki about the risks that may come with a system that can solve large, complex problems by itself with little human oversight. Pachocki says people at OpenAI talk about those risks all the time. “If you believe that AI is about to substantially accelerate research, including AI research, that’s a big change in the world, that’s a big thing,” he says. “And it comes with some serious unanswered questions. If it’s so smart and capable, if it can run an entire research program, what if it does something bad?” The way Pachocki sees it, that could happen in a number of ways. The system could go off the rails. It could get hacked. Or it could simply misunderstand its instructions. The best technique OpenAI has right now to address these concerns is to train its reasoning models to share details about what they are doing as they work. This approach to keeping tabs on LLMs is known as chain-of-thought monitoring.
In short, LLMs are trained to jot down notes about what they are doing in a kind of scratchpad as they step through tasks. Researchers can then use those notes to make sure a model is behaving as expected. Yesterday OpenAI published new details on how it is using chain-of-thought monitoring in-house to study Codex.  “Once we get to systems working mostly autonomously for a long time in a big data center, I think this will be something that we’re really going to depend on,” says Pachocki. The idea would be to monitor an AI researcher’s scratchpads using other LLMs and catch unwanted behavior before it’s a problem, rather than stop that bad behavior from happening in the first place. LLMs are not understood well enough to control them fully. “I think it’s going to be a long time before we can really be like, okay, this problem is solved,” he says. “Until you can really trust the systems, you definitely want to have restrictions in place.” Pachocki thinks that very powerful models should be deployed in sandboxes cut off from anything they could break or use to cause harm. 
AI tools have already been used to come up with novel cyberattacks. Some worry that they will be used to design synthetic pathogens that could be used as bioweapons. You can insert any number of evil-scientist scare stories here. “I definitely think there are worrying scenarios that we can imagine,” says Pachocki.  “It’s going to be a very weird thing, it’s extremely concentrated power that’s in some ways unprecedented,” says Pachocki. “Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organisations, would now be done by a couple of people.” “I think this is a big challenge for governments to figure out,” he adds. And yet some people would say governments were part of the problem. The US government wants to use AI on the battlefield, for example. The recent showdown between Anthropic and the Pentagon revealed that there is little agreement across society about where we draw red lines for how this technology should and should not be used—let alone who should draw them. In the immediate aftermath of that dispute, OpenAI stepped up to sign a deal with the Pentagon instead of its rival. The situation remains murky. I push Pachocki on this. Does he really trust other people to figure it out or does he, as a key architect of the future, feel personal responsibility? “I do feel personal responsibility,” he says. “But I don’t think this can be resolved by OpenAI alone, pushing its technology in a particular way or designing its products in a particular way. We’ll definitely need a lot of involvement from policy makers.”

Read More »

Mind-altering substances are (still) falling short in clinical trials

This week I want to look at where we are with psychedelics, the mind-altering substances that have somehow made the leap from counterculture to major focus of clinical research. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. Over the last decade, we’ve seen scientific interest in these drugs explode. But most clinical trials of psychedelics have been small and plagued by challenges. And a lot of the trial results have been underwhelming or inconclusive. Two studies out earlier this week demonstrate just how difficult it is to study these drugs. And to my mind, they also show just how overhyped these substances have become. To some in the field, the hype is not necessarily a bad thing. Let me explain.
The two new studies both focus on the effectiveness of psilocybin in treating depression. And they both attempt to account for one of the biggest challenges in trialing psychedelics: what scientists call “blinding.” The best way to test the effectiveness of a new drug is to perform a randomized controlled trial. In these studies, some volunteers receive the drug while others get a placebo. For a fair comparison, the volunteers shouldn’t know whether they’re getting the drug or placebo.
That is almost impossible to do with psychedelics. Almost anyone can tell whether they’ve taken a dose of psilocybin or a dummy pill. The hallucinations are a dead giveaway. Still, the authors behind the two new studies have tried to overcome this challenge. In one, a team based in Germany gave 144 volunteers with treatment-resistant depression either a high or low dose of psilocybin or an “active” placebo, which has its own physical (but not hallucinatory) effects, along with psychotherapy. In their trial, neither the volunteers nor the investigators knew who was getting the drug. The volunteers who got psilocybin did show some improvement—but it was not significantly any better than the improvement experienced by those who took the placebo. And while those who took psilocybin did have a bigger reduction in their symptoms six weeks later, “the divergence between [the two results] renders the findings inconclusive,” the authors write. Not great news so far. The authors of the second study took a different approach. Balázs Szigeti at UCSF and his colleagues instead looked at what are known as “open label” studies of both psychedelics and traditional antidepressants. In those studies, the volunteers knew when they were getting a psychedelic—but they also knew when they were getting an antidepressant. The team assessed 24 such trials to find that … psychedelics were no more effective than traditional antidepressants. Sad trombone. “When I set up the study, I wanted to be a really cool psychedelic scientist to show that even if you consider this blinding problem, psychedelics are so much better than traditional antidepressants,” says Szigeti. “But unfortunately, the data came out the other way around.” His study highlights another problem, too.

In trials of traditional antidepressant drugs, the placebo effect is pretty strong. Depressive symptoms are often measured using a scale, and in trials, antidepressant drugs typically lower symptoms by around 10 points on that scale. Placebos can lower symptoms by around eight points. When a drug regulator looks at those results, the takeaway is that the antidepressant drug lowers symptoms by an additional two points on the scale, relative to a placebo. But with psychedelics, the difference between active drug and placebo is much greater. That’s partly because people who get the psychedelic drug know they’re getting it and are expecting the drug to improve their symptoms, says David Owens, emeritus professor of clinical psychiatry at the University of Edinburgh, UK. But it’s also partly because of the effect on those who know they’re not getting it. It’s pretty obvious when you’re getting a placebo, says Szigeti, and it can be disappointing. Scientists have long recognized the “nocebo” effect as placebo’s “evil twin”—essentially, when you expect to feel worse, you will. The disappointment of getting a placebo is slightly different, and Szigeti calls it the “knowcebo effect.” “It’s kind of like a negative psychedelic effect, because you have figured out that you’re taking the placebo,” he says. This phenomenon can distort the results of psychedelic drug trials. While a placebo in a traditional antidepressant drug trial improves symptoms by eight points, placebos in psychedelic trials improve symptoms by a mere four points, says Szigeti. If the active drug similarly improves symptoms by around 10 points, that makes it look as though the psychedelic is improving symptoms by around six points compared with a placebo. It “gives the illusion” of a huge effect, says Szigeti. So why have those smaller trials of the past received so much attention? Many have been published in high-end journals, accompanied by breathless press releases and media coverage. Even the inconclusive ones. I’ve often thought that those studies might not have seen the light of day if they’d been investigating any other drug.
“Yeah, nobody would care,” Szigeti agrees. It’s partly because people who work in mental health are so desperate for new treatments, says Owens. There has been little innovation in the last 40 years or so, since the advent of selective serotonin reuptake inhibitors. “Psychiatry is hemmed in with old theories … and we don’t need another SSRI for depression,” he says. But it’s also because psychedelics are inherently fascinating, says Szigeti. “Psychedelics are cool,” he says. “Culturally, they are exciting.”
I’ve often worried that psychedelics are overhyped—that people might get the mistaken impression they are cure-alls for mental-health disorders. I’ve worried that vulnerable people might be harmed by self-experimentation. Szigeti takes a different view. Given how effective we know the placebo effect can be, maybe hype isn’t a totally bad thing, he says. “The placebo response is the expectation of a benefit,” he says. “The better response patients are expecting, the better they’re going to get.” Tempering the hype might end up making those drugs less effective, he says. “At the end of the day, the goal of medicine is to help patients,” he says. “I think most [mental health] patients don’t care whether they feel better because of some expectancy and placebo effects or because of an active drug effect.” Either way, we need to know exactly what these drugs are doing. Maybe they will be able to help some people with depression. Maybe they won’t. Research that acknowledges the pitfalls associated with psychedelic drug trials is essential. “These are potentially exciting times,” says Owens. “But it’s really important we do this [research] well. And that means with eyes wide open.” This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

The Download: Quantum computing for health, and why the world doesn’t recycle more nuclear waste

Plus: The FBI has admitted it’s buying Americans’ location data.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. A $5 million prize awaits proof that quantum computers can solve health care problems  In a laboratory on the outskirts of Oxford, a quantum computer built from atoms and light awaits its moment. The device is small but powerful—and also very valuable. Infleqtion, the company that owns it, is hoping its abilities will win $5 million at a competition next week.  The prize will go to the quantum computer that can solve real health care problems that conventional “classical” computers are unable to solve. But there can be only one big winner—if there is a winner at all. Read the full story.  —Michael Brooks 
Why the world doesn’t recycle more nuclear waste  There’s still a lot of usable uranium in spent nuclear fuel when it’s pulled out of reactors. Recycling could reduce both the waste and the need to mine new material, but the process is costly, complicated, and not fully efficient.  Find out why it’s such an issue. —Casey Crownhart 
This story is from The Spark, MIT Technology Review’s weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.  The must-reads  I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 The FBI has confirmed it’s buying Americans’ location data  Director Kash Patel said it’s led to “valuable intelligence.” (Politico) + What AI “remembers” about you is privacy’s next frontier. (MIT Technology Review)  2 The first draft of a federal AI bill has been introduced It aims to protect “children, creators, conservatives, and communities.” (Engadget) + A war is brewing over AI regulation in the US. (MIT Technology Review)    3 Google is pitching itself to the Pentagon as the perfect defense partner It’s framing its AI as a safe alternative to OpenAI and Anthropic. (NYT $) + Here’s where OpenAI’s tech could show up in Iran. (MIT Technology Review)  4 A rogue AI agent at Meta leaked sensitive information to employees The exposure lasted for hours before it was contained. (The Information $) + Don’t let AI agent hype get ahead of reality. (MIT Technology Review $)  5 Sony just removed 135,000 ‘deepfakes’ of its music Fraudsters were impersonating the label’s artists on streaming services. (BBC) + AI works better as a collaborator than a creator. (MIT Technology Review)  6 The EU has backed a ban on nonconsensual sexualized deepfakes It has reacted to Elon Musk’s Grok chatbot “nudifying” children. (Bloomberg $) 

7 Two quantum cryptography pioneers have won the Turing Award Their encryption method can (theoretically) never be broken. (Quanta)  8 Gamers are disgusted by Nvidia’s new rendering model  They’ve labeled it an “AI slop filter.” (The Verge)  9 The White House has registered the aliens.gov domain It’s sparked speculation that Trump’s long-awaited UFO disclosure is imminent. (404 Media) + Meet the new biologists treating LLMs like ETs. (MIT Technology Review)  10 Silicon Valley has embraced a new buzzword: “taste” As a USP amid the deluge of AI-driven recommendations. (The New Yorker $)  Quote of the day  “Big tech and China win. The rest of us lose.”  —Elizabeth Warren gives her take on the Trump administration allowing Nvidia to sell advanced chips to China.  One More Thing  PSIQUANTUM Useful quantum computing is inevitable—and increasingly imminent  Last year, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away. He also suggested that those computers would need Nvidia GPUs to function. But Huang’s predictions miss the mark—both on the timeline and the role his company’s technology will play.  
Quantum computing is rapidly converging on utility. And that’s good news, because the hope is that they will be able to perform calculations that no amount of AI or classical computation could ever achieve. Read the full story.  —Peter Barrett 
We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + A self-described “mad scientist” has powered a car with vape batteries. + Someone squeezed an Apple Mac Mini inside a classic LEGO computer. + Watch thousands of satellites orbit Earth in real-time with this mesmerizing interactive map. + This grilled wall cheese art looks good enough to eat.  

Read More »

A $5 million prize awaits proof that quantum computers can solve health care problems

EXECUTIVE SUMMARY I’m standing in front of a quantum computer built out of atoms and light at the UK’s National Quantum Computing Centre on the outskirts of Oxford. On a laboratory table, a complex matrix of mirrors and lenses surrounds a Rubik’s Cube–size cell where 100 cesium atoms are suspended in grid formation by a carefully manipulated laser beam.  The cesium atom setup is so compact that I could pick it up, carry it out of the lab, and put it on the backseat of my car to take home. I’d be unlikely to get very far, though. It’s small but powerful—and so it’s very valuable. Infleqtion, the Colorado-based company that owns it, is hoping the machine’s abilities will win $5 million next week, at an event to be held in Marina del Rey, California.  Infleqtion is one of six teams that have made it to the final stage of a 30-month-long quantum computing competition called Quantum for Bio (Q4Bio). Run by the nonprofit Wellcome Leap, it aims to show that today’s quantum computers, though messy and error-prone and far from the large-scale machines engineers hope to build, could actually benefit human health. Success would be a significant step forward in proving the worth of quantum computers. But for now, it turns out, that worth seems to be linked to harnessing and improving the performance of conventional (also called classical) computers in tandem, creating a quantum-classical hybrid that can exceed what’s possible on classical machines by themselves. There are two prize categories. A prize of $2 million will go to any and all teams that can run a significantly useful health care algorithm on computers with 50 or more qubits (a qubit is the basic processing unit in a quantum computer). To win the $5 million grand prize, a team must successfully run a quantum algorithm that solves a significant real-world problem in health care, and the work must use 100 or more qubits. Winners have to meet strict performance criteria, and they must solve a health care problem that can’t be solved with conventional computers—a tough task.
Despite the scale of the challenge, most of the teams think some of this money could be theirs. “I think we’re in with a good shout,” says Jonathan D. Hirst, a computational chemist at the University of Nottingham, UK. “We’re very firmly within the criteria for the $2 million prize,” says Stanford University’s Grant Rotskoff, whose collaboration is investigating the quantum properties of the ATP molecule that powers biological cells.  The grand prize is perhaps less of a sure thing. “This is really at the very edge of doable,” Rotskoff says. Insiders say the challenge is so difficult, given the state of quantum computing technology, that much of the money could stay in Wellcome Leap’s account. 
With most of the Q4Bio work unpublished and protected by NDAs, and the quantum computing field already rife with claims and counterclaims about performance and achievements, only the judges will be in a position to decide who’s right.  A hybrid solution The idea behind quantum computers is that they can use small-scale objects that obey the laws of quantum mechanics, such as atoms and photons of light,  to simulate real-world processes too complex to model on our everyday classical machines.  Researchers have been working for decades to build such systems, which could deliver insights for creating new materials, developing pharmaceuticals, and improving chemical processes such as fertilizer production.  But dealing with quantum stuff like atoms is excruciatingly difficult. The biggest, shiniest applications require huge, robust machines capable of withstanding the environmental “noise” that can very easily disrupt delicate quantum systems. We don’t have those yet—and it’s unclear when we will.  Wellcome Leap wanted to find out if the smaller-scale machines we have today can be made to do something—anything—useful for health care while we wait for the era of powerful, large-scale quantum computers. The group started the competition in 2024, offering $1.5 million in funding to each group of 12 selected teams. The six Q4Bio finalists have taken a range of approaches. Crucially, they’ve all come up with ingenious ways to overcome quantum computing’s drawbacks. Faced with noisy, limited machines, they have learned how to outsource much of the computational load to classical processors running newly developed algorithms that are, in many cases, better than the previous state of the art. The quantum processors are then required only for the parts of the problem where classical methods don’t scale well enough as the calculation gets bigger. For example, a team led by Sergii Strelchuk of Oxford University is using a quantum computer to map genetic diversity among humans and pathogens on complex graph-based structures. These will—the researchers hope—expose hidden connections and potential treatment pathways. “You can think about it as a platform for solving difficult problems in computational genomics,” Strelchuk says.  The corresponding classical tools struggle with even modest scale-up to large databases. Strelchuk’s team has built an automated pipeline that provides a way of determining whether classical solvers will struggle with a particular problem, and how a quantum algorithm might be able to formulate the data so that it becomes solvable on a classical computer or handleable on a noisy quantum one. “You can do all this before you start spending money on computing,” Strelchuk says. In collaboration with Cleveland Clinic, Helsinki-based Algorithmiq has used a superconducting quantum computer built by IBM to simulate a cancer drug that is triggered by specific types of light. “The idea is you take the drug, and it’s everywhere in your body, but it’s doing nothing, just sitting there, until there’s light on it of a certain wavelength,” says Guillermo García-Pérez, Algorithmiq’s chief scientific officer. Then it acts as a molecular bullet, attacking the tumor only at the location in the body where that light is directed. 

The drug with which Algorithmiq began its work is already in phase II clinical trials for treating bladder cancers. The quantum-computed simulation, which adapts and improves on classical algorithms, will allow it to be redesigned for treating other conditions. “It has remained a niche treatment precisely because it can’t be simulated classically,” says Sabrina Maniscalco, Algorithmiq’s CEO and cofounder.  Maniscalco, who is also confident of walking away from the competition with prize money, believes the methods used to create the algorithm will have wide applications:  “What we’ve done in the period of the Q4Bio program is something unique that can change how to simulate chemistry for health care and life sciences.” Infleqtion’s entry, running on its cesium-powered machine, is an effort to improve the identification of cancer signatures in medical data. Together with collaborators at the University of Chicago and MIT, the company’s scientists have developed a quantum algorithm that mines huge data sets such as the Cancer Genome Atlas.  The aim is to find patterns that allow clinicians to determine factors such as the likely origin of a patient’s metastasized cancer. “It’s very important to know where it came from because that can inform the best treatment,” says Teague Tomesh, a quantum software engineer who is Infleqtion’s Q4Bio project lead. Unfortunately, those patterns are hidden inside data sets so large that they overwhelm classical solvers. Infleqtion uses the quantum computer to find correlations in the data that can reduce the size of the computation. “Then we hand the reduced problem back to the classical solver,” Teague says. “I’m basically trying to use the best of my quantum and my classical resources.” The Nottingham-based team, meanwhile, is using quantum computing to nail down a drug candidate that can cure myotonic dystrophy, the most common adult-onset form of muscular dystrophy. One member of the team, David Brook, played a role in identifying the gene behind this condition in 1992. Over 30 years later, Brook, Hirst, and the others in their group—which includes QuEra, a Boston company developing a quantum computer based on neutral atoms—has now quantum-computed a way in which drugs can form chemical bonds with the protein that brings on the disease, blocking the mechanism that causes the problem. Low expectations  The entrants’ confidence might be high, but Shihan Sajeed’s is much lower. Sajeed, a quantum computing entrepreneur based in Waterloo, Ontario, is program director for Q4Bio. He believes the error-prone quantum machines the researchers must work with are unlikely to deliver on all the grand prize criteria. “It is very difficult to achieve something with a noisy quantum computer that a classical machine can’t do,” he says. That said, he has been surprised by the progress. “When we started the program, people didn’t know about any use cases where quantum can definitely impact biology,” he says. But the teams have found promising applications, he adds: “We now know the fields where quantum can matter.” 
And the developments in “hybrid quantum-classical” processing that the entrants are using are “transformational,” Sajeed reckons. Will it be enough to make him part with Wellcome Leap’s money? That’s down to a judging panel, whose members’ identities are a closely guarded secret to ensure that no one tailors their presentation to a particular kind of approach. But we won’t know the outcome for a while; the winner, or winners, will be announced in mid-April.  If it does turn out that there are no winners, Sajeed has some words of comfort for the competitors. The goal has always been about running a useful algorithm on a machine that exists today, he points out; missing the mark doesn’t mean your algorithm won’t be useful on a future quantum computer. “It just means the machine you need doesn’t exist yet.”

Read More »

Why the world doesn’t recycle more nuclear waste

The prospect of making trash useful is always fascinating to me. Whether it’s used batteries, solar panels, or spent nuclear fuel, getting use out of something destined for disposal sounds like a win all around. In nuclear energy, figuring out what to do with waste has always been a challenge, since the material needs to be dealt with carefully. In a new story, I dug into the question of what advanced nuclear reactors will mean for spent fuel waste. New coolants, fuels, and logistics popping up in companies’ designs could require some adjustments. My reporting also helped answer another question that was lingering in my brain: Why doesn’t the world recycle more nuclear waste? There’s still a lot of usable uranium in spent nuclear fuel when it’s pulled out of reactors. Getting more use out of the spent fuel could cut down on both waste and the need to mine new material, but the process is costly, complicated, and not 100% effective.
France has the largest and most established reprocessing program in the world today. The La Hague plant in northern France has the capacity to reprocess about 1,700 tons of spent fuel each year. The plant uses a process called PUREX—spent fuel is dissolved in acid and goes through chemical processing to pull out the uranium and plutonium, which are then separated. The plutonium is used to make mixed oxide (or MOX) fuel, which can be used in a mixture to fuel conventional nuclear reactors or alone as fuel in some specialized designs. And the uranium can go on to be re-enriched and used in standard low-enriched uranium fuel.
Reprocessing can cut down on the total volume of high-level nuclear waste that needs special handling, says Allison Macfarlane, director of the school of public policy and global affairs at the University of British Columbia and a former chair of the NRC. But there’s a bit of a catch. Today, the gold standard for permanent nuclear waste storage is a geological repository, a deep underground storage facility. Heat, not volume, is often the key limiting factor for how much material can be socked away in those facilities, depending on the specific repository. And spent MOX fuel gives off much more heat than conventional spent fuel, Macfarlane says. So even if there’s a smaller volume, the material might take up as much, or even more, space in a repository.  It’s also tricky to make this a true loop: The uranium that’s produced from reprocessing is contaminated with isotopes that can be difficult to separate, Macfarlane says. Today, France essentially saves the uranium for possible future enrichment as a sort of strategic stockpile. (Historically, it’s also exported some to Russia for enrichment.) And while MOX fuel can be used in some reactors, once it is spent, it is technically challenging to reprocess. So today, the best case is that fuel could be used twice, not infinitely. “Every responsible analyst understands that no matter what, no matter how good your recycling process is, you’re still going to need a geological repository in the end,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists. Reprocessing also has its downsides, Lyman adds. One risk comes from the plutonium made in the process, which can be used in nuclear weapons. France handles that risk with high security, and by quickly turning that plutonium into the MOX fuel product. Reprocessing is also quite expensive, and uranium supply isn’t meaningfully limited. “There’s no economic benefit to reprocessing at this time,” says Paul Dickman, a former Department of Energy and NRC official. France bears the higher cost that comes with reprocessing largely for political reasons, he says. The country doesn’t have uranium resources, importing its supply today. Reprocessing helps ensure its energy independence: “They’re willing to pay a national security premium.” Japan is currently constructing a spent-fuel reprocessing facility, though delays have plagued the project, which started construction in 1993 and was originally supposed to start up by 1997. Now the facility is expected to open by 2027. It’s possible that new technologies could make reprocessing more appealing, and agencies like the Department of Energy should do longer-term research on advanced separation technologies, Dickman says. Some companies working on advanced reactors say they plan to use alternative reprocessing methods in their fuel cycle. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here. 

Read More »

Energy Department Approves Additional Time to Commence Exports of LNG from Energía Costa Azul

WASHINGTON — U.S. Secretary of Energy Chris Wright today signed an amendment order granting additional time to commence exports of U.S.-sourced natural gas as liquefied natural gas (LNG) from Sempra Energy’s Energía Costa Azul (ECA) Mid-Scale Project that is currently under construction in Baja California, Mexico. Today’s order grants ECA Liquefaction, S. de R.L. de C.V. approximately six additional months to commence exports of U.S.-sourced natural gas as LNG to non-free trade agreement (FTA) countries. Construction at the ECA Mid-Scale Project is nearly complete and ECA expects to commence exports in the near future. The ECA Mid-Scale Project began construction in 2020 and, once completed and operational, will be able to export up to 0.44 billion cubic feet per day (Bcf/d) of natural gas as LNG. A second phase of the project is authorized to export up to 1.74 Bcf/d and is pending a final investment decision. “I am pleased that the Department of Energy can take this action to provide this project the time it needs to complete construction and get U.S.-sourced LNG on the water, particularly given its strategic location on the West Coast of North America,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter. Since the President ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. With recent final investment decisions, U.S. LNG exports are set to more than double from current levels by the early 2030s. 

Read More »

Energy Department Announces Partnership to Ensure Affordable Energy and Power America’s AI Future

WASHINGTON—The U.S. Department of Energy (DOE), alongside the U.S. Department of Commerce (DOC), today announced a unique public-private partnership with SoftBank and AEP Ohio to redevelop DOE land, modernize energy infrastructure, and develop advanced computing in Southern Ohio. As part of the partnership, SB Energy, a SoftBank Group company, is planning to build 10 gigawatts (GW) of new power generation—including 9.2 GW of natural gas generation—that will connect to the local grid and provide power to a new 10 GW data center development at the Portsmouth Site in Pike County, Ohio at no cost to American families. These collective efforts will deliver lower electricity costs across the region, create thousands of American jobs, and strengthen America’s national security.  Portions of this announcement were previously announced as part of President Trump’s U.S.-Japan Strategic Trade and Investment Agreement. This includes the $33.3 billion in Japanese funding for 9.2 GW of new natural gas generation. “Thanks to President Trump, the U.S. government is leveraging its assets—like our federal lands—to add power generation, create jobs, and ensure the United States wins the AI race,” said U.S. Energy Secretary Chris Wright. “I’m pleased to be working with our partners at SoftBank and AEP Ohio on this important project. By bringing new power online and upgrading our existing infrastructure, this investment supports the AI boom and cutting-edge technologies while strengthening our energy system and helping keep costs down for the American people.” “Our Japanese partnership is a direct result of President Trump’s America First trade policies,” said U.S. Commerce Secretary Howard Lutnick. “Japan has committed to invest $550 billion across America. With this historic trade deal we are reindustrializing the country through critical projects like this $33 billion dollar power project in Portsmouth, Ohio. Yesterday we announced additional mega projects in Alabama, Pennsylvania, Tennessee and Texas.”

Read More »

Nvidia overhauls the data center for OpenClaw era

Nvidia’s products for data centers now encompass a full stack with all the pieces, said Sandeep Gupta, executive managing director and head of global strategic alliances at NTT Data. “From a customer perspective, if they believe in an integrated stack, it makes things simple,” Gupta said. The integrated data center cuts complexity and improves efficiency across cooling, networking and storage. “It is driven by the sentiment of an enterprise on how dependent they want to be on one provider versus mix and match,” Gupta said. AI complexity has gone up manifold with multi-agent systems and technologies like OpenClaw, which Huang said is as big a deal as HTML and Linux. Those technologies will generate tokens at an unprecedented pace and strain network, memory and storage simultaneously. AI data also has context, and moving it inefficiently wastes power and cost. A new networking and storage layer is needed to move data intelligently and efficiently. A technology called KV Cache holds the contextual memory necessary for processing agentic AI systems. “It’s going to pound on memory really hard… It’s going to be pounding on the storage system really really hard, which is the reason why we reinvented the storage system,” Huang said. Nvidia’s blueprint turns data centers into one giant AI GPU. It is spearheaded by the GPU known as Rubin and CPU called Vera, which were announced at GTC. Nvidia also slipped in a new inference chip; the Groq LPU has significantly more memory bandwidth than GPUs and is designed for low-latency token generation.

Read More »

The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

Plus: OpenAI is also creating a “super app.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. OpenAI is throwing everything into building a fully automated researcher  OpenAI has a new grand challenge: building an AI researcher—a fully automated agent-based system capable of tackling large, complex problems by itself. The San Francisco firm said the new goal will be its “north star” for the next few years.   By September, the company plans to build “an autonomous AI research intern” that can take on a small number of specific research problems. The intern will be the precursor to the fully automated multi-agent system, which is slated to debut in 2028.  In an exclusive interview this week, OpenAI’s chief scientist, Jakub Pachocki, talked me through the plans. Find out what I discovered. 
—Will Douglas Heaven  Mind-altering substances are (still) falling short in clinical trials  Over the last decade, we’ve seen scientific interest in psychedelic drugs explode. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. But two studies out earlier this week demonstrate just how difficult it is to study these drugs.  
For me, they show just how overhyped these substances have become. Find out why here.  —Jessica Hamzelou  This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Wednesday.  Read more: What do psychedelic drugs do to our brains? AI could help us find out  The must-reads  I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 OpenAI is building a “super app”  It’s merging ChatGPT, a web browser, and a coding tool into a single app. (The Verge) + It’s also buying coding startup Astral to enhance its Codex model. (Ars Technica) + The moves come amid a cutback on side projects. (WSJ $) + OpenAI has lost ground to Anthropic in the enterprise market. (Axios)  2 The US has charged Super Micro’s co-founder with smuggling AI tech to China  Super Micro is third on Fortune’s list of the fastest-growing companies. (Reuters)  + GenAI is learning to spy for the US military. (MIT Technology Review) + The compute competition is shaping the China-US rivalry. (Politico) 

3 The DoJ has taken down botnets behind the largest-ever DDoS attack They had infected more than 3 million devices. (Wired $) + The DoJ has also seized domains tied to Iranian “hacktivists.” (Axios)  4 The Pentagon says Anthropic’s foreign workers are a security risk It cited Chinese employees as a particular concern. (Axios) + Anthropic’s moral boundaries have incensed the DoD. (MIT Technology Review)  5 High oil prices could wreck the AI boom, the WTO has warned Fears are growing of a prolonged energy shock. (The Guardian) + We did the math on AI’s energy footprint. (MIT Technology Review)  6 Jeff Bezos is trying to raise $100 billion to use AI in manufacturing The funds would buy manufacturing firms and infuse them with AI. (WSJ $) + Here’s how to fine-tune AI for prosperity. (MIT Technology Review)  7 Signal’s creator is helping to encrypt Meta’s AI  Moxie Marlinspike is integrating his encrypted chatbot, Confer. (Wired $) + Meta is also ditching human moderators for AI again. (CNBC) + AI is making online crimes easier. (MIT Technology Review)  8 Prediction market Kalshi has raised $1 billion at a $22 billion valuation That’s double its valuation from December. (Bloomberg $) + Arizona’s AG has charged the company with “illegal gambling.” (NPR)  9 Meta isn’t killing Horizon Worlds for VR after all It’s canceled plans to dump the metaverse app (for now). (CNBC)  10 A US startup is recruiting an “AI bully”  The successful candidate must test the patience of leading chatbots. (The Guardian) 
Quote of the day  “Imagine a sports bar… but just for situation monitoring — live X feeds, flight radar, Bloomberg terminals, and Polymarket screens.”  —Kalshi rival Polymarket unveils its hellish vision for a new bar. 
One More Thing  SELMAN DESIGN How gamification took over the world  It’s a thought that occurs to every video-game player at some point: what if the weird, hyper-focused state I enter in virtual worlds could somehow be applied to the real one?  For a handful of consultants, startup gurus, and game designers in the late 2000s, this state of “blissful productivity” became the key to unlocking our true human potential. Their vision became the global phenomenon of gamification—but it didn’t live up to the hype.  Instead of liberating us, gamification became a tool for coercion, distraction, and control. Find out why we fell for it—and how we can recover.  —Bryan Gardiner  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + In a landmark legal win for trolling, Afroman has won his diss track case against the police. + This LEGO artist remixes standard sets into completely different iconic objects. + Ease your search for aliens with these interactive estimates of advanced civilizations.  + A rare superbloom in Death Valley has been caught on camera. 

Read More »

OpenAI is throwing everything into building a fully automated researcher

EXECUTIVE SUMMARY OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. ​​OpenAI says that the new goal will be its “north star” for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability. There’s even a timeline. OpenAI plans to build “an autonomous AI research intern”—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. This AI researcher (OpenAI says) will be able to tackle problems that are too large or complex for humans to cope with. Those tasks might be related to math and physics—such as coming up with new proofs or conjectures—or life sciences like biology and chemistry, or even business and policy dilemmas. In theory, you would throw such a tool any kind of problem that can be formulated in text, code or whiteboard scribbles—which covers a lot. OpenAI has been setting the agenda for the AI industry for years. Its early dominance with large language models shaped the technology that hundreds of millions of people use every day. But it now faces fierce competition from rival model makers like Anthropic and Google DeepMind. What OpenAI decides to build next matters—for itself and for the future of AI.   
A big part of that decision falls to Jakub Pachocki, OpenAI’s chief scientist. Alongside chief research officer Mark Chen, Pachocki is one of two people responsible for setting the company’s long-term research goals. Pachocki played key roles in the development of both GPT-4, a game-changing LLM released in 2023, and so-called reasoning models, a technology that first appeared in 2024 and now underpins all major chatbots and agent-based systems.  In an exclusive interview this week, Pachocki talked me through OpenAI’s new grand challenge. “I think we are getting close to a point where we’ll have models capable of working indefinitely in a coherent way just like people do,” he says. “Of course, you still want people in charge and setting the goals. But I think we will get to a point where you kind of have a whole research lab in a data center.”
Such big claims aren’t new. Saving the world by solving its hardest problems is the stated mission of all the top AI firms. Demis Hassabis told me back in 2022 that it was why he started DeepMind. Anthropic CEO Dario Amodei says he is building the equivalent of a country of geniuses in a data center. Pachocki’s boss, Sam Altman, wants to cure cancer. But Pachocki says OpenAI now has most of what it needs to get there. In January, OpenAI released Codex, an agent-based app that can spin up code on the fly to carry out tasks on your computer. It can analyze documents, generate charts, make you a daily digest of your inbox and social media, and much more. OpenAI claims that most of its technical staff now use Codex in their work. You can look at Codex as a very early version of the AI researcher, says Pachocki: “I expect Codex to get fundamentally better.” The key is to make a system that can run for longer periods of time, with less human guidance. “What we’re really looking at for an automated research intern is a system that you can delegate tasks that would take a person a few days,” says Pachocki. “There are a lot of people excited about building systems that can do more long-running scientific research,” says Doug Downey, a research scientist at the Allen Institute for AI, who is not connected to OpenAI. “I think it’s largely driven by the success of these coding agents. The fact that you can delegate quite substantial coding tasks to tools like Codex is incredibly useful and incredibly impressive. And it raises the question: Can we do similar things outside coding, in broader areas of science?” For Pachocki, that’s a clear Yes. In fact, he thinks it’s just a matter of pushing ahead on the path we’re already on. A simple boost in all-round capability also leads to models working for longer without help, he says. He points to the leap from 2020’s GPT-3 to 2023’s GPT-4, two of OpenAI’s previous models. GPT-4 was able to work on a problem for far longer than its predecessor, even without specialized training, he says.  So-called reasoning models brought another bump. Training LLMs to work through problems step by step, backtracking when they make a mistake or hit a dead end, has also made models better at working for longer periods of time. And Pachocki is convinced that OpenAI’s reasoning models will continue to get better. But OpenAI is also training its systems to work by themselves for longer by feeding them specific samples of complex tasks, such as hard puzzles taken from math and coding contests, which force models to learn how to do things like keep track of very large chunks of text and split problems up into (and then manage) multiple subtasks. The aim isn’t to build models that just win math competitions. “That lets you prove that the technology works before you connect it to the real world,” says Pachocki. “If we really wanted to, we could build an amazing automated mathematician, we have all the tools, and I think it would be relatively easy. But it’s not something we’re going to prioritize now because, you know, at the point where you believe you can do it, there’s much more urgent things to do.”

“We are much more focused now on research that’s relevant in the real world,” he adds. Right now that means taking what Codex (and tools like it) can do with coding and trying to apply that to problem-solving in general. “There’s a big change happening, especially in programming,” he says. “Our jobs are now totally different than they were even a year ago. Nobody really edits code all the time anymore. Instead, you manage a group of Codex agents.” If Codex can solve coding problems (the argument goes), it can solve any problem. The line always goes up It’s true that OpenAI has had a handful of remarkable successes in the last few months. Researchers have used GPT-5 (the LLM that powers Codex) to discover new solutions to a number of unsolved math problems and punch through apparent dead ends in a handful of biology, chemistry and physics puzzles.    “Just looking at these models coming up with ideas that would take most PhD weeks, at least, makes me expect that we’ll see much more acceleration coming from this technology in the near future,” Pachocki says. But Pachocki admits that it’s not a done deal. He also understands why some people still have doubts about how much of a game-changer the technology really is. He thinks it depends on how people like to work and what they need to do. “I can believe some people don’t find it very useful yet,” he says. He tells me that he didn’t even use autocomplete—the most basic version of generative coding tech—a year ago himself. “I’m very pedantic about my code,” he says. “I like to type it all manually in vim if I can help it.” (Vim is a text editor favored by many hardcore programmers that you interact with via dozens of keyboard shortcuts instead of a mouse.) But that changed when he saw what the latest models could do. He still wouldn’t hand over complex design tasks, but it’s a time saver when he just wants to try out a few ideas. “I can have it run experiments in a weekend that previously would have taken me like a week to code,” he says. “I don’t think it is at the level where I would just let it take the reins and design the whole thing,” he adds. “But once you see it do something that would take a week to do, I mean that’s hard to argue with.”
Pachocki’s game plan is to supercharge the existing problem-solving abilities that tools like Codex have now and apply them across the sciences.   Downey agrees that the idea of an automated researcher is very cool: “It would be exciting if we could come back tomorrow morning and the agent’s done a bunch of work and there’s new results we can examine,” he says.
But he cautions that building such a system could be harder than Pachocki makes out. Last summer, Downey and his colleagues tested several top-tier LLMs on a range of scientific tasks. OpenAI’s latest model, GPT-5, came out on top but still made lots of errors. “If you have to chain tasks together then the odds that you get several of them right in succession tend to go down,” he says. Downey admits that things move fast and he has not tested the latest versions of GPT-5 (OpenAI released GPT-5.4 two weeks ago). “So those results might already be stale,” he says.  Serious unanswered questions I ask Pachocki about the risks that may come with a system that can solve large, complex problems by itself with little human oversight. Pachocki says people at OpenAI talk about those risks all the time. “If you believe that AI is about to substantially accelerate research, including AI research, that’s a big change in the world, that’s a big thing,” he says. “And it comes with some serious unanswered questions. If it’s so smart and capable, if it can run an entire research program, what if it does something bad?” The way Pachocki sees it, that could happen in a number of ways. The system could go off the rails. It could get hacked. Or it could simply misunderstand its instructions. The best technique OpenAI has right now to address these concerns is to train its reasoning models to share details about what they are doing as they work. This approach to keeping tabs on LLMs is known as chain-of-thought monitoring.
In short, LLMs are trained to jot down notes about what they are doing in a kind of scratchpad as they step through tasks. Researchers can then use those notes to make sure a model is behaving as expected. Yesterday OpenAI published new details on how it is using chain-of-thought monitoring in-house to study Codex.  “Once we get to systems working mostly autonomously for a long time in a big data center, I think this will be something that we’re really going to depend on,” says Pachocki. The idea would be to monitor an AI researcher’s scratchpads using other LLMs and catch unwanted behavior before it’s a problem, rather than stop that bad behavior from happening in the first place. LLMs are not understood well enough to control them fully. “I think it’s going to be a long time before we can really be like, okay, this problem is solved,” he says. “Until you can really trust the systems, you definitely want to have restrictions in place.” Pachocki thinks that very powerful models should be deployed in sandboxes cut off from anything they could break or use to cause harm. 
AI tools have already been used to come up with novel cyberattacks. Some worry that they will be used to design synthetic pathogens that could be used as bioweapons. You can insert any number of evil-scientist scare stories here. “I definitely think there are worrying scenarios that we can imagine,” says Pachocki.  “It’s going to be a very weird thing, it’s extremely concentrated power that’s in some ways unprecedented,” says Pachocki. “Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organisations, would now be done by a couple of people.” “I think this is a big challenge for governments to figure out,” he adds. And yet some people would say governments were part of the problem. The US government wants to use AI on the battlefield, for example. The recent showdown between Anthropic and the Pentagon revealed that there is little agreement across society about where we draw red lines for how this technology should and should not be used—let alone who should draw them. In the immediate aftermath of that dispute, OpenAI stepped up to sign a deal with the Pentagon instead of its rival. The situation remains murky. I push Pachocki on this. Does he really trust other people to figure it out or does he, as a key architect of the future, feel personal responsibility? “I do feel personal responsibility,” he says. “But I don’t think this can be resolved by OpenAI alone, pushing its technology in a particular way or designing its products in a particular way. We’ll definitely need a lot of involvement from policy makers.”

Read More »

Mind-altering substances are (still) falling short in clinical trials

This week I want to look at where we are with psychedelics, the mind-altering substances that have somehow made the leap from counterculture to major focus of clinical research. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. Over the last decade, we’ve seen scientific interest in these drugs explode. But most clinical trials of psychedelics have been small and plagued by challenges. And a lot of the trial results have been underwhelming or inconclusive. Two studies out earlier this week demonstrate just how difficult it is to study these drugs. And to my mind, they also show just how overhyped these substances have become. To some in the field, the hype is not necessarily a bad thing. Let me explain.
The two new studies both focus on the effectiveness of psilocybin in treating depression. And they both attempt to account for one of the biggest challenges in trialing psychedelics: what scientists call “blinding.” The best way to test the effectiveness of a new drug is to perform a randomized controlled trial. In these studies, some volunteers receive the drug while others get a placebo. For a fair comparison, the volunteers shouldn’t know whether they’re getting the drug or placebo.
That is almost impossible to do with psychedelics. Almost anyone can tell whether they’ve taken a dose of psilocybin or a dummy pill. The hallucinations are a dead giveaway. Still, the authors behind the two new studies have tried to overcome this challenge. In one, a team based in Germany gave 144 volunteers with treatment-resistant depression either a high or low dose of psilocybin or an “active” placebo, which has its own physical (but not hallucinatory) effects, along with psychotherapy. In their trial, neither the volunteers nor the investigators knew who was getting the drug. The volunteers who got psilocybin did show some improvement—but it was not significantly any better than the improvement experienced by those who took the placebo. And while those who took psilocybin did have a bigger reduction in their symptoms six weeks later, “the divergence between [the two results] renders the findings inconclusive,” the authors write. Not great news so far. The authors of the second study took a different approach. Balázs Szigeti at UCSF and his colleagues instead looked at what are known as “open label” studies of both psychedelics and traditional antidepressants. In those studies, the volunteers knew when they were getting a psychedelic—but they also knew when they were getting an antidepressant. The team assessed 24 such trials to find that … psychedelics were no more effective than traditional antidepressants. Sad trombone. “When I set up the study, I wanted to be a really cool psychedelic scientist to show that even if you consider this blinding problem, psychedelics are so much better than traditional antidepressants,” says Szigeti. “But unfortunately, the data came out the other way around.” His study highlights another problem, too.

In trials of traditional antidepressant drugs, the placebo effect is pretty strong. Depressive symptoms are often measured using a scale, and in trials, antidepressant drugs typically lower symptoms by around 10 points on that scale. Placebos can lower symptoms by around eight points. When a drug regulator looks at those results, the takeaway is that the antidepressant drug lowers symptoms by an additional two points on the scale, relative to a placebo. But with psychedelics, the difference between active drug and placebo is much greater. That’s partly because people who get the psychedelic drug know they’re getting it and are expecting the drug to improve their symptoms, says David Owens, emeritus professor of clinical psychiatry at the University of Edinburgh, UK. But it’s also partly because of the effect on those who know they’re not getting it. It’s pretty obvious when you’re getting a placebo, says Szigeti, and it can be disappointing. Scientists have long recognized the “nocebo” effect as placebo’s “evil twin”—essentially, when you expect to feel worse, you will. The disappointment of getting a placebo is slightly different, and Szigeti calls it the “knowcebo effect.” “It’s kind of like a negative psychedelic effect, because you have figured out that you’re taking the placebo,” he says. This phenomenon can distort the results of psychedelic drug trials. While a placebo in a traditional antidepressant drug trial improves symptoms by eight points, placebos in psychedelic trials improve symptoms by a mere four points, says Szigeti. If the active drug similarly improves symptoms by around 10 points, that makes it look as though the psychedelic is improving symptoms by around six points compared with a placebo. It “gives the illusion” of a huge effect, says Szigeti. So why have those smaller trials of the past received so much attention? Many have been published in high-end journals, accompanied by breathless press releases and media coverage. Even the inconclusive ones. I’ve often thought that those studies might not have seen the light of day if they’d been investigating any other drug.
“Yeah, nobody would care,” Szigeti agrees. It’s partly because people who work in mental health are so desperate for new treatments, says Owens. There has been little innovation in the last 40 years or so, since the advent of selective serotonin reuptake inhibitors. “Psychiatry is hemmed in with old theories … and we don’t need another SSRI for depression,” he says. But it’s also because psychedelics are inherently fascinating, says Szigeti. “Psychedelics are cool,” he says. “Culturally, they are exciting.”
I’ve often worried that psychedelics are overhyped—that people might get the mistaken impression they are cure-alls for mental-health disorders. I’ve worried that vulnerable people might be harmed by self-experimentation. Szigeti takes a different view. Given how effective we know the placebo effect can be, maybe hype isn’t a totally bad thing, he says. “The placebo response is the expectation of a benefit,” he says. “The better response patients are expecting, the better they’re going to get.” Tempering the hype might end up making those drugs less effective, he says. “At the end of the day, the goal of medicine is to help patients,” he says. “I think most [mental health] patients don’t care whether they feel better because of some expectancy and placebo effects or because of an active drug effect.” Either way, we need to know exactly what these drugs are doing. Maybe they will be able to help some people with depression. Maybe they won’t. Research that acknowledges the pitfalls associated with psychedelic drug trials is essential. “These are potentially exciting times,” says Owens. “But it’s really important we do this [research] well. And that means with eyes wide open.” This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy indusrty news. Spend 3-5 minutes and catch-up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE