Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

Utilities under pressure: 6 power sector trends to watch in 2026

2026 will be a year of reckoning for the electric power industry. Major policy changes in the One Big Beautiful Bill Act, which axed most subsidies for clean energy and electric vehicles, are forcing utilities, manufacturers, developers and others to pivot fast. The impacts of those changes will become more pronounced over the coming months.

Market forces will also have their say. Demand for power has never been greater. But some of the most aggressive predictions driving resource planning may not come to pass, leading some to fear the possibility of another tech bubble. At the same time, each passing day brings more distributed energy resources onto the grid, increasing the opportunities — and expectations — for utilities to harness those resources into a more dynamic, flexible and resilient system. Here are some of the top trends Utility Dive will be tracking over the coming year.

Large loads — where are they, and who controls their interconnection — dominate industry concerns

Across the United States, but particularly in markets like Texas and the Mid-Atlantic, large loads — mainly data centers designed to run artificial intelligence programs — are seeking to connect to the grid, driving up electricity demand forecasts and ballooning interconnection queues. That’s led some states to introduce new large load tariffs to weed out speculative requests, with more states expected to follow suit. The Department of Energy is now pushing federal regulators to take a more active role in regulating how those loads get connected to the grid, setting the stage for a power struggle between state and federal authorities. The DOE asked the Federal Energy Regulatory Commission to issue rules by April 30, a deadline many say will be hard to meet.

Read More »

China’s Top Oil Firms Turn to Beijing for Guidance on VEN

Leading Chinese oil companies with interests in Venezuela have asked Beijing for guidance on how to protect their investments as Washington cranks up pressure on the Latin American country to increase its economic ties with the US. State-owned firms led by China National Petroleum Corp. raised concerns this week with government agencies and sought advice from officials, in an effort to align their responses with Beijing’s diplomatic strategy and to salvage existing claims to some of the world’s largest oil reserves, according to people familiar with the situation. They asked not to be identified as the discussions are private. The companies, closely monitoring developments even before the US seized President Nicolas Maduro at the weekend, are also conducting their own assessments of the situation on the ground, the people said. Top Beijing officials are separately reviewing events and trying to better understand corporate exposure, while planning for scenarios including a worst case where China’s investments would go to zero, they added.  While it is typical for government-backed firms to maintain close ties with officials in Beijing, the emergency consultations underscore the stakes for Chinese majors, caught off-guard by Washington’s raid and by the rapid escalation of efforts to establish a US sphere of influence in the Americas. Beyond the immediate impact of US actions, all are concerned about long-term prospects, the people said. Chinese companies have established a significant footprint across Latin America over the past decades, including under the Belt and Road Initiative. Venezuela, with few other friends, has been among the most important beneficiaries of this largesse — in part because of its vast oil wealth. China first extended financing for infrastructure and oil projects in 2007, under former President Hugo Chavez. Public data supports estimates that Beijing had lent upwards of $60 billion in oil-backed loans through state-run banks by 2015. 

Read More »

America’s new dietary guidelines ignore decades of scientific research

The new year has barely begun, but the first days of 2026 have brought big news for health. On Monday, the US’s federal health agency upended its recommendations for routine childhood vaccinations—a move that health associations worry puts children at unnecessary risk of preventable disease. There was more news from the federal government on Wednesday, when health secretary Robert F. Kennedy Jr. and his colleagues at the Departments of Health and Human Services and Agriculture unveiled new dietary guidelines for Americans. And they are causing a bit of a stir. That’s partly because they recommend products like red meat, butter, and beef tallow—foods that have been linked to cardiovascular disease, and that nutrition experts have been recommending people limit in their diets. These guidelines are a big deal—they influence food assistance programs and school lunches, for example. So this week let’s look at the good, the bad, and the ugly advice being dished up to Americans by their government.
The government dietary guidelines have been around since the 1980s. They are updated every five years, in a process that typically involves a team of nutrition scientists who have combed over scientific research for years. That team will first publish its findings in a scientific report, and, around a year later, the finalized Dietary Guidelines for Americans are published. The last guidelines covered the period 2020 to 2025, and new guidelines were expected in the summer of 2025. Work had already been underway for years; the scientific report intended to inform them was published back in 2024. But the publication of the guidelines was delayed by last year’s government shutdown, Kennedy said last year. They were finally published yesterday.
Nutrition experts had been waiting with bated breath. Nutrition science has evolved slightly over the last five years, and some were expecting to see new recommendations. Research now suggests, for example, that there is no “safe” level of alcohol consumption. We are also beginning to learn more about health risks associated with some ultraprocessed foods (although we still don’t have a good understanding of what they might be, or what even counts as “ultraprocessed”). And some scientists were expecting to see the new guidelines factor in environmental sustainability, says Gabby Headrick, the associate director of food and nutrition policy at George Washington University’s Institute for Food Safety & Nutrition Security in Washington, DC. They didn’t.

Many of the recommendations are sensible. The guidelines recommend a diet rich in whole foods, particularly fresh fruits and vegetables. They recommend avoiding highly processed foods and added sugars. They also highlight the importance of dietary protein, whole grains, and “healthy” fats. But not all of them are sensible, says Headrick.

The guidelines open with a “new pyramid” of foods. This inverted triangle is topped with “protein, dairy, and healthy fats” on one side and “vegetables and fruits” on the other. There are a few problems with this image. For starters, its shape—nutrition scientists have long moved on from the food pyramids of the 1990s, says Headrick. They’re confusing and make it difficult for people to understand what the contents of their plate should look like. That’s why scientists now use an image of a plate to depict a healthy diet. “We’ve been using MyPlate to describe the dietary guidelines in a very consumer-friendly, nutrition-education-friendly way for over the last decade now,” says Headrick. (The UK’s National Health Service takes a similar approach.)

And then there’s the content of that food pyramid. It puts a significant focus on meat and whole-fat dairy produce.
The top left image—the one most viewers will probably see first—is of a steak. Smack in the middle of the pyramid is a stick of butter. That’s new. And it’s not a good thing.

While both red meat and whole-fat dairy can certainly form part of a healthy diet, nutrition scientists have long been recommending that most people try to limit their consumption of these foods. Both can be high in saturated fat, which can increase the risk of cardiovascular disease—the leading cause of death in the US. In 2015, on the basis of limited evidence, the World Health Organization classified red meat as “probably carcinogenic to humans.”

Also concerning is the document’s definition of “healthy fats,” which includes butter and beef tallow (a MAHA favorite). Neither food is generally considered to be as healthy as olive oil, for example. While olive oil contains around two grams of saturated fat per tablespoon, a tablespoon of beef tallow has around six grams of saturated fat, and the same amount of butter contains around seven grams, says Headrick. “I think these are pretty harmful dietary recommendations to be making when we have established that those specific foods likely do not have health-promoting benefits,” she adds.

Red meat is not exactly a sustainable food, and neither are dairy products. And the advice on alcohol is relatively vague, recommending that people “consume less alcohol for better overall health” (which might leave you wondering: Less than what?).

There are other questionable recommendations in the guidelines. Americans are advised to include more protein in their diets—at levels between 1.2 and 1.6 grams daily per kilo of body weight, 50% to 100% more than recommended in previous guidelines. There’s a risk that increasing protein consumption to such levels could raise a person’s intake of both calories and saturated fats to unhealthy levels, says José Ordovás, a senior nutrition scientist at Tufts University. “I would err on the low side,” he says. Some nutrition scientists are questioning why these changes have been made. It’s not as though the new recommendations were in the 2024 scientific report.
And the evidence on red meat and saturated fat hasn’t changed, says Headrick. In reporting this piece, I contacted many contributors to the previous guidelines, and some who had led research for 2024’s scientific report. None of them agreed to comment on the new guidelines on the record. Some seemed disgruntled. One merely told me that the process by which the new guidelines had been created was “opaque.” “These people invested a lot of their time, and they did a thorough job [over] a couple of years, identifying [relevant scientific studies],” says Ordovás. “I’m not surprised that when they see that [their] work was ignored and replaced with something [put together] quickly, that they feel a little bit disappointed,” he says. This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

USA Crude Oil Stocks Drop Nearly 4MM Barrels WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 3.8 million barrels from the week ending December 26 to the week ending January 2, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was released on January 7 and included data for the week ending January 2. According to the report, crude oil stocks, not including the SPR, stood at 419.1 million barrels on January 2, 422.9 million barrels on December 26, 2025, and 414.6 million barrels on January 3, 2025. Crude oil in the SPR stood at 413.5 million barrels on January 2, 413.2 million barrels on December 26, and 393.8 million barrels on January 3, 2025, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.707 billion barrels on January 2, the report revealed. Total petroleum stocks were up 8.4 million barrels week on week and up 78.7 million barrels year on year, the report pointed out. “At 419.1 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are about three percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 5.6 million barrels last week and are about three percent below the five year average for this time of year. Propane/propylene inventories decreased 2.2 million barrels from last week and are about 29 percent above the five year
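As a quick sanity check, the week-on-week and year-on-year changes can be reproduced directly from the inventory levels quoted above. This is a minimal sketch; the figures are taken from the EIA report as cited in the article, and the date keys are illustrative labels only.

```python
# Crude oil stocks excluding the SPR, in million barrels,
# as quoted from the EIA weekly petroleum status report above.
stocks = {
    "2026-01-02": 419.1,  # week ending January 2
    "2025-12-26": 422.9,  # week ending December 26, 2025
    "2025-01-03": 414.6,  # week ending January 3, 2025
}

# Week-on-week change: the 3.8-million-barrel draw reported by the EIA.
wow_change = round(stocks["2026-01-02"] - stocks["2025-12-26"], 1)

# Year-on-year change against the same week a year earlier.
yoy_change = round(stocks["2026-01-02"] - stocks["2025-01-03"], 1)

print(wow_change)  # negative value: a week-on-week draw
print(yoy_change)  # positive value: a year-on-year build
```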

Read More »

The Download: mimicking pregnancy’s first moments in a lab, and AI parameters explained

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Researchers are getting organoids pregnant with human embryos

At first glance, it looks like the start of a human pregnancy: A ball-shaped embryo presses into the lining of the uterus then grips tight, burrowing in as the first tendrils of a future placenta appear. This is implantation—the moment that pregnancy officially begins. Only none of it is happening inside a body. These images were captured in a Beijing laboratory, inside a microfluidic chip, as scientists watched the scene unfold.
In three recent papers published by Cell Press, scientists report what they call the most accurate efforts yet to mimic the first moments of pregnancy in the lab. They’ve taken human embryos from IVF centers and let these merge with “organoids” made of endometrial cells, which form the lining of the uterus. Read our story about their work, and what might come next. —Antonio Regalado
LLMs contain a LOT of parameters. But what’s a parameter?

A large language model’s parameters are often said to be the dials and levers that control how it behaves. Think of a planet-size pinball machine that sends its balls pinging from one end to the other via billions of paddles and bumpers set just so. Tweak those settings and the balls will behave in a different way.

OpenAI’s GPT-3, released in 2020, had 175 billion parameters. Google DeepMind’s latest LLM, Gemini 3, may have at least a trillion—some think it’s probably more like 7 trillion—but the company isn’t saying. (With competition now fierce, AI firms no longer share information about how their models are built.) But the basics of what parameters are and how they make LLMs do the remarkable things that they do are the same across different models. Ever wondered what makes an LLM really tick—what’s behind the colorful pinball-machine metaphors? Let’s dive in. —Will Douglas Heaven

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles. On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately. The cited reason? Concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years. Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the US’s struggling offshore wind industry. —Casey Crownhart

This story is from The Spark, our weekly newsletter that explains the tech that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google and Character.AI have agreed to settle a lawsuit over a teenager’s death
It’s one of five lawsuits the companies have settled linked to young people’s deaths this week. (NYT $)
+ AI companions are the final stage of digital addiction, and lawmakers are taking aim. (MIT Technology Review)

2 The Trump administration’s chief output is online trolling
Witness the Maduro memes. (The Atlantic $)

3 OpenAI has created a new ChatGPT Health feature
It’s dedicated to analyzing medical results and answering health queries. (Axios)
+ AI chatbots fail to give adequate advice for most questions relating to women’s health. (New Scientist $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

4 Meta’s acquisition of Manus is being probed by China
Holding up the purchase gives it another bargaining chip in its dealings with the US. (CNBC)
+ What happened when we put Manus to the test. (MIT Technology Review)

5 China is building humanoid robot training centers
To address a major shortage of the data needed to make them more competent. (Rest of World)
+ The robot race is fueling a fight for training data. (MIT Technology Review)

6 AI still isn’t close to automating our jobs
The technology just fundamentally isn’t good enough yet—for now. (WP $)

7 Weight regain seems to happen within two years of quitting the jabs
That’s the conclusion of a review of more than 40 studies. But dig into the details, and it’s not all bad news. (New Scientist $)

8 This Silicon Valley community is betting on algorithms to find love
Which feels like a bit of a fool’s errand. (NYT $)

9 Hearing aids are about to get really good
You can—of course—thank advances in AI. (IEEE Spectrum)

10 The first 100% AI-generated movie will hit our screens within three years
That’s according to Roku’s founder Anthony Wood. (Variety $)
+ How do AI models generate videos? (MIT Technology Review)
Quote of the day

“I’ve seen the video. Don’t believe this propaganda machine.” —Minnesota’s governor Tim Walz responds on X to Homeland Security’s claim that ICE’s shooting of a woman in Minneapolis was justified.
One more thing

Inside the strange limbo facing millions of IVF embryos

Millions of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections.

The problem is that no one can really agree on what that status is. So while these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? Read the full story. —Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love hearing about musicians’ favorite songs 🎶
+ Here are some top tips for making the most of travelling on your own.
+ Check out just some of the excellent-sounding new books due for publication this year.
+ I could play this spherical version of Snake forever (thanks Rachel!)

Read More »

Using unstructured data to fuel enterprise AI success

In partnership with Invisible

Enterprises are sitting on vast quantities of unstructured data, from call records and video footage to customer complaint histories and supply chain signals. Yet this invaluable business intelligence, estimated to make up as much as 90% of the data generated by organizations, has historically remained dormant because its unstructured nature makes analysis extremely difficult. But if managed and centralized effectively, this messy and often voluminous data is not only a precious asset for training and optimizing next-generation AI systems, enhancing their accuracy, context, and adaptability; it can also deliver profound insights that drive real business outcomes.

A compelling example can be seen in the Charlotte Hornets, a US NBA basketball team that successfully leveraged untapped video footage of gameplay—previously too copious to watch and too unstructured to analyze—to identify a new competition-winning recruit. However, before that data could deliver results, analysts working for the team first had to overcome the critical challenge of preparing the raw, unstructured footage for interpretation.

The challenges of organizing and contextualizing unstructured data

Unstructured data presents inherent difficulties due to its widely varying format, quality, and reliability, requiring specialized tools like natural language processing and AI to make sense of it.
Every organization’s pool of unstructured data also contains domain-specific characteristics and terminology that generic AI models may not automatically understand. A financial services firm, for example, cannot simply use a general language model for fraud detection. Instead, it needs to adapt the model to understand regulatory language, transaction patterns, industry-specific risk indicators, and unique company context like data policies. The challenge intensifies when integrating multiple data sources with varying structures and quality standards, as teams may struggle to distinguish valuable data from noise.
How computer vision gave the Charlotte Hornets an edge

When the Charlotte Hornets set out to identify a new draft pick for their team, they turned to AI tools including computer vision to analyze raw game footage from smaller leagues, which exist outside the tiers of the game normally visible to NBA scouts and, therefore, are not as readily available for analysis. “Computer vision is a tool that has existed for some time, but I think the applicability in this age of AI is increasing rapidly,” says Jordan Cealey, senior vice president at AI company Invisible Technologies, which worked with the Charlotte Hornets on this project. “You can now take data sources that you’ve never been able to consume, and provide an analytical layer that’s never existed before.”

By deploying a variety of computer vision techniques, including object and player tracking, movement pattern analysis, and geometric mapping of points on the court, the team was able to extract kinematic data, such as the coordinates of players during movement, and generate metrics like speed, explosiveness, and acceleration. This provided the team with rich, data-driven insights about individual players, helping them to identify and select a new draft pick whose skills and techniques filled a hole in the Charlotte Hornets’ own capabilities. The chosen athlete went on to be named the most valuable player at the 2025 NBA Summer League and helped the team win their first summer championship title.

Annotation of a basketball match: Before data from game footage can be used, it needs to be labeled so the model can interpret it. The x and y coordinates of the individual players, seen here in bounding boxes, as well as other features in the scene, are annotated so the model can identify individuals and track their movements through time.
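The kinematic step described above (turning per-frame player coordinates into speed and acceleration metrics) can be sketched in a few lines. This is an illustrative reconstruction, not Invisible's actual pipeline; the function names, the 25 fps frame rate, and the court-coordinate units are all assumptions.

```python
import math

FPS = 25  # assumed video frame rate (frames per second)

def speeds(track, fps=FPS):
    """Per-frame speed, in court units per second, from a list of (x, y)
    positions produced by a player tracker."""
    return [
        math.hypot(x1 - x0, y1 - y0) * fps
        for (x0, y0), (x1, y1) in zip(track, track[1:])
    ]

def accelerations(track, fps=FPS):
    """Per-frame acceleration from successive speed samples."""
    v = speeds(track, fps)
    return [(b - a) * fps for a, b in zip(v, v[1:])]

# A toy track: a player moving 0.2 court units per frame along x,
# i.e. constant speed and essentially zero acceleration.
track = [(i * 0.2, 0.0) for i in range(5)]
print(speeds(track))         # roughly 5.0 units/s at the assumed 25 fps
print(accelerations(track))  # near zero
```

In practice the (x, y) inputs would come from the annotated bounding boxes described in the caption above, after mapping pixel positions onto court coordinates; derived metrics like peak speed or burst acceleration can then be aggregated per player.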


Taking AI pilot programs into production

From this successful example, several lessons can be learned. First, unstructured data must be prepared for AI models through intuitive forms of collection, and the right data pipelines and management records. “You can only utilize unstructured data once your structured data is consumable and ready for AI,” says Cealey. “You cannot just throw AI at a problem without doing the prep work.”

For many organizations, this might mean they need to find partners that offer the technical support to fine-tune models to the context of the business. The traditional technology consulting approach, in which an external vendor leads a digital transformation plan over a lengthy timeframe, is not fit for purpose here, as AI is moving too fast and solutions need to be configured to a company’s current business reality.

Forward-deployed engineers (FDEs) are an emerging partnership model better suited to the AI era. Initially popularized by Palantir, the FDE model connects product and engineering capabilities directly to the customer’s operational environment. FDEs work closely with customers on-site to understand the context behind a technology initiative before a solution is built.

“We couldn’t do what we do without our FDEs,” says Cealey. “They go out and fine-tune the models, working with our human annotation team to generate a ground truth dataset that can be used to validate or improve the performance of the model in production.”

Second, data needs to be understood within its own context, which requires models to be carefully calibrated to the use case. “You can’t assume that an out-of-the-box computer vision model is going to give you better inventory management, for example, by taking that open source model and applying it to whatever your unstructured data feeds are,” says Cealey. “You need to fine-tune it so it gives you the data exports in the format you want and helps your aims. That’s where you start to see high-performative models that can then actually generate useful data insights.”

For the Hornets, Invisible used five foundation models, which the team fine-tuned to context-specific data. This included teaching the models to understand that they were “looking at” a basketball court as opposed to, say, a football field; to understand how a game of basketball works differently from any other sport the model might have knowledge of (including how many players are on each team); and to understand how to spot rules like “out of bounds.” Once fine-tuned, the models were able to capture subtle and complex visual scenarios, including highly accurate object detection, tracking, postures, and spatial mapping.

Lastly, while the AI technology mix available to companies changes by the day, they cannot eschew old-fashioned commercial metrics: clear goals. Without clarity on the business purpose, AI pilot programs can easily turn into open-ended, meandering research projects that prove expensive in terms of compute, data costs, and staffing.

“The best engagements we have seen are when people know what they want,” Cealey observes. “The worst is when people say ‘we want AI’ but have no direction. In these situations, they are on an endless pursuit without a map.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

Utilities under pressure: 6 power sector trends to watch in 2026

Listen to the article 10 min This audio is auto-generated. Please let us know if you have feedback. 2026 will be a year of reckoning for the electric power industry.  Major policy changes in the One Big Beautiful Bill Act, which axed most subsidies for clean energy and electric vehicles, are forcing utilities, manufacturers, developers and others to pivot fast. The impacts of those changes will become more pronounced over the coming months. Market forces will also have their say. Demand for power has never been greater. But some of the most aggressive predictions driving resource planning may not come to pass, leading some to fear the possibility of another tech bubble. At the same time, each passing day brings more distributed energy resources onto the grid, increasing the opportunities — and expectations — for utilities to harness those resources into a more dynamic, flexible and resilient system. Here are some of the top trends Utility Dive will be tracking over the coming year. Large loads — where are they, and who controls their interconnection — dominate industry concerns Across the United States, but particularly in markets like Texas and the Mid-Atlantic, large loads — mainly data centers designed to run artificial intelligence programs — are seeking to connect to the grid, driving up electricity demand forecasts and ballooning interconnection queues. That’s led some states to introduce new large load tariffs to weed out speculative requests, with more states expected to follow suit.  The Department of Energy is now pushing federal regulators to take a more active role in regulating how those loads get connected to the grid, setting the stage for a power struggle between state and federal authorities. The DOE asked the Federal Energy Regulatory Commission to issue rules by April 30, a deadline many say will be hard to meet. A

Read More »

China’s Top Oil Firms Turn to Beijing for Guidance on VEN

Leading Chinese oil companies with interests in Venezuela have asked Beijing for guidance on how to protect their investments as Washington cranks up pressure on the Latin American country to increase its economic ties with the US. State-owned firms led by China National Petroleum Corp. raised concerns this week with government agencies and sought advice from officials, in an effort to align their responses with Beijing’s diplomatic strategy and to salvage existing claims to some of the world’s largest oil reserves, according to people familiar with the situation. They asked not to be identified as the discussions are private. The companies, closely monitoring developments even before the US seized President Nicolas Maduro at the weekend, are also conducting their own assessments of the situation on the ground, the people said. Top Beijing officials are separately reviewing events and trying to better understand corporate exposure, while planning for scenarios including a worst case where China’s investments would go to zero, they added.  While it is typical for government-backed firms to maintain close ties with officials in Beijing, the emergency consultations underscore the stakes for Chinese majors, caught off-guard by Washington’s raid and by the rapid escalation of efforts to establish a US sphere of influence in the Americas. Beyond the immediate impact of US actions, all are concerned about long-term prospects, the people said. Chinese companies have established a significant footprint across Latin America over the past decades, including under the Belt and Road Initiative. Venezuela, with few other friends, has been among the most important beneficiaries of this largesse — in part because of its vast oil wealth. China first extended financing for infrastructure and oil projects in 2007, under former President Hugo Chavez. Public data supports estimates that Beijing had lent upwards of $60 billion in oil-backed loans through state-run banks by 2015. 

Read More »

America’s new dietary guidelines ignore decades of scientific research

The new year has barely begun, but the first days of 2026 have brought big news for health. On Monday, the US’s federal health agency upended its recommendations for routine childhood vaccinations—a move that health associations worry puts children at unnecessary risk of preventable disease. There was more news from the federal government on Wednesday, when health secretary Robert F. Kennedy Jr. and his colleagues at the Departments of Health and Human Services and Agriculture unveiled new dietary guidelines for Americans. And they are causing a bit of a stir. That’s partly because they recommend products like red meat, butter, and beef tallow—foods that have been linked to cardiovascular disease, and that nutrition experts have been recommending people limit in their diets. These guidelines are a big deal—they influence food assistance programs and school lunches, for example. So this week let’s look at the good, the bad, and the ugly advice being dished up to Americans by their government.
The government dietary guidelines have been around since the 1980s. They are updated every five years, in a process that typically involves a team of nutrition scientists who have combed over scientific research for years. That team first publishes its findings in a scientific report, and, around a year later, the finalized Dietary Guidelines for Americans are published. The last guidelines covered the period 2020 to 2025, and new guidelines were expected in the summer of 2025. Work had already been underway for years; the scientific report intended to inform them was published back in 2024. But the publication of the guidelines was delayed by last year’s government shutdown, Kennedy said. They were finally published yesterday.
Nutrition experts had been waiting with bated breath. Nutrition science has evolved slightly over the last five years, and some were expecting to see new recommendations. Research now suggests, for example, that there is no “safe” level of alcohol consumption. We are also beginning to learn more about health risks associated with some ultraprocessed foods (although we still don’t have a good understanding of what they might be, or what even counts as “ultraprocessed.”) And some scientists were expecting to see the new guidelines factor in environmental sustainability, says Gabby Headrick, the associate director of food and nutrition policy at George Washington University’s Institute for Food Safety & Nutrition Security in Washington DC. They didn’t.

Many of the recommendations are sensible. The guidelines recommend a diet rich in whole foods, particularly fresh fruits and vegetables. They recommend avoiding highly processed foods and added sugars. They also highlight the importance of dietary protein, whole grains, and “healthy” fats. But not all of them are, says Headrick.

The guidelines open with a “new pyramid” of foods. This inverted triangle is topped with “protein, dairy, and healthy fats” on one side and “vegetables and fruits” on the other. There are a few problems with this image. For starters, its shape—nutrition scientists have long moved on from the food pyramids of the 1990s, says Headrick. They’re confusing and make it difficult for people to understand what the contents of their plate should look like. That’s why scientists now use an image of a plate to depict a healthy diet. “We’ve been using MyPlate to describe the dietary guidelines in a very consumer-friendly, nutrition-education-friendly way for over the last decade now,” says Headrick. (The UK’s National Health Service takes a similar approach.) And then there’s the content of that food pyramid. It puts a significant focus on meat and whole-fat dairy produce.
The top left image—the one most viewers will probably see first—is of a steak. Smack in the middle of the pyramid is a stick of butter. That’s new. And it’s not a good thing.

While both red meat and whole-fat dairy can certainly form part of a healthy diet, nutrition scientists have long been recommending that most people try to limit their consumption of these foods. Both can be high in saturated fat, which can increase the risk of cardiovascular disease—the leading cause of death in the US. In 2015, on the basis of limited evidence, the World Health Organization classified red meat as “probably carcinogenic to humans.”

Also concerning is the document’s definition of “healthy fats,” which includes butter and beef tallow (a MAHA favorite). Neither food is generally considered to be as healthy as olive oil, for example. While olive oil contains around two grams of saturated fat per tablespoon, a tablespoon of beef tallow has around six grams of saturated fat, and the same amount of butter contains around seven grams of saturated fat, says Headrick. “I think these are pretty harmful dietary recommendations to be making when we have established that those specific foods likely do not have health-promoting benefits,” she adds.

Red meat is not exactly a sustainable food, and neither are dairy products. And the advice on alcohol is relatively vague, recommending that people “consume less alcohol for better overall health” (which might leave you wondering: Less than what?).

There are other questionable recommendations in the guidelines. Americans are advised to include more protein in their diets—at levels between 1.2 and 1.6 grams daily per kilo of body weight, 50% to 100% more than recommended in previous guidelines. There’s a risk that increasing protein consumption to such levels could raise a person’s intake of both calories and saturated fats to unhealthy levels, says José Ordovás, a senior nutrition scientist at Tufts University. “I would err on the low side,” he says. Some nutrition scientists are questioning why these changes have been made. It’s not as though the new recommendations were in the 2024 scientific report.
And the evidence on red meat and saturated fat hasn’t changed, says Headrick. In reporting this piece, I contacted many contributors to the previous guidelines, and some who had led research for 2024’s scientific report. None of them agreed to comment on the new guidelines on the record. Some seemed disgruntled. One merely told me that the process by which the new guidelines had been created was “opaque.” “These people invested a lot of their time, and they did a thorough job [over] a couple of years, identifying [relevant scientific studies],” says Ordovás. “I’m not surprised that when they see that [their] work was ignored and replaced with something [put together] quickly, that they feel a little bit disappointed,” he says. This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

USA Crude Oil Stocks Drop Nearly 4MM Barrels WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 3.8 million barrels from the week ending December 26 to the week ending January 2, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was released on January 7 and included data for the week ending January 2.

According to the report, crude oil stocks, not including the SPR, stood at 419.1 million barrels on January 2, 422.9 million barrels on December 26, 2025, and 414.6 million barrels on January 3, 2025. Crude oil in the SPR stood at 413.5 million barrels on January 2, 413.2 million barrels on December 26, and 393.8 million barrels on January 3, 2025, the report showed.

Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.707 billion barrels on January 2, the report revealed. Total petroleum stocks were up 8.4 million barrels week on week and up 78.7 million barrels year on year, the report pointed out.

“At 419.1 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are about three percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 5.6 million barrels last week and are about three percent below the five year average for this time of year. Propane/propylene inventories decreased 2.2 million barrels from last week and are about 29 percent above the five year
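The week-on-week and year-on-year moves quoted in the report are simple differences between stock levels on the relevant report dates. As an illustrative sketch (not an EIA tool; the date labels and year assignments below are assumptions for the example), the arithmetic looks like this:

```python
# Commercial crude inventories ex-SPR, in million barrels, as quoted above.
# Date labels are assumptions for illustration.
crude_ex_spr = {
    "2026-01-02": 419.1,   # week ending January 2
    "2025-12-26": 422.9,   # prior week
    "2025-01-03": 414.6,   # same week a year earlier
}

def delta(stocks, current, prior):
    """Inventory change between two report dates, in million barrels."""
    return round(stocks[current] - stocks[prior], 1)

wow = delta(crude_ex_spr, "2026-01-02", "2025-12-26")  # week on week
yoy = delta(crude_ex_spr, "2026-01-02", "2025-01-03")  # year on year
print(f"WoW: {wow} MMbbl, YoY: {yoy} MMbbl")
```

The −3.8 million barrel week-on-week draw matches the headline figure; the same subtraction against the year-ago level gives the year-on-year build.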

Read More »

The Download: mimicking pregnancy’s first moments in a lab, and AI parameters explained

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Researchers are getting organoids pregnant with human embryos

At first glance, it looks like the start of a human pregnancy: A ball-shaped embryo presses into the lining of the uterus, then grips tight, burrowing in as the first tendrils of a future placenta appear. This is implantation—the moment that pregnancy officially begins. Only none of it is happening inside a body. These images were captured in a Beijing laboratory, inside a microfluidic chip, as scientists watched the scene unfold.
In three recent papers published by Cell Press, scientists report what they call the most accurate efforts yet to mimic the first moments of pregnancy in the lab. They’ve taken human embryos from IVF centers and let these merge with “organoids” made of endometrial cells, which form the lining of the uterus. Read our story about their work, and what might come next. —Antonio Regalado
LLMs contain a LOT of parameters. But what’s a parameter?

A large language model’s parameters are often said to be the dials and levers that control how it behaves. Think of a planet-size pinball machine that sends its balls pinging from one end to the other via billions of paddles and bumpers set just so. Tweak those settings and the balls will behave in a different way.

OpenAI’s GPT-3, released in 2020, had 175 billion parameters. Google DeepMind’s latest LLM, Gemini 3, may have at least a trillion—some think it’s probably more like 7 trillion—but the company isn’t saying. (With competition now fierce, AI firms no longer share information about how their models are built.) But the basics of what parameters are and how they make LLMs do the remarkable things that they do are the same across different models. Ever wondered what makes an LLM really tick—what’s behind the colorful pinball-machine metaphors? Let’s dive in. —Will Douglas Heaven

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles. On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately. The cited reason? Concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years. Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the US’s struggling offshore wind industry. —Casey Crownhart

This story is from The Spark, our weekly newsletter that explains the tech that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google and Character.AI have agreed to settle a lawsuit over a teenager’s death
It’s one of five lawsuits the companies have settled linked to young people’s deaths this week. (NYT $)
+ AI companions are the final stage of digital addiction, and lawmakers are taking aim. (MIT Technology Review)

2 The Trump administration’s chief output is online trolling
Witness the Maduro memes. (The Atlantic $)

3 OpenAI has created a new ChatGPT Health feature
It’s dedicated to analyzing medical results and answering health queries. (Axios)
+ AI chatbots fail to give adequate advice for most questions relating to women’s health. (New Scientist $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

4 Meta’s acquisition of Manus is being probed by China
Holding up the purchase gives it another bargaining chip in its dealings with the US. (CNBC)
+ What happened when we put Manus to the test. (MIT Technology Review)

5 China is building humanoid robot training centers
To address a major shortage of the data needed to make them more competent. (Rest of World)
+ The robot race is fueling a fight for training data. (MIT Technology Review)

6 AI still isn’t close to automating our jobs
The technology just fundamentally isn’t good enough yet—for now. (WP $)

7 Weight regain seems to happen within two years of quitting the jabs
That’s the conclusion of a review of more than 40 studies. But dig into the details, and it’s not all bad news. (New Scientist $)

8 This Silicon Valley community is betting on algorithms to find love
Which feels like a bit of a fool’s errand. (NYT $)

9 Hearing aids are about to get really good
You can—of course—thank advances in AI. (IEEE Spectrum)

10 The first 100% AI-generated movie will hit our screens within three years
That’s according to Roku’s founder Anthony Wood. (Variety $)
+ How do AI models generate videos? (MIT Technology Review)
Quote of the day

“I’ve seen the video. Don’t believe this propaganda machine.”

—Minnesota’s governor Tim Walz responds on X to Homeland Security’s claim that ICE’s shooting of a woman in Minneapolis was justified.
One more thing

Inside the strange limbo facing millions of IVF embryos

Millions of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections.

The problem is that no one can really agree on what that status is. So while these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? Read the full story. —Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love hearing about musicians’ favorite songs 🎶
+ Here are some top tips for making the most of travelling on your own.
+ Check out just some of the excellent-sounding new books due for publication this year.
+ I could play this spherical version of Snake forever (thanks Rachel!)

Read More »

Using unstructured data to fuel enterprise AI success

In partnership with Invisible

Enterprises are sitting on vast quantities of unstructured data, from call records and video footage to customer complaint histories and supply chain signals. Yet this invaluable business intelligence, estimated to make up as much as 90% of the data generated by organizations, has historically remained dormant because its unstructured nature makes analysis extremely difficult. But if managed and centralized effectively, this messy and often voluminous data is not only a precious asset for training and optimizing next-generation AI systems, enhancing their accuracy, context, and adaptability; it can also deliver profound insights that drive real business outcomes.

A compelling example comes from the Charlotte Hornets, a US NBA basketball team that successfully leveraged untapped video footage of gameplay—previously too copious to watch and too unstructured to analyze—to identify a new competition-winning recruit. But before that data could deliver results, analysts working for the team first had to overcome the critical challenge of preparing the raw, unstructured footage for interpretation.

The challenges of organizing and contextualizing unstructured data

Unstructured data presents inherent difficulties due to its widely varying format, quality, and reliability, requiring specialized tools like natural language processing and AI to make sense of it.
Every organization’s pool of unstructured data also contains domain-specific characteristics and terminology that generic AI models may not automatically understand. A financial services firm, for example, cannot simply use a general language model for fraud detection. Instead, it needs to adapt the model to understand regulatory language, transaction patterns, industry-specific risk indicators, and unique company context like data policies. The challenge intensifies when integrating multiple data sources with varying structures and quality standards, as teams may struggle to distinguish valuable data from noise.
How computer vision gave the Charlotte Hornets an edge

When the Charlotte Hornets set out to identify a new draft pick for their team, they turned to AI tools including computer vision to analyze raw game footage from smaller leagues, which exist outside the tiers of the game normally visible to NBA scouts and are therefore not as readily available for analysis. “Computer vision is a tool that has existed for some time, but I think the applicability in this age of AI is increasing rapidly,” says Jordan Cealey, senior vice president at AI company Invisible Technologies, which worked with the Charlotte Hornets on this project. “You can now take data sources that you’ve never been able to consume, and provide an analytical layer that’s never existed before.”

By deploying a variety of computer vision techniques, including object and player tracking, movement pattern analysis, and geometric mapping of points on the court, the team was able to extract kinematic data, such as the coordinates of players during movement, and generate metrics like speed, explosiveness, and acceleration. This provided the team with rich, data-driven insights about individual players, helping them to identify and select a new draft pick whose skills and technique filled a gap in the Charlotte Hornets’ own capabilities. The chosen athlete went on to be named the most valuable player at the 2025 NBA Summer League and helped the team win their first summer championship title.

Annotation of a basketball match

Before data from game footage can be used, it needs to be labeled so the model can interpret it. The x and y coordinates of the individual players, seen here in bounding boxes, as well as other features in the scene, are annotated so the model can identify individuals and track their movements through time.
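As a minimal, hypothetical sketch of the kinematic step described above (not Invisible’s pipeline; the frame rate, coordinate scale, and sample track are invented for illustration): once per-frame (x, y) positions have been extracted from bounding boxes, a speed metric falls out of frame-to-frame displacement.

```python
import math

FPS = 25            # assumed video frame rate (invented for the example)
M_PER_UNIT = 1.0    # assumed court-coordinate scale: 1 unit = 1 metre

def speeds(track):
    """Per-frame speed in m/s from a list of (x, y) court coordinates."""
    out = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dist = math.hypot(x1 - x0, y1 - y0) * M_PER_UNIT
        out.append(dist * FPS)   # metres per frame -> metres per second
    return out

# A toy track: a player moving 0.2 m along x each frame, i.e. 5 m/s at 25 fps.
track = [(i * 0.2, 0.0) for i in range(5)]
print(speeds(track))
```

Differencing the speed series again in the same way would give acceleration, the other metric the article mentions; real pipelines also smooth the raw detections before differencing, since bounding-box jitter amplifies under differentiation.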


Taking AI pilot programs into production

From this successful example, several lessons can be learned. First, unstructured data must be prepared for AI models through intuitive forms of collection, and the right data pipelines and management records. “You can only utilize unstructured data once your structured data is consumable and ready for AI,” says Cealey. “You cannot just throw AI at a problem without doing the prep work.”

For many organizations, this might mean they need to find partners that offer the technical support to fine-tune models to the context of the business. The traditional technology consulting approach, in which an external vendor leads a digital transformation plan over a lengthy timeframe, is not fit for purpose here, as AI is moving too fast and solutions need to be configured to a company’s current business reality.

Forward-deployed engineers (FDEs) are an emerging partnership model better suited to the AI era. Initially popularized by Palantir, the FDE model connects product and engineering capabilities directly to the customer’s operational environment. FDEs work closely with customers on-site to understand the context behind a technology initiative before a solution is built.

“We couldn’t do what we do without our FDEs,” says Cealey. “They go out and fine-tune the models, working with our human annotation team to generate a ground truth dataset that can be used to validate or improve the performance of the model in production.”

Second, data needs to be understood within its own context, which requires models to be carefully calibrated to the use case. “You can’t assume that an out-of-the-box computer vision model is going to give you better inventory management, for example, by taking that open source model and applying it to whatever your unstructured data feeds are,” says Cealey. “You need to fine-tune it so it gives you the data exports in the format you want and helps your aims. That’s where you start to see high-performative models that can then actually generate useful data insights.”

For the Hornets, Invisible used five foundation models, which the team fine-tuned to context-specific data. This included teaching the models to understand that they were “looking at” a basketball court as opposed to, say, a football field; to understand how a game of basketball works differently from any other sport the model might have knowledge of (including how many players are on each team); and to understand how to spot rules like “out of bounds.” Once fine-tuned, the models were able to capture subtle and complex visual scenarios, including highly accurate object detection, tracking, postures, and spatial mapping.

Lastly, while the AI technology mix available to companies changes by the day, they cannot eschew old-fashioned commercial metrics: clear goals. Without clarity on the business purpose, AI pilot programs can easily turn into open-ended, meandering research projects that prove expensive in terms of compute, data costs, and staffing.

“The best engagements we have seen are when people know what they want,” Cealey observes. “The worst is when people say ‘we want AI’ but have no direction. In these situations, they are on an endless pursuit without a map.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

Trump Orders Blockade of Sanctioned Oil Tankers in Venezuela

President Donald Trump ordered a blockade of sanctioned oil tankers going into and leaving Venezuela, ratcheting up pressure on Caracas as the US builds up its military presence in the region. “Venezuela is completely surrounded by the largest Armada ever assembled in the History of South America,” Trump wrote on social media Tuesday. “It will only get bigger, and the shock to them will be like nothing they have ever seen before.”

The move threatens to choke off the economic lifeblood of a country that was already under severe financial pressure. But it will have a less profound impact on global markets due to the diminished status of Venezuela’s oil industry. The OPEC member’s crude output has slumped about 70% through more than 25 years of socialist rule to less than 1 million barrels a day. It could potentially rebound if the governing regime were to change.

Even so, the move represents an escalation of Trump’s pressure on President Nicolas Maduro, with the potential to further destabilize the country in the short term. Venezuela condemned the latest measures as a “reckless and serious” threat. US crude benchmark West Texas Intermediate climbed as much as 1.7% to trade near $56 a barrel, rebounding from the lowest level in almost five years.

“Trump intends to impose, in an utterly irrational manner, a supposed military blockade of Venezuela with the aim of stealing the riches that belong to our homeland,” the government said in a statement published late Tuesday on Vice President Delcy Rodríguez’s Telegram account. “Venezuela reaffirms its sovereignty over all its natural resources.”

Venezuela said in its statement that its ambassador to the United Nations would immediately denounce what it called a “grave” violation of international law. Trump said he was also designating the Maduro regime as a “FOREIGN TERRORIST ORGANIZATION.”

Read More »

USA Readies New Russia Sanctions If Putin Rejects Deal

The US is preparing a fresh round of sanctions on Russia’s energy sector to increase the pressure on Moscow should President Vladimir Putin reject a peace agreement with Ukraine, according to people familiar with the matter. The US is considering options such as targeting vessels in Russia’s so-called shadow fleet of tankers used to transport Moscow’s oil, as well as traders who facilitate the transactions, said the people, who spoke on condition of anonymity to discuss private deliberations. The new measures could be unveiled as early as this week, some of the people said.

Treasury Secretary Scott Bessent discussed the plans when he met a group of European ambassadors earlier this week, the people said. “President Trump is the President of Peace, and I reiterated that under his leadership, America will continue to prioritize ending the war in Ukraine,” he wrote in a post on the social media platform X after the meeting. The people cautioned that any final decision rests with President Donald Trump. A request for comment placed with the Department of the Treasury outside of business hours wasn’t immediately returned.

The Kremlin is aware that some US officials are mulling plans to introduce new sanctions against Russia, Putin’s spokesman Dmitry Peskov told reporters Wednesday, according to the Interfax news service. “It’s obvious that any sanctions are harmful for the process of rebuilding relations,” he said.

Oil briefly rose after the news. Brent futures advanced as much as 70 cents a barrel to trade as high as $60.33, before paring their advance.

Read More »

Strategists Forecast Week on Week USA Crude Build

In an oil and gas report sent to Rigzone by the Macquarie team this week, Macquarie strategists, including Walt Chancellor, revealed that they are forecasting that U.S. crude inventories will be up by 2.5 million barrels for the week ending December 12.

“This follows a 1.8 million barrel draw in the prior week, with the crude balance realizing quite loose relative to our expectations amidst an apparent surge in Canadian imports,” the strategists said in the report. “While our balances point to a much looser fundamental picture this week, we note some potential for a ‘catch-up’ to the tighter side in this week’s data,” they added.

“For this week’s balance, from refineries, we look for a minimal reduction in crude runs. Among net imports, we model a small increase, with exports lower (-0.1 million barrels per day) and imports higher (+0.1 million barrels per day) on a nominal basis,” they continued. The strategists warned in the report that the timing of cargoes remains a source of potential volatility in this week’s crude balance.

“From implied domestic supply (prod.+adj.+transfers), we look for an increase (+0.4 million barrels per day) on a nominal basis this week,” the strategists went on to note. “Rounding out the picture, we anticipate another small increase (+0.3 million barrels) in SPR [Strategic Petroleum Reserve] stocks this week,” they added.

The analysts also stated in the report that, “among products”, they “again look for across the board builds (gasoline/distillate/jet +5.2/+2.0/+1.5 million barrels)”. “We model implied demand for these three products at ~14.3 million barrels per day for the week ending December 12,” they said.

In its latest weekly petroleum status report at the time of writing, which was released on December 10 and included data for the week ending December 5, the U.S. Energy Information Administration (EIA)
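The components the strategists cite (refinery runs, net imports, implied domestic supply) fit a simple weekly balance identity: the implied change in commercial crude stocks is roughly domestic supply plus net imports minus refinery runs, multiplied by seven days, with SPR movements tracked separately. A hedged sketch of that arithmetic, with invented example levels (this is not Macquarie’s model):

```python
def weekly_stock_change(supply_mbd, net_imports_mbd, runs_mbd, days=7):
    """Implied commercial crude stock change in million barrels, from the
    identity (domestic supply + net imports - refinery runs) x days.
    All rate inputs are in million barrels per day (mb/d)."""
    return round((supply_mbd + net_imports_mbd - runs_mbd) * days, 1)

# Invented example levels, purely to show the arithmetic:
# 13.5 mb/d supply + 1.5 mb/d net imports - 16.0 mb/d refinery runs
# = -1.0 mb/d, i.e. a 7 million barrel draw over a week.
print(weekly_stock_change(13.5, 1.5, 16.0))  # -7.0
```

This is why the nominal shifts quoted above (runs down slightly, net imports up about 0.2 mb/d combined, supply up 0.4 mb/d) push the forecast toward a build: each 0.1 mb/d of loosening adds roughly 0.7 million barrels over the week.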

Read More »

SK On pivots to stationary energy storage after Ford joint venture ends

Dive Brief:

Korean battery maker SK On says it remains committed to building out a Tennessee plant originally intended to supply electric vehicle batteries to Ford after a joint venture with the carmaker was called off. The manufacturer will maintain its strategic partnership with Ford and continue to supply EV batteries for its future vehicles, SK Americas spokesperson Joe Guy Collier said in an email. However, going forward, SK On plans to focus more on “profitable and sustainable growth” in the U.S. by supplying batteries produced in the Tennessee plant to other customers, including for stationary energy storage systems, the company said.

“This agreement allows SK On to strategically realign assets and production capacity to improve its operational efficiency,” the battery maker said in a statement. “It also enables the company to enhance productivity, operational flexibility, and respond more effectively to evolving market dynamics and diverse customer needs.”

Dive Insight:

Ford and SK On reached a mutual agreement to dissolve their electric vehicle battery joint venture, BlueOval SK, Collier confirmed in an email last week. The joint venture was established in September 2021 as part of a planned $11.4 billion investment by the two companies to build three large-scale manufacturing plants — one in Tennessee and two in Kentucky — to produce advanced batteries for Ford’s future EVs.

Under the terms of the dissolution agreement, each company will independently own and operate the joint venture’s former production facilities, Collier said. A Ford subsidiary will take full ownership of the two battery plants in Kentucky, and SK On will assume full ownership and operate the battery plant in Tennessee.

“SK On is committed to the Tennessee plant long-term,” the company said. “We plan to make it a key part of our manufacturing base for advanced batteries

Read More »

Shell Adds New Gas Customer in Nigeria

Shell PLC, through Shell Nigeria Gas Ltd (SNG), has signed an agreement to supply natural gas to SG Industrial FZE. The new customer is “a leading steel company in the Guandong industrial zone in the state”, the British company said on its Nigerian website.

“The agreement adds to a growing list of clients for SNG which has developed as a dependable supplier of gas through distribution pipelines of some 150 kilometers [93.21 miles], serving over 150 clients in Abia, Bayelsa, Ogun and Rivers states”, Shell said. Shell did not disclose the contract volume or value.

SNG managing director Ralph Gbobo said, “Our commitment is clear – to build, operate and maintain a gas distribution system that is not only reliable but resilient, transparent and designed to fuel growth”. SG Industrial vice general manager Moya Shua said, “This collaboration marks a major step forward in securing reliable energy that will power our growth and long-term ambitions”.

Shell said it had previously signed agreements to supply pipeline gas to Nigeria Distilleries Ltd III, Reliance Chemical Products Limited II, Rumbu Industries Nigeria Ltd and Ultimum Ltd.

Expanding its gas operations in the West African country, Shell recently announced a final investment decision to develop the HI field to supply up to 350 million standard cubic feet of gas a day, equivalent to about 60,000 oil barrels per day, to Nigeria LNG. The project is part of a joint venture in which Shell owns 40 percent through Shell Nigeria Exploration and Production Co Ltd. Sunlink Energies and Resources Ltd holds 60 percent.

At Nigeria LNG, which has a declared capacity of 22 million metric tons of liquefied natural gas a year, Shell owns 25.6 percent. “The increase in feedstock to NLNG, via the train VII project that aims to expand the Bonny Island terminal’s production capacity,

Read More »

Energy Secretary Ensures Washington Coal Plant Remains Open to Ensure Affordable, Reliable and Secure Power Heading into Winter

Emergency order addresses critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access

WASHINGTON—U.S. Secretary of Energy Chris Wright today issued an emergency order to ensure Americans in the Northwestern region of the United States have access to affordable, reliable and secure electricity heading into the cold winter months. The order directs TransAlta to keep Unit 2 of the Centralia Generating Station in Centralia, Washington available to operate. Unit 2 of the coal plant was scheduled to shut down at the end of 2025. The reliable supply of power from the Centralia coal plant is essential for grid stability in the Northwest. The order prioritizes minimizing the risk and costs of blackouts.

“The last administration’s energy subtraction policies had the United States on track to experience significantly more blackouts in the coming years — thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump administration will continue taking action to keep America’s coal plants running so we can stop the price spikes and ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to heat their homes all the time, regardless of whether the wind is blowing or the sun is shining.”

According to DOE’s Resource Adequacy Report, blackouts were on track to potentially increase 100 times by 2030 if the U.S. continued to take reliable power offline as it did during the Biden administration. The North American Electric Reliability Corporation (NERC) determined in its 2025-2026 Winter Reliability Assessment that the WECC Northwest region is at elevated risk during periods of extreme weather, such as prolonged, far-reaching cold snaps.

This order is in effect beginning on December 16, 2025, and continuing until March 16, 2026.

Background:

The NERC Winter Reliability Assessment warns that “extreme winter conditions extending over

Read More »

West of Orkney developers helped support 24 charities last year

The developers of the 2GW West of Orkney wind farm paid out a total of £18,000 to 24 organisations from its small donations fund in 2024. The money went to projects across Caithness, Sutherland and Orkney, including a mental health initiative in Thurso and a scheme by Dunnet Community Forest to improve the quality of meadows through the use of traditional scythes. Established in 2022, the fund offers up to £1,000 per project towards programmes in the far north. In addition to the small donations fund, the West of Orkney developers intend to follow other wind farms by establishing a community benefit fund once the project is operational. West of Orkney wind farm project director Stuart McAuley said: “Our donations programme is just one small way in which we can support some of the many valuable initiatives in Caithness, Sutherland and Orkney. “In every case we have been immensely impressed by the passion and professionalism each organisation brings, whether their focus is on sport, the arts, social care, education or the environment, and we hope the funds we provide help them achieve their goals.” In addition to the local donations scheme, the wind farm developers have helped fund a £1 million research and development programme led by EMEC in Orkney and a £1.2m education initiative led by UHI. It also provided £50,000 to support the FutureSkills apprenticeship programme in Caithness, with funds going to employment and training costs to help tackle skill shortages in the North of Scotland. The West of Orkney wind farm is being developed by Corio Generation, TotalEnergies and Renewable Infrastructure Development Group (RIDG). The project is among the leaders of the ScotWind cohort, having been the first to submit its offshore consent documents in late 2023. In addition, the project’s onshore plans were approved by the

Read More »

Biden bans US offshore oil and gas drilling ahead of Trump’s return

US President Joe Biden has announced a ban on offshore oil and gas drilling across vast swathes of the country’s coastal waters. The decision comes just weeks before his successor Donald Trump, who has vowed to increase US fossil fuel production, takes office. The drilling ban will affect 625 million acres of federal waters across America’s eastern and western coasts, the eastern Gulf of Mexico and Alaska’s Northern Bering Sea. The decision does not affect the western Gulf of Mexico, where much of American offshore oil and gas production occurs and is set to continue. In a statement, President Biden said he is taking action to protect the regions “from oil and natural gas drilling and the harm it can cause”. “My decision reflects what coastal communities, businesses, and beachgoers have known for a long time: that drilling off these coasts could cause irreversible damage to places we hold dear and is unnecessary to meet our nation’s energy needs,” Biden said. “It is not worth the risks.” “As the climate crisis continues to threaten communities across the country and we are transitioning to a clean energy economy, now is the time to protect these coasts for our children and grandchildren.”

Offshore drilling ban

The White House said Biden used his authority under the 1953 Outer Continental Shelf Lands Act, which allows presidents to withdraw areas from mineral leasing and drilling. However, the law does not give a president the right to unilaterally reverse a drilling ban without congressional approval. This means that Trump, who pledged to “unleash” US fossil fuel production during his re-election campaign, could find it difficult to overturn the ban after taking office. Trump

Read More »

The Download: our 10 Breakthrough Technologies for 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: MIT Technology Review’s 10 Breakthrough Technologies for 2025

Each year, we spend months researching and discussing which technologies will make the cut for our 10 Breakthrough Technologies list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more. We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. It’s hard to think of another industry that has as much of a hype machine behind it as tech does, so the real secret of the TR10 is really what we choose to leave off the list. Check out the full list of our 10 Breakthrough Technologies for 2025, which is front and center in our latest print issue. It’s all about the exciting innovations happening in the world right now, and includes some fascinating stories, such as:
+ How digital twins of human organs are set to transform medical treatment and shake up how we trial new drugs.
+ What will it take for us to fully trust robots? The answer is a complicated one.
+ Wind is an underutilized resource that has the potential to steer the notoriously dirty shipping industry toward a greener future. Read the full story.
+ After decades of frustration, machine-learning tools are helping ecologists to unlock a treasure trove of acoustic bird data—and to shed much-needed light on their migration habits. Read the full story.
+ How poop could help feed the planet—yes, really. Read the full story.
Roundtables: Unveiling the 10 Breakthrough Technologies of 2025

Last week, Amy Nordrum, our executive editor, joined our news editor Charlotte Jee to unveil our 10 Breakthrough Technologies of 2025 in an exclusive Roundtable discussion. Subscribers can watch their conversation back here. And, if you’re interested in previous discussions about topics ranging from mixed reality tech to gene editing to AI’s climate impact, check out some of the highlights from the past year’s events.

This international surveillance project aims to protect wheat from deadly diseases

For as long as there’s been domesticated wheat (about 8,000 years), there has been harvest-devastating rust. Breeding efforts in the mid-20th century led to rust-resistant wheat strains that boosted crop yields, and rust epidemics receded in much of the world. But now, after decades, rusts are considered a reemerging disease in Europe, at least partly due to climate change. An international initiative hopes to turn the tide by scaling up a system to track wheat diseases and forecast potential outbreaks to governments and farmers in close to real time. And by doing so, they hope to protect a crop that supplies about one-fifth of the world’s calories. Read the full story. —Shaoni Bhattacharya

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta has taken down its creepy AI profiles
Following a big backlash from unhappy users. (NBC News)
+ Many of the profiles were likely to have been live from as far back as 2023. (404 Media)
+ It also appears they were never very popular in the first place. (The Verge)

2 Uber and Lyft are racing to catch up with their robotaxi rivals
After abandoning their own self-driving projects years ago. (WSJ $)
+ China’s Pony.ai is gearing up to expand to Hong Kong. (Reuters)

3 Elon Musk is going after NASA
He’s largely veered away from criticising the space agency publicly—until now. (Wired $)
+ SpaceX’s Starship rocket has a legion of scientist fans. (The Guardian)
+ What’s next for NASA’s giant moon rocket? (MIT Technology Review)

4 How Sam Altman actually runs OpenAI
Featuring three-hour meetings and a whole lot of Slack messages. (Bloomberg $)
+ ChatGPT Pro is a pricey loss-maker, apparently. (MIT Technology Review)

5 The dangerous allure of TikTok
Migrants’ online portrayals of their experiences in America aren’t always reflective of their realities. (New Yorker $)

6 Demand for electricity is skyrocketing
And AI is only a part of it. (Economist $)
+ AI’s search for more energy is growing more urgent. (MIT Technology Review)

7 The messy ethics of writing religious sermons using AI
Skeptics aren’t convinced the technology should be used to channel spirituality. (NYT $)

8 How a wildlife app became an invaluable wildfire tracker
Watch Duty has become a safeguarding sensation across the US west. (The Guardian)
+ How AI can help spot wildfires. (MIT Technology Review)

9 Computer scientists just love oracles 🔮
Hypothetical devices are a surprisingly important part of computing. (Quanta Magazine)

10 Pet tech is booming 🐾
But not all gadgets are made equal. (FT $)
+ These scientists are working to extend the lifespan of pet dogs—and their owners. (MIT Technology Review)

Quote of the day

“The next kind of wave of this is like, well, what is AI doing for me right now other than telling me that I have AI?”

—Anshel Sag, principal analyst at Moor Insights and Strategy, tells Wired a lot of companies’ AI claims are overblown.
The big story

Broadband funding for Native communities could finally connect some of America’s most isolated places

September 2022

Rural and Native communities in the US have long had lower rates of cellular and broadband connectivity than urban areas, where four out of every five Americans live. Outside the cities and suburbs, which occupy barely 3% of US land, reliable internet service can still be hard to come by.
The covid-19 pandemic underscored the problem as Native communities locked down and moved school and other essential daily activities online. But it also kicked off an unprecedented surge of relief funding to solve it. Read the full story. —Robert Chaney

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Rollerskating Spice Girls is exactly what your Monday morning needs.
+ It’s not just you, some people really do look like their dogs!
+ I’m not sure if this is actually the world’s healthiest meal, but it sure looks tasty.
+ Ah, the old “bitten by a rabid fox” chestnut.

Read More »

Equinor Secures $3 Billion Financing for US Offshore Wind Project

Equinor ASA has announced a final investment decision on Empire Wind 1 and financial close for $3 billion in debt financing for the under-construction project offshore Long Island, expected to power 500,000 New York homes. The Norwegian majority state-owned energy major said in a statement it intends to farm down ownership “to further enhance value and reduce exposure”. Equinor has taken full ownership of Empire Wind 1 and 2 since last year, in a swap transaction with 50 percent co-venturer BP PLC that allowed the former to exit the Beacon Wind lease, also a 50-50 venture between the two. Equinor has yet to complete a portion of the transaction under which it would also acquire BP’s 50 percent share in the South Brooklyn Marine Terminal lease, according to the latest transaction update on Equinor’s website. The lease involves a terminal conversion project that was intended to serve as an interconnection station for Beacon Wind and Empire Wind, as agreed on by the two companies and the state of New York in 2022.  “The expected total capital investments, including fees for the use of the South Brooklyn Marine Terminal, are approximately $5 billion including the effect of expected future tax credits (ITCs)”, said the statement on Equinor’s website announcing financial close. Equinor did not disclose its backers, only saying, “The final group of lenders includes some of the most experienced lenders in the sector along with many of Equinor’s relationship banks”. “Empire Wind 1 will be the first offshore wind project to connect into the New York City grid”, the statement added. “The redevelopment of the South Brooklyn Marine Terminal and construction of Empire Wind 1 will create more than 1,000 union jobs in the construction phase”, Equinor said. On February 22, 2024, the Bureau of Ocean Energy Management (BOEM) announced

Read More »

USA Crude Oil Stocks Drop Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 1.2 million barrels from the week ending December 20 to the week ending December 27, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on January 2. Crude oil stocks, excluding the SPR, stood at 415.6 million barrels on December 27, 416.8 million barrels on December 20, and 431.1 million barrels on December 29, 2023, the report revealed. Crude oil in the SPR came in at 393.6 million barrels on December 27, 393.3 million barrels on December 20, and 354.4 million barrels on December 29, 2023, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.623 billion barrels on December 27, the report revealed. This figure was up 9.6 million barrels week on week and up 17.8 million barrels year on year, the report outlined. “At 415.6 million barrels, U.S. crude oil inventories are about five percent below the five year average for this time of year,” the EIA said in its latest report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are slightly below the five year average for this time of year. Finished gasoline inventories decreased last week while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 6.4 million barrels last week and are about six percent below the five year average for this time of year. Propane/propylene inventories decreased by 0.6 million barrels from last week and are 10 percent above the five year average for this time of year,” it went on to state. In the report, the EIA noted
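The week-on-week and year-on-year movements quoted in the report can be sanity-checked with a short script; this is purely an illustration using the figures from the excerpt above (all values in million barrels), not an EIA tool:

```python
# Crude oil stocks excluding the SPR, as reported in the EIA excerpt above
# (million barrels). Date labels are informal keys for this illustration.
crude_ex_spr = {
    "dec27": 415.6,            # week ending December 27
    "dec20": 416.8,            # week ending December 20
    "dec29_prior_year": 431.1, # week ending December 29, prior year
}

# Week-on-week change: matches the reported 1.2 million barrel draw.
week_on_week = round(crude_ex_spr["dec27"] - crude_ex_spr["dec20"], 1)

# Year-on-year change for the same series.
year_on_year = round(crude_ex_spr["dec27"] - crude_ex_spr["dec29_prior_year"], 1)

print(week_on_week)  # -1.2
print(year_on_year)  # -15.5
```

The same arithmetic applied to the total petroleum figure (1.623 billion barrels) reproduces the reported gains of 9.6 million barrels week on week and 17.8 million barrels year on year.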

Read More »

More telecom firms were breached by Chinese hackers than previously reported

Broader implications for US infrastructure

The Salt Typhoon revelations follow a broader pattern of state-sponsored cyber operations targeting the US technology ecosystem. The telecom sector, serving as a backbone for industries including finance, energy, and transportation, remains particularly vulnerable to such attacks. While Chinese officials have dismissed the accusations as disinformation, the recurring breaches underscore the pressing need for international collaboration and policy enforcement to deter future attacks. The Salt Typhoon campaign has uncovered alarming gaps in the cybersecurity of US telecommunications firms, with breaches now extending to over a dozen networks. Federal agencies and private firms must act swiftly to mitigate risks as adversaries continue to evolve their attack strategies. Strengthening oversight, fostering industry-wide collaboration, and investing in advanced defense mechanisms are essential steps toward safeguarding national security and public trust.

Read More »

The great AI hype correction of 2025

Some disillusionment was inevitable. When OpenAI released a free web app called ChatGPT in late 2022, it changed the course of an entire industry—and several world economies. Millions of people started talking to their computers, and their computers started talking back. We were enchanted, and we expected more. We got it. Technology companies scrambled to stay ahead, putting out rival products that outdid one another with each new release: voice, images, video. With nonstop one-upmanship, AI companies have presented each new product drop as a major breakthrough, reinforcing a widespread faith that this technology would just keep getting better. Boosters told us that progress was exponential. They posted charts plotting how far we’d come since last year’s models: Look how the line goes up! Generative AI could do anything, it seemed. Well, 2025 has been a year of reckoning.  This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next. For a start, the heads of the top AI companies made promises they couldn’t keep. They told us that generative AI would replace the white-collar workforce, bring about an age of abundance, make scientific discoveries, and help find new cures for disease. FOMO across the world’s economies, at least in the Global North, made CEOs tear up their playbooks and try to get in on the action. That’s when the shine started to come off. Though the technology may have been billed as a universal multitool that could revamp outdated business processes and cut costs, a number of studies published this year suggest that firms are failing to make the AI pixie dust work its magic. Surveys and trackers from a range of sources, including the US Census Bureau and Stanford University, have found that business uptake of AI tools is stalling. And when the tools do get tried out, many projects stay stuck in the pilot stage. 
Without broad buy-in across the economy it is not clear how the big AI companies will ever recoup the incredible amounts they’ve already spent in this race. 
At the same time, updates to the core technology are no longer the step changes they once were. The highest-profile example of this was the botched launch of GPT-5 in August. Here was OpenAI, the firm that had ignited (and to a large extent sustained) the current boom, set to release a brand-new generation of its technology. OpenAI had been hyping GPT-5 for months: “PhD-level expert in anything,” CEO Sam Altman crowed. On another occasion Altman posted, without comment, an image of the Death Star from Star Wars, which OpenAI stans took to be a symbol of ultimate power: Coming soon! Expectations were huge.
And yet, when it landed, GPT-5 seemed to be—more of the same? What followed was the biggest vibe shift since ChatGPT first appeared three years ago. “The era of boundary-breaking advancements is over,” Yannic Kilcher, an AI researcher and popular YouTuber, announced in a video posted two days after GPT-5 came out: “AGI is not coming. It seems very much that we’re in the Samsung Galaxy era of LLMs.” A lot of people (me included) have made the analogy with phones. For a decade or so, smartphones were the most exciting consumer tech in the world. Today, new products drop from Apple or Samsung with little fanfare. While superfans pore over small upgrades, to most people this year’s iPhone now looks and feels a lot like last year’s iPhone. Is that where we are with generative AI? And is it a problem? Sure, smartphones have become the new normal. But they changed the way the world works, too. To be clear, the last few years have been filled with genuine “Wow” moments, from the stunning leaps in the quality of video generation models to the problem-solving chops of so-called reasoning models to the world-class competition wins of the latest coding and math models. But this remarkable technology is only a few years old, and in many ways it is still experimental. Its successes come with big caveats. Perhaps we need to readjust our expectations.

The big reset

Let’s be careful here: The pendulum from hype to anti-hype can swing too far. It would be rash to dismiss this technology just because it has been oversold. The knee-jerk response when AI fails to live up to its hype is to say that progress has hit a wall. But that misunderstands how research and innovation in tech work. Progress has always moved in fits and starts. There are ways over, around, and under walls. Take a step back from the GPT-5 launch.
It came hot on the heels of a series of remarkable models that OpenAI had shipped in the previous months, including o1 and o3 (first-of-their-kind reasoning models that introduced the industry to a whole new paradigm) and Sora 2, which raised the bar for video generation once again. That doesn’t sound like hitting a wall to me.
AI is really good! Look at Nano Banana Pro, the new image generation model from Google DeepMind that can turn a book chapter into an infographic, and much more. It’s just there—for free—on your phone. And yet you can’t help but wonder: When the wow factor is gone, what’s left? How will we view this technology a year or five from now? Will we think it was worth the colossal costs, both financial and environmental? With that in mind, here are four ways to think about the state of AI at the end of 2025: The start of a much-needed hype correction.

01: LLMs are not everything

In some ways, it is the hype around large language models, not AI as a whole, that needs correcting. It has become obvious that LLMs are not the doorway to artificial general intelligence, or AGI, a hypothetical technology that some insist will one day be able to do any (cognitive) task a human can.
Even an AGI evangelist like Ilya Sutskever, chief scientist and cofounder at the AI startup Safe Superintelligence and former chief scientist and cofounder at OpenAI, now highlights the limitations of LLMs, a technology he had a huge hand in creating. LLMs are very good at learning how to do a lot of specific tasks, but they do not seem to learn the principles behind those tasks, Sutskever said in an interview with Dwarkesh Patel in November. It’s the difference between learning how to solve a thousand different algebra problems and learning how to solve any algebra problem. “The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people,” Sutskever said. It’s easy to imagine that LLMs can do anything because their use of language is so compelling. It is astonishing how well this technology can mimic the way people write and speak. And we are hardwired to see intelligence in things that behave in certain ways—whether it’s there or not. In other words, we have built machines with humanlike behavior and cannot resist seeing a humanlike mind behind them. That’s understandable. LLMs have been part of mainstream life for only a few years. But in that time, marketers have preyed on our shaky sense of what the technology can really do, pumping up expectations and turbocharging the hype. As we live with this technology and come to understand it better, those expectations should fall back down to earth.  
02: AI is not a quick fix to all your problems

In July, researchers at MIT published a study that became a tentpole talking point in the disillusionment camp. The headline result was that a whopping 95% of businesses that had tried using AI had found zero value in it. The general thrust of that claim was echoed by other research, too. In November, a study by researchers at Upwork, a company that runs an online marketplace for freelancers, found that agents powered by top LLMs from OpenAI, Google DeepMind, and Anthropic failed to complete many straightforward workplace tasks by themselves. This is miles off Altman’s prediction: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” he wrote on his personal blog in January. But what gets missed in that MIT study is that the researchers’ measure of success was pretty narrow. That 95% failure rate accounts for companies that had tried to implement bespoke AI systems but had not yet scaled them beyond the pilot stage after six months. It shouldn’t be too surprising that a lot of experiments with experimental technology don’t pan out straight away. That number also does not include the use of LLMs by employees outside of official pilots. The MIT researchers found that around 90% of the companies they surveyed had a kind of AI shadow economy where workers were using personal chatbot accounts. But the value of that shadow economy was not measured. When the Upwork study looked at how well agents completed tasks together with people who knew what they were doing, success rates shot up. The takeaway seems to be that a lot of people are figuring out for themselves how AI might help them with their jobs.
That fits with something the AI researcher and influencer (and coiner of the term “vibe coding”) Andrej Karpathy has noted: Chatbots are better than the average human at a lot of different things (think of giving legal advice, fixing bugs, doing high school math), but they are not better than an expert human. Karpathy suggests this may be why chatbots have proved popular with individual consumers, helping non-experts with everyday questions and tasks, but they have not upended the economy, which would require outperforming skilled employees at their jobs. That may change. For now, don’t be surprised that AI has not (yet) had the impact on jobs that boosters said it would. AI is not a quick fix, and it cannot replace humans. But there’s a lot to play for. The ways in which AI could be integrated into everyday workflows and business pipelines are still being tried out.   
03: Are we in a bubble? (If so, what kind of bubble?)

If AI is a bubble, is it like the subprime mortgage bubble of 2008 or the internet bubble of 2000? Because there’s a big difference. The subprime bubble wiped out a big part of the economy, because when it burst it left nothing behind except debt and overvalued real estate. The dot-com bubble wiped out a lot of companies, which sent ripples across the world, but it left behind the infant internet—an international network of cables and a handful of startups, like Google and Amazon, that became the tech giants of today. Then again, maybe we’re in a bubble unlike either of those. After all, there’s no real business model for LLMs right now. We don’t yet know what the killer app will be, or if there will even be one. And many economists are concerned about the unprecedented amounts of money being sunk into the infrastructure required to build capacity and serve the projected demand. But what if that demand doesn’t materialize? Add to that the weird circularity of many of those deals—with Nvidia paying OpenAI to pay Nvidia, and so on—and it’s no surprise everybody’s got a different take on what’s coming. Some investors remain sanguine. In an interview with the Technology Business Programming Network podcast in November, Glenn Hutchins, cofounder of Silver Lake Partners, a major international private equity firm, gave a few reasons not to worry. “Every one of these data centers—almost all of them—has a solvent counterparty that is contracted to take all the output they’re built to suit,” he said. In other words, it’s not a case of “Build it and they’ll come”—the customers are already locked in. And, he pointed out, one of the biggest of those solvent counterparties is Microsoft. “Microsoft has the world’s best credit rating,” Hutchins said.
“If you sign a deal with Microsoft to take the output from your data center, Satya is good for it.” Many CEOs will be looking back at the dot-com bubble and trying to learn its lessons. Here’s one way to see it: The companies that went bust back then didn’t have the money to last the distance. Those that survived the crash thrived. With that lesson in mind, AI companies today are trying to pay their way through what may or may not be a bubble. Stay in the race; don’t get left behind. Even so, it’s a desperate gamble.
But there’s another lesson too. Companies that might look like sideshows can turn into unicorns fast. Take Synthesia, which makes avatar generation tools for businesses. Nathan Benaich, cofounder of the VC firm Air Street Capital, admits that when he first heard about the company a few years ago, back when fear of deepfakes was rife, he wasn’t sure what its tech was for and thought there was no market for it. “We didn’t know who would pay for lip-synching and voice cloning,” he says. “Turns out there’s a lot of people who wanted to pay for it.” Synthesia now has around 55,000 corporate customers and brings in around $150 million a year. In October, the company was valued at $4 billion.

04: ChatGPT was not the beginning, and it won’t be the end

ChatGPT was the culmination of a decade’s worth of progress in deep learning, the technology that underpins all of modern AI. The seeds of deep learning itself were planted in the 1980s. The field as a whole goes back at least to the 1950s. If progress is measured against that backdrop, generative AI has barely got going. Meanwhile, research is at a fever pitch. There are more high-quality submissions to the world’s major AI conferences than ever before. This year, organizers of some of those conferences resorted to turning down papers that reviewers had already approved, just to manage numbers. (At the same time, preprint servers like arXiv have been flooded with AI-generated research slop.) “It’s back to the age of research again,” Sutskever said in that Dwarkesh interview, talking about the current bottleneck with LLMs. That’s not a setback; that’s the start of something new. “There’s always a lot of hype beasts,” says Benaich. But he thinks there’s an upside to that: Hype attracts the money and talent needed to make real progress. “You know, it was only like two or three years ago that the people who built these models were basically research nerds that just happened on something that kind of worked,” he says.
“Now everybody who’s good at anything in technology is working on this.”

Where do we go from here?

The relentless hype hasn’t come just from companies drumming up business for their vastly expensive new technologies. There’s a large cohort of people—inside and outside the industry—who want to believe in the promise of machines that can read, write, and think. It’s a wild decades-old dream. But the hype was never sustainable—and that’s a good thing. We now have a chance to reset expectations and see this technology for what it really is—assess its true capabilities, understand its flaws, and take the time to learn how to apply it in valuable (and beneficial) ways. “We’re still trying to figure out how to invoke certain behaviors from this insanely high-dimensional black box of information and skills,” says Benaich. This hype correction was long overdue. But know that AI isn’t going anywhere. We don’t even fully understand what we’ve built so far, let alone what’s coming next.

Read More »

Generative AI hype distracts us from AI’s more important breakthroughs

On April 28, 2022, at a highly anticipated concert in Spokane, Washington, the musician Paul McCartney astonished his audience with a groundbreaking application of AI: He began to perform with a lifelike depiction of his long-deceased musical partner, John Lennon.  Using recent advances in audio and video processing, engineers had taken the pair’s final performance (London, 1969), separated Lennon’s voice and image from the original mix and restored them with lifelike clarity. This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next. For years, researchers like me had taught machines to “see” and “hear” in order to make such a moment possible. As McCartney and Lennon appeared to reunite across time and space, the arena fell silent; many in the crowd began to cry. As an AI scientist and lifelong Beatles fan, I felt profound gratitude that we could experience this truly life-changing moment.  Later that year, the world was captivated by another major breakthrough: AI conversation. For the first time in history, systems capable of generating new, contextually relevant comments in real time, on virtually any subject, were widely accessible owing to the release of ChatGPT. Billions of people were suddenly able to interact with AI. This ignited the public’s imagination about what AI could be, bringing an explosion of creative ideas, hopes, and fears. Having done my PhD on AI language generation (long considered niche), I was thrilled we had come this far. But the awe I felt was rivaled by my growing rage at the flood of media takes and self-appointed experts insisting that generative AI could do things it simply can’t, and warning that anyone who didn’t adopt it would be left behind.
This kind of hype has contributed to a frenzy of misunderstandings about what AI actually is and what it can and cannot do. Crucially, generative AI is a seductive distraction from the type of AI that is most likely to make your life better, or even save it: predictive AI. In contrast to AI designed for generative tasks, predictive AI involves tasks with a finite, known set of answers; the system just has to process information to say which answer is right. A basic example is plant recognition: Point your phone camera at a plant and learn that it’s a Western sword fern. Generative tasks, in contrast, have no finite set of correct answers: The system must blend snippets of information it’s been trained on to create, for example, a novel picture of a fern.

The generative AI technology involved in chatbots, face-swaps, and synthetic video makes for stunning demos, driving clicks and sales as viewers run wild with ideas that superhuman AI will be capable of bringing us abundance or extinction. Yet predictive AI has quietly been improving weather prediction and food safety, enabling higher-quality music production, helping to organize photos, and accurately predicting the fastest driving routes. We incorporate predictive AI into our everyday lives without even thinking about it, a testament to its indispensable utility.
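The distinction above can be caricatured in a few lines of Python. Everything here is invented for illustration (the label set, the scoring dictionary, the functions) and stands in for real models: the point is only that a predictive task picks from a fixed answer set, while a generative task composes open-ended output.

```python
# Toy contrast between the two task types described above. A predictive
# task selects from a finite, known answer set; a generative task has no
# fixed answer set to check the output against.

def predict_plant(features, scores):
    """Predictive AI (caricature): return the best answer from a fixed label set."""
    labels = ["western sword fern", "poison ivy", "dandelion"]  # finite options
    return max(labels, key=lambda label: scores.get((features, label), 0.0))

def generate_caption(fragments):
    """Generative AI (caricature): blend trained-on fragments into a new string."""
    return " ".join(fragments)

# The predictive call can be checked for correctness; the generative one
# can only be judged as more or less plausible.
scores = {("fronds, spores", "western sword fern"): 0.9}
print(predict_plant("fronds, spores", scores))              # → western sword fern
print(generate_caption(["a", "novel", "fern", "picture"]))  # → a novel fern picture
```

The asymmetry in the last two lines is the article's point in miniature: the first output is either right or wrong, while the second can only be rated on plausibility.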
To get a sense of the immense progress on predictive AI and its future potential, we can look at the trajectory of the past 20 years. In 2005, we couldn’t get AI to tell the difference between a person and a pencil. By 2013, AI still couldn’t reliably detect a bird in a photo, and the difference between a pedestrian and a Coke bottle was massively confounding (this is how I learned that bottles do kind of look like people, if people had no heads). The thought of deploying these systems in the real world was the stuff of science fiction.  Yet over the past 10 years, predictive AI has not only nailed bird detection down to the specific species; it has rapidly improved life-critical medical services like identifying problematic lesions and heart arrhythmia. Because of this technology, seismologists can predict earthquakes and meteorologists can predict flooding more reliably than ever before. Accuracy has skyrocketed for consumer-facing tech that detects and classifies everything from what song you’re thinking of when you hum a tune to which objects to avoid while you’re driving—making self-driving cars a reality.  In the very near future, we should be able to accurately detect tumors and forecast hurricanes long before they can hurt anyone, realizing the lifelong hopes of people all over the world. That might not be as flashy as generating your own Studio Ghibli–ish film, but it’s definitely hype-worthy.  Predictive AI systems have also been shown to be incredibly useful when they leverage certain generative techniques within a constrained set of options. Systems of this type are diverse, spanning everything from outfit visualization to cross-language translation. Soon, predictive-generative hybrid systems will make it possible to clone your own voice speaking another language in real time, an extraordinary aid for travel (with serious impersonation risks). 
There’s considerable room for growth here, but generative AI delivers real value when anchored by strong predictive methods. To understand the difference between these two broad classes of AI, imagine yourself as an AI system tasked with showing someone what a cat looks like. You could adopt a generative approach, cutting and pasting small fragments from various cat images (potentially from sources that object) to construct a seemingly perfect depiction. The ability of modern generative AI to produce such a flawless collage is what makes it so astonishing. Alternatively, you could take the predictive approach: Simply locate and point to an existing picture of a cat. That method is much less glamorous but more energy-efficient and more likely to be accurate, and it properly acknowledges the original source. Generative AI is designed to create things that look real; predictive AI identifies what is real. A misunderstanding that generative systems are retrieving things when they are actually creating them has led to grave consequences when text is involved, requiring the withdrawal of legal rulings and the retraction of scientific articles.
Driving this confusion is a tendency for people to hype AI without making it clear what kind of AI they’re talking about (I reckon many don’t know). It’s very easy to equate “AI” with generative AI, or even just language-generating AI, and assume that all other capabilities fall out from there. That fallacy makes a ton of sense: The term literally references “intelligence,” and our human understanding of what “intelligence” might be is often mediated by the use of language. (Spoiler: No one actually knows what intelligence is.) But the phrase “artificial intelligence” was intentionally designed in the 1950s to inspire awe and allude to something humanlike. Today, it just refers to a set of disparate technologies for processing digital data. Some of my friends find it helpful to call it “mathy maths” instead. The bias toward treating generative AI as the most powerful and real form of AI is troubling given that it consumes considerably more energy than predictive AI systems. It also means using existing human work in AI products against the original creators’ wishes and replacing human jobs with AI systems whose capabilities their work made possible in the first place—without compensation. AI can be amazingly powerful, but that doesn’t mean creators should be ripped off.  Watching this unfold as an AI developer within the tech industry, I’ve drawn important lessons for next steps. The widespread appeal of AI is clearly linked to the intuitive nature of conversation-based interactions. But this method of engagement currently overuses generative methods where predictive ones would suffice, resulting in an awkward situation that’s confusing for users while imposing heavy costs in energy consumption, exploitation, and job displacement.  We have witnessed just a glimpse of AI’s full potential: The current excitement around AI reflects what it could be, not what it is. 
Generation-based approaches strain resources while still falling short on representation, accuracy, and the wishes of people whose work is folded into the system. 
If we can shift the spotlight from the hype around generative technologies to the predictive advances already transforming daily life, we can build AI that is genuinely useful, equitable, and sustainable. The systems that help doctors catch diseases earlier, help scientists forecast disasters sooner, and help everyday people navigate their lives more safely are the ones poised to deliver the greatest impact. The future of beneficial AI will not be defined by the flashiest demos but by the quiet, rigorous progress that makes technology trustworthy. And if we build on that foundation—pairing predictive strength with more mature data practices and intuitive natural-language interfaces—AI can finally start living up to the promise that many people perceive today.

Dr. Margaret Mitchell is a computer science researcher and chief ethics scientist at AI startup Hugging Face. She has worked in the technology industry for 15 years, and has published over 100 papers on natural language generation, assistive technology, computer vision, and AI ethics. Her work has received numerous awards and has been implemented by multiple technology companies.

Read More »

What even is the AI bubble?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. In July, a widely cited MIT study claimed that 95% of organizations that invested in generative AI were getting “zero return.” Tech stocks briefly plunged. While the study itself was more nuanced than the headlines, for many it still felt like the first hard data point confirming what skeptics had muttered for months: Hype around AI might be outpacing reality. Then, in August, OpenAI CEO Sam Altman said what everyone in Silicon Valley had been whispering. “Are we in a phase where investors as a whole are overexcited about AI?” he said during a press dinner I attended. “My opinion is yes.”
He compared the current moment to the dot-com bubble. “When bubbles happen, smart people get overexcited about a kernel of truth,” he explained. “Tech was really important. The internet was a really big deal. People got overexcited.”  With those comments, it was off to the races. The next day’s stock market dip was attributed to the sentiment he shared. The question “Are we in an AI bubble?” became inescapable.
Who thinks it is a bubble?

The short answer: Lots of people. But not everyone agrees on who or what is overinflated. Tech leaders are using this moment of fear to take shots at their rivals and position themselves as clear winners on the other side. How they describe the bubble depends on where their company sits. When I asked Meta CEO Mark Zuckerberg about the AI bubble in September, he ran through the historical analogies of past bubbles—railroads, fiber for the internet, the dot-com boom—and noted that in each case, “the infrastructure gets built out, people take on too much debt, and then you hit some blip … and then a lot of the companies end up going out of business.” But Zuckerberg’s prescription wasn’t for Meta to pump the brakes. It was to keep spending: “If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously. But I’d say the risk is higher on the other side.” Bret Taylor, the chairman of OpenAI and CEO of the AI startup Sierra, uses a mental model from the late ’90s to help navigate this AI bubble. “I think the closest analogue to this AI wave is the dot-com boom or bubble, depending on your level of pessimism,” he recently told me. Back then, he explained, everyone knew e-commerce was going to be big, but there was a massive difference between Buy.com and Amazon. Taylor and others have been trying to position themselves as today’s Amazon. Still others are arguing that the pain will be widespread. Google CEO Sundar Pichai told the BBC this month that there’s “some irrationality” in the current boom. Asked whether Google would be immune to a bubble bursting, he warned, “I think no company is going to be immune, including us.”

What’s inflating the bubble?

Companies are raising enormous sums of money and seeing unprecedented valuations.
Much of that money, in turn, is going toward the buildout of massive data centers—on which both private companies like OpenAI and Elon Musk’s xAI and public ones such as Meta and Google are spending heavily. OpenAI has pledged that it will spend $500 billion to build AI data centers, more than 15 times what was spent on the Manhattan Project. This eye-popping spending on AI data centers isn’t entirely detached from reality. The leaders of the top AI companies all stress that they’re bottlenecked by their limited access to computing power. You hear it constantly when you talk to them. Startups can’t get the GPU allocations they need. Hyperscalers are rationing compute, saving it for their best customers. If today’s AI market is as brutally supply-constrained as tech leaders claim, perhaps aggressive infrastructure buildouts are warranted. But some of the numbers are too large to comprehend. Sam Altman has told employees that OpenAI’s moonshot goal is to build 250 gigawatts of computing capacity by 2033, roughly equaling India’s total national electricity demand. Such a plan would cost more than $12 trillion by today’s standards.
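As a sanity check on those figures, here is a back-of-envelope calculation using the article's numbers (which are estimates and projections, not official company guidance) to see what the plan implies per gigawatt of buildout:

```python
# Back-of-envelope check on the quoted moonshot: 250 GW of compute
# capacity at a total cost of more than $12 trillion implies a rough
# price per gigawatt of AI data center buildout at today's standards.

target_gw = 250            # OpenAI's reported capacity goal for 2033
total_cost_usd = 12e12     # "more than $12 trillion", per the article

cost_per_gw = total_cost_usd / target_gw
print(f"≈ ${cost_per_gw / 1e9:.0f} billion per gigawatt")  # → ≈ $48 billion per gigawatt
```

That implied cost of roughly $48 billion per gigawatt is in line with the scale of the individual multi-gigawatt data center deals being announced, which is what makes the headline total so hard to comprehend.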

“I do think there’s real execution risk,” OpenAI president and cofounder Greg Brockman recently told me about the company’s aggressive infrastructure goals. “Everything we say about the future, we see that it’s a possibility. It is not a certainty, but I don’t think the uncertainty comes from scientific questions. It’s a lot of hard work.”

Who is exposed, and who is to blame?

It depends on who you ask. During the August press dinner, where he made his market-moving comments, Altman was blunt about where he sees the excess. He said it’s “insane” that some AI startups with “three people and an idea” are receiving funding at such high valuations. “That’s not rational behavior,” he said. “Someone’s gonna get burned there, I think.” As Safe Superintelligence cofounder (and former OpenAI chief scientist and cofounder) Ilya Sutskever put it on a recent podcast: Silicon Valley has “more companies than ideas.” Demis Hassabis, the CEO of Google DeepMind, offered a similar diagnosis when I spoke with him in November. “It feels like there’s obviously a bubble in the private market,” he said. “You look at seed rounds with just nothing being tens of billions of dollars. That seems a little unsustainable.” Anthropic CEO Dario Amodei also struck at his competition during the New York Times DealBook Summit in early December. He said he feels confident about the technology itself but worries about how others are behaving on the business side: “On the economic side, I have my concerns where, even if the technology fulfills all its promises, I think there are players in the ecosystem who, if they just make a timing error, they just get it off by a little bit, bad things could happen.” He stopped short of naming Sam Altman and OpenAI, but the implication was clear.
“There are some players who are YOLOing,” he said. “Let’s say you’re a person who just kind of constitutionally wants to YOLO things or just likes big numbers. Then you may turn the dial too far.” Amodei also flagged “circular deals,” or the increasingly common arrangements where chip suppliers like Nvidia invest in AI companies that then turn around and spend those funds on their chips. Anthropic has done some of these, he said, though “not at the same scale as some other players.” (OpenAI is at the center of a number of such deals, as are Nvidia, CoreWeave, and a roster of other players.)  The danger, he explained, comes when the numbers get too big: “If you start stacking these where they get to huge amounts of money, and you’re saying, ’By 2027 or 2028 I need to make $200 billion a year,’ then yeah, you can overextend yourself.” Zuckerberg shared a similar message at an internal employee Q&A session after Meta’s last earnings call. He noted that unprofitable startups like OpenAI and Anthropic risk bankruptcy if they misjudge the timing of their investments, but Meta has the advantage of strong cash flow, he reassured staff.
How could a bubble burst?

My conversations with tech executives and investors suggest that the bubble will be most likely to pop if overfunded startups can’t turn a profit or grow into their lofty valuations. This bubble could last longer than past ones, given that private companies aren’t traded on public markets and their valuations therefore move more slowly, but the ripple effects will still be profound when the end comes. If companies making grand commitments to data center buildouts no longer have the revenue growth to support them, the headline deals that have propped up the stock market come into question. Anthropic’s Amodei illustrated the problem during his DealBook Summit appearance, where he said the multiyear data center commitments he has to make combine with the company’s rapid, unpredictable revenue growth to create a “cone of uncertainty” about how much to spend.
The two most prominent private players in AI, OpenAI and Anthropic, have yet to turn a profit. A recent Deutsche Bank chart put the situation in stark historical context. Amazon burned through $3 billion before becoming profitable. Tesla, around $4 billion. Uber, $30 billion. OpenAI is projected to burn through $140 billion by 2029, while Anthropic is expected to burn $20 billion by 2027. Consultants at Bain estimate that the wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030 just to justify the investment. That’s more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia. When I talk to leaders of these large tech companies, they all agree that their sprawling businesses can absorb an expensive miscalculation about the returns from their AI infrastructure buildouts. It’s all the other companies that are either highly leveraged with debt or just unprofitable—even OpenAI and Anthropic—that they worry about. Still, given the level of spending, the industry needs a viable business model beyond subscriptions, which can’t drive profits from billions of people’s eyeballs the way the ad-driven businesses that have defined the last 20 years of the internet do. Even the largest tech companies know they need to ship the world-changing agents they keep hyping: AI that can fully replace coworkers and complete tasks in the real world. For now, investors are mostly buying into the hype of the powerful AI systems that these data center buildouts will supposedly unlock in the future. At some point the biggest spenders, like OpenAI, will need to show investors that the money spent on the infrastructure buildout was worth it. There’s also still a lot of uncertainty about the technical direction that AI is heading in.
LLMs are expected to remain critical to more advanced AI systems, but industry leaders can’t seem to agree on which additional breakthroughs are needed to achieve artificial general intelligence, or AGI. Some are betting on new kinds of AI that can understand the physical world, while others are focused on training AI to learn in a general way, like a human. In other words, what if all this unprecedented spending turns out to have been backing the wrong horse?

The question now

What makes this moment surreal is the honesty. The same people pouring billions into AI will openly tell you it might all come crashing down.
Taylor framed it as two truths existing at once. “I think it is both true that AI will transform the economy,” he told me, “and I think we’re also in a bubble, and a lot of people will lose a lot of money. I think both are absolutely true at the same time.” He compared it to the internet. Webvan failed, but Instacart succeeded years later with essentially the same idea. If you were an Amazon shareholder from its IPO to now, you’re looking pretty good. If you were a Webvan shareholder, you probably feel differently.  “When the dust settles and you see who the winners are, society benefits from those inventions,” Amazon founder Jeff Bezos said in October. “This is real. The benefit to society from AI is going to be gigantic.” Goldman Sachs says the AI boom now looks the way tech stocks did in 1997, several years before the dot-com bubble actually burst. The bank flagged five warning signs seen in the late 1990s that investors should watch now: peak investment spending, falling corporate profits, rising corporate debt, Fed rate cuts, and widening credit spreads. We’re probably not at 1999 levels yet. But the imbalances are building fast. Michael Burry, who famously called the 2008 housing bubble collapse (as seen in the film The Big Short), recently compared the AI boom to the 1990s dot-com bubble too. Maybe AI will save us from our own irrational exuberance. But for now, we’re living in an in-between moment when everyone knows what’s coming but keeps blowing more air into the balloon anyway. As Altman put it that night at dinner: “Someone is going to lose a phenomenal amount of money. We don’t know who.” Alex Heath is the author of Sources, a newsletter about the AI race, and the cohost of ACCESS, a podcast about the tech industry’s inside conversations. Previously, he was deputy editor at The Verge.

Read More »

AI might not be coming for lawyers’ jobs anytime soon

When the generative AI boom took off in 2022, Rudi Miller and her law school classmates were suddenly gripped with anxiety. “Before graduating, there was discussion about what the job market would look like for us if AI became adopted,” she recalls. So when it came time to choose a specialty, Miller—now a junior associate at the law firm Orrick—decided to become a litigator, the kind of lawyer who represents clients in court. She hoped the courtroom would be the last human stage. “Judges haven’t allowed ChatGPT-enabled robots to argue in court yet,” she says.

She had reason to be worried. The artificial-intelligence job apocalypse seemed to be coming for lawyers. In March 2023, researchers reported that GPT-4 had smashed the Uniform Bar Exam. That same month, an industry report predicted that 44% of legal work could be automated. The legal tech industry entered a boom as law firms began adopting generative AI to mine mountains of documents and draft contracts, work ordinarily done by junior associates. Last month, the law firm Clifford Chance axed 10% of its staff in London, citing increased use of AI as a reason.
But for all the hype, LLMs are still far from thinking like lawyers—let alone replacing them. The models continue to hallucinate case citations, struggle to navigate gray areas of the law and reason about novel questions, and stumble when they attempt to synthesize information scattered across statutes, regulations, and court cases. And there are deeper institutional reasons to think the models could struggle to supplant legal jobs. While AI is reshaping the grunt work of the profession, the end of lawyers may not be arriving anytime soon.

The big experiment

The legal industry has long been defined by long hours and grueling workloads, so the promise of superhuman efficiency is appealing. Law firms are experimenting with general-purpose tools like ChatGPT and Microsoft Copilot and specialized legal tools like Harvey and Thomson Reuters’ CoCounsel, with some building their own in-house tools on top of frontier models. They’re rolling out AI boot camps and letting associates bill hundreds of hours to AI experimentation. As of 2024, 47.8% of attorneys at law firms employing 500 or more lawyers used AI, according to the American Bar Association.
But lawyers say that LLMs are a long way from reasoning well enough to replace them. Lucas Hale, a junior associate at McDermott Will & Schulte, has been embracing AI for many routine chores. He uses Relativity to sift through long documents and Microsoft Copilot for drafting legal citations. But when he turns to ChatGPT with a complex legal question, he finds the chatbot spewing hallucinations, rambling off topic, or drawing a blank. “In the case where we have a very narrow question or a question of first impression for the court,” he says, referring to a novel legal question that a court has never decided before, “that’s the kind of thinking that the tool can’t do.” Much of Hale’s work involves creatively applying the law to new fact patterns. “Right now, I don’t think very much of the work that litigators do, at least not the work that I do, can be outsourced to an AI utility,” he says. Allison Douglis, a senior associate at Jenner & Block, uses an LLM to kick off her legal research. But the tools only take her so far. “When it comes to actually fleshing out and developing an argument as a litigator, I don’t think they’re there,” she says. She has watched the models hallucinate case citations and fumble through ambiguous areas of the law. “Right now, I would much rather work with a junior associate than an AI tool,” she says. “Unless they get extraordinarily good very quickly, I can’t imagine that changing in the near future.”

Beyond the bar

The legal industry has seemed ripe for an AI takeover ever since ChatGPT’s triumph on the bar exam. But passing a standardized test isn’t the same as practicing law. The exam tests whether people can memorize legal rules and apply them to hypothetical situations—not whether they can exercise strategic judgment in complicated realities or craft arguments in uncharted legal territory. And models can be trained to ace benchmarks without genuinely improving their reasoning.
But new benchmarks are aiming to better measure the models’ ability to do legal work in the real world. The Professional Reasoning Benchmark, published by Scale AI in November, evaluated leading LLMs on legal and financial tasks designed by professionals in the field. The study found that the models have critical gaps in their reliability for professional adoption, with the best-performing model scoring only 37% on the most difficult legal problems, meaning it met just over a third of possible points on the evaluation criteria. The models frequently made inaccurate legal judgments, and if they did reach correct conclusions, they did so through incomplete or opaque reasoning processes. “The tools actually are not there to basically substitute [for] your lawyer,” says Afra Feyza Akyurek, the lead author of the paper. “Even though a lot of people think that LLMs have a good grasp of the law, it’s still lagging behind.”

The paper builds on other benchmarks measuring the models’ performance on economically valuable work. The AI Productivity Index, published by the data firm Mercor in September and updated in December, found that the models have “substantial limitations” in performing legal work. The best-performing model scored 77.9% on legal tasks, meaning it satisfied roughly four out of five evaluation criteria. A model with such a score might generate substantial economic value in some industries, but in fields where errors are costly, it may not be useful at all, the early version of the study noted.   Professional benchmarks are a big step forward in evaluating the LLMs’ real-world capabilities, but they may still not capture what lawyers actually do. “These questions, although more challenging than those in past benchmarks, still don’t fully reflect the kinds of subjective, extremely challenging questions lawyers tackle in real life,” says Jon Choi, a law professor at the University of Washington School of Law, who coauthored a study on legal benchmarks in 2023.  Unlike math or coding, in which LLMs have made significant progress, legal reasoning may be challenging for the models to learn. The law deals with messy real-world problems, riddled with ambiguity and subjectivity, that often have no right answer, says Choi. Making matters worse, a lot of legal work isn’t recorded in ways that can be used to train the models, he says. When it is, documents can span hundreds of pages, scattered across statutes, regulations, and court cases that exist in a complex hierarchy.   But a more fundamental limitation might be that LLMs are simply not trained to think like lawyers. “The reasoning models still don’t fully reason about problems like we humans do,” says Julian Nyarko, a law professor at Stanford Law School. 
The models may lack a mental model of the world—the ability to simulate a scenario and predict what will happen—and that capability could be at the heart of complex legal reasoning, he says. It’s possible that the current paradigm of LLMs trained on next-word prediction gets us only so far.

The jobs remain

Despite early signs that AI is beginning to affect entry-level workers, labor statistics have yet to show that lawyers are being displaced. Some 93.4% of law school graduates in 2024 were employed within 10 months of graduation—the highest rate on record—according to the National Association for Law Placement. The number of graduates working in law firms rose by 13% from 2023 to 2024. For now, law firms are slow to shrink their ranks. “We’re not reducing headcounts at this point,” said Amy Ross, the chief of attorney talent at the law firm Ropes & Gray. Even looking ahead, the effects could be incremental. “I will expect some impact on the legal profession’s labor market, but not major,” says Mert Demirer, an economist at MIT. “AI is going to be very useful in terms of information discovery and summary,” he says, but for complex legal tasks, “the law’s low risk tolerance, plus the current capabilities of AI, are going to make that case less automatable at this point.” Capabilities may evolve over time, but that’s a big unknown. It’s not just that the models themselves are not ready to replace junior lawyers. Institutional barriers may also shape how AI is deployed. Higher productivity reduces billable hours, challenging the dominant business model of law firms. Liability looms large for lawyers, and clients may still want a human on the hook. Regulations could also constrain how lawyers use the technology.
Still, as AI takes on some associate work, law firms may need to reinvent their training system. “When junior work dries up, you have to have a more formal way of teaching than hoping that an apprenticeship works,” says Ethan Mollick, a management professor at the Wharton School of the University of Pennsylvania. Zach Couger, a junior associate at McDermott Will & Schulte, leans on ChatGPT to comb through piles of contracts he once slogged through by hand. He can’t imagine going back to doing the job himself, but he wonders what he’s missing.  “I’m worried that I’m not getting the same reps that senior attorneys got,” he says, referring to the repetitive training that has long defined the early experiences of lawyers. “On the other hand, it is very nice to have a semi–knowledge expert to just ask questions to that’s not a partner who’s also very busy.”  Even though an AI job apocalypse looks distant, the uncertainty sticks with him. Lately, Couger finds himself staying up late, wondering if he could be part of the last class of associates at big law firms: “I may be the last plane out.”

Read More »

AI materials discovery now needs to move into the real world

The microwave-size instrument at Lila Sciences in Cambridge, Massachusetts, doesn’t look all that different from others that I’ve seen in state-of-the-art materials labs. Inside its vacuum chamber, the machine zaps a palette of different elements to create vaporized particles, which then fly through the chamber and land to create a thin film, using a technique called sputtering. What sets this instrument apart is that artificial intelligence is running the experiment; an AI agent, trained on vast amounts of scientific literature and data, has determined the recipe and is varying the combination of elements. Later, a person will walk the samples, each containing multiple potential catalysts, over to a different part of the lab for testing. Another AI agent will scan and interpret the data, using it to suggest another round of experiments to try to optimize the materials’ performance.

For now, a human scientist keeps a close eye on the experiments and will approve the next steps on the basis of the AI’s suggestions and the test results. But the startup is convinced this AI-controlled machine is a peek into the future of materials discovery—one in which autonomous labs could make it far cheaper and faster to come up with novel and useful compounds. Flush with hundreds of millions of dollars in new funding, Lila Sciences is one of AI’s latest unicorns. The company is on a larger mission to use AI-run autonomous labs for scientific discovery—the goal is to achieve what it calls scientific superintelligence. But I’m here this morning to learn specifically about the discovery of new materials.
Lila Sciences’ John Gregoire (background) and Rafael Gómez-Bombarelli watch as an AI-guided sputtering instrument makes samples of thin-film alloys. CODY O’LOUGHLIN

We desperately need better materials to solve our problems. We’ll need improved electrodes and other parts for more powerful batteries; compounds to more cheaply suck carbon dioxide out of the air; and better catalysts to make green hydrogen and other clean fuels and chemicals. And we will likely need novel materials like higher-temperature superconductors, improved magnets, and different types of semiconductors for a next generation of breakthroughs in everything from quantum computing to fusion power to AI hardware.
But materials science has not had many commercial wins in the last few decades. In part because of its complexity and the lack of successes, the field has become something of an innovation backwater, overshadowed by the more glamorous—and lucrative—search for new drugs and insights into biology. The idea of using AI for materials discovery is not exactly new, but it got a huge boost in 2020 when DeepMind showed that its AlphaFold2 model could accurately predict the three-dimensional structure of proteins. Then, in 2022, came the success and popularity of ChatGPT. The hope that similar AI models using deep learning could aid in doing science captivated tech insiders. Why not use our new generative AI capabilities to search the vast chemical landscape and help simulate atomic structures, pointing the way to new substances with amazing properties?
Researchers touted an AI model that had reportedly discovered “millions of new materials.” The money began pouring in, funding a host of startups. But so far there has been no “eureka” moment, no ChatGPT-like breakthrough—no discovery of new miracle materials or even slightly better ones. The startups that want to find useful new compounds face a common bottleneck: By far the most time-consuming and expensive step in materials discovery is not imagining new structures but making them in the real world. Before trying to synthesize a material, you don’t know if, in fact, it can be made and is stable, and many of its properties remain unknown until you test it in the lab. “Simulations can be super powerful for kind of framing problems and understanding what is worth testing in the lab,” says John Gregoire, Lila Sciences’ chief autonomous science officer. “But there’s zero problems we can ever solve in the real world with simulation alone.” Startups like Lila Sciences have staked their strategies on using AI to transform experimentation and are building labs that use agents to plan, run, and interpret the results of experiments to synthesize new materials. Automation in laboratories already exists. But the idea is to have AI agents take it to the next level by directing autonomous labs, where their tasks could include designing experiments and controlling the robotics used to shuffle samples around. And, most important, companies want to use AI to vacuum up and analyze the vast amount of data produced by such experiments in the search for clues to better materials.
If they succeed, these companies could shorten the discovery process from decades to a few years or less, helping uncover new materials and optimize existing ones. But it’s a gamble. Even though AI is already taking over many laboratory chores and tasks, finding new—and useful—materials on its own is another matter entirely.

Innovation backwater

I have been reporting about materials discovery for nearly 40 years, and to be honest, there have been only a few memorable commercial breakthroughs, such as lithium-ion batteries, over that time. There have been plenty of scientific advances to write about, from perovskite solar cells to graphene transistors to metal-organic frameworks (MOFs), materials based on an intriguing type of molecular architecture that recently won its inventors a Nobel Prize. But few of those advances—including MOFs—have made it far out of the lab. Others, like quantum dots, have found some commercial uses, but in general, the kinds of life-changing inventions created in earlier decades have been lacking.
Blame the amount of time (typically 20 years or more) and the hundreds of millions of dollars it takes to make, test, optimize, and manufacture a new material—and the industry’s lack of interest in spending that kind of time and money in low-margin commodity markets. Or maybe we’ve just run out of ideas for making stuff. The need to both speed up that process and find new ideas is the reason researchers have turned to AI. For decades, scientists have used computers to design potential materials, calculating where to place atoms to form structures that are stable and have predictable characteristics. It’s worked—but only kind of. Advances in AI have made that computational modeling far faster and have promised the ability to quickly explore a vast number of possible structures. Google DeepMind, Meta, and Microsoft have all launched efforts to bring AI tools to the problem of designing new materials.  But the limitations that have always plagued computational modeling of new materials remain. With many types of materials, such as crystals, useful characteristics often can’t be predicted solely by calculating atomic structures. To uncover and optimize those properties, you need to make something real. Or as Rafael Gómez-Bombarelli, one of Lila’s cofounders and an MIT professor of materials science, puts it: “Structure helps us think about the problem, but it’s neither necessary nor sufficient for real materials problems.”
Perhaps no advance exemplified the gap between the virtual and physical worlds more than DeepMind’s announcement in late 2023 that it had used deep learning to discover “millions of new materials,” including 380,000 crystals that it declared “the most stable, making them promising candidates for experimental synthesis.” In technical terms, the arrangement of atoms represented a minimum energy state where they were content to stay put. This was “an order-of-magnitude expansion in stable materials known to humanity,” the DeepMind researchers proclaimed. To the AI community, it appeared to be the breakthrough everyone had been waiting for. The DeepMind research not only offered a gold mine of possible new materials, it also created powerful new computational methods for predicting a large number of structures. But some materials scientists had a far different reaction. After closer scrutiny, researchers at the University of California, Santa Barbara, said they’d found “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.” In fact, the scientists reported, they didn’t find any truly novel compounds among the ones they looked at; some were merely “trivial” variations of known ones. The scientists appeared particularly peeved that the potential compounds were labeled materials. They wrote: “We would respectfully suggest that the work does not report any new materials but reports a list of proposed compounds. In our view, a compound can be called a material when it exhibits some functionality and, therefore, has potential utility.” Some of the imagined crystals simply defied the conditions of the real world. To do computations on so many possible structures, DeepMind researchers simulated them at absolute zero, where atoms are well ordered; they vibrate a bit but don’t move around. 
At higher temperatures—the kind that would exist in the lab or anywhere in the world—the atoms fly about in complex ways, often creating more disorderly crystal structures. A number of the so-called novel materials predicted by DeepMind appeared to be well-ordered versions of disordered ones that were already known. 
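The stability test behind those claims is usually framed as “energy above the convex hull”: a composition is predicted to be stable at 0 K only if its formation energy sits on the lower convex hull of all competing phases; anything above the hull would decompose into a mixture of hull phases. Here is a minimal sketch for a hypothetical binary A–B system (the phases and energies are invented for illustration, not taken from the DeepMind dataset):

```python
def hull_energy(points, x):
    # Lower convex hull value at composition x: the minimum over all
    # chords between pairs of phases whose compositions bracket x.
    best = float("inf")
    for xi, ei in points:
        for xj, ej in points:
            if xi <= x <= xj and xi < xj:
                t = (x - xi) / (xj - xi)
                best = min(best, ei + t * (ej - ei))
            elif xi == x:
                best = min(best, ei)
    return best

# Hypothetical formation energies (eV/atom) for an A–B system;
# the pure elements define the zero of energy.
phases = [(0.0, 0.0), (0.25, -0.10), (0.5, -0.40), (0.75, -0.15), (1.0, 0.0)]

for x, e in phases:
    e_above = e - hull_energy(phases, x)  # 0 → on the hull → stable at 0 K
    print(f"x={x:.2f}  E_above_hull={e_above:+.3f} eV/atom")
```

In this toy system only the x = 0.5 compound lies on the hull; the x = 0.25 and x = 0.75 compounds sit slightly above it, so at 0 K they would be predicted to decompose. Real pipelines do the same bookkeeping in many-component composition spaces, which is exactly the calculation that says nothing about finite-temperature disorder.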
More generally, the DeepMind paper was simply another reminder of how challenging it is to capture physical realities in virtual simulations—at least for now. Because of the limitations of computational power, researchers typically perform calculations on relatively few atoms. Yet many desirable properties are determined by the microstructure of the materials—at a scale much larger than the atomic world. And some effects, like high-temperature superconductivity or even the catalysis that is key to many common industrial processes, are far too complex or poorly understood to be explained by atomic simulations alone.

A common language

Even so, there are signs that the divide between simulations and experimental work is beginning to narrow. DeepMind, for one, says that since the release of the 2023 paper it has been working with scientists in labs around the world to synthesize AI-identified compounds and has achieved some success. Meanwhile, a number of the startups entering the space are looking to combine computational and experimental expertise in one organization. One such startup is Periodic Labs, cofounded by Ekin Dogus Cubuk, a physicist who led the scientific team that generated the 2023 DeepMind headlines, and by Liam Fedus, a co-creator of ChatGPT at OpenAI. Despite its founders’ background in computational modeling and AI software, the company is building much of its materials discovery strategy around synthesis done in automated labs. The vision behind the startup is to link these different fields of expertise by using large language models that are trained on scientific literature and able to learn from ongoing experiments. An LLM might suggest the recipe and conditions to make a compound; it can also interpret test data and feed additional suggestions to the startup’s chemists and physicists.
In this strategy, simulations might suggest possible material candidates, but they are also used to help explain the experimental results and suggest possible structural tweaks. Periodic Labs, like Lila Sciences, has ambitions beyond designing and making new materials. It wants to “create an AI scientist”—specifically, one adept at the physical sciences. “LLMs have gotten quite good at distilling chemistry information, physics information,” says Cubuk, “and now we’re trying to make it more advanced by teaching it how to do science—for example, doing simulations, doing experiments, doing theoretical modeling.” The approach, like that of Lila Sciences, is based on the expectation that a better understanding of the science behind materials and their synthesis will lead to clues that could help researchers find a broad range of new ones. One target for Periodic Labs is materials whose properties are defined by quantum effects, such as new types of magnets. The grand prize would be a room-temperature superconductor, a material that could transform computing and electricity but that has eluded scientists for decades.
Superconductors are materials in which electricity flows without any resistance and, thus, without producing heat. So far, the best of these materials become superconducting only at relatively low temperatures and require significant cooling. If they can be made to work at or close to room temperature, they could lead to far more efficient power grids, new types of quantum computers, and even more practical high-speed magnetic-levitation trains.

Lila staff scientist Natalie Page (right), Gómez-Bombarelli, and Gregoire inspect thin-film samples after they come out of the sputtering machine and before they undergo testing. CODY O’LOUGHLIN

The failure to find a room-temperature superconductor is one of the great disappointments in materials science over the last few decades. I was there when President Reagan spoke about the technology in 1987, during the peak hype over newly made ceramics that became superconducting at the relatively balmy temperature of 93 Kelvin (that’s −292 °F), enthusing that they “bring us to the threshold of a new age.” There was a sense of optimism among the scientists and businesspeople in that packed ballroom at the Washington Hilton as Reagan anticipated “a host of benefits, not least among them a reduced dependence on foreign oil, a cleaner environment, and a stronger national economy.” In retrospect, it might have been one of the last times that we pinned our economic and technical aspirations on a breakthrough in materials.
The promised new age never came. Scientists still have not found a material that becomes superconducting at room temperatures, or anywhere close, under normal conditions. The best existing superconductors are brittle and tend to make lousy wires. One of the reasons that finding higher-temperature superconductors has been so difficult is that no theory explains the effect at relatively high temperatures—or can predict it simply from the placement of atoms in the structure. It will ultimately fall to lab scientists to synthesize any interesting candidates, test them, and search the resulting data for clues to understanding the still puzzling phenomenon. Doing so, says Cubuk, is one of the top priorities of Periodic Labs.

AI in charge

It can take a researcher a year or more to make a crystal structure for the first time. Then there are typically years of further work to test its properties and figure out how to make the larger quantities needed for a commercial product. Startups like Lila Sciences and Periodic Labs are pinning their hopes largely on the prospect that AI-directed experiments can slash those times. One reason for the optimism is that many labs have already incorporated a lot of automation, for everything from preparing samples to shuttling test items around. Researchers routinely use robotic arms, software, automated versions of microscopes and other analytical instruments, and mechanized tools for manipulating lab equipment. The automation allows, among other things, for high-throughput synthesis, in which multiple samples with various combinations of ingredients are rapidly created and screened in large batches, greatly speeding up the experiments. The idea is that using AI to plan and run such automated synthesis can make it far more systematic and efficient.
AI agents, which can collect and analyze far more data than any human possibly could, can use real-time information to vary the ingredients and synthesis conditions until they get a sample with the optimal properties. Such AI-directed labs could do far more experiments than a person and could be far smarter than existing systems for high-throughput synthesis. But so-called self-driving labs for materials are still a work in progress. Many types of materials require solid-state synthesis, a set of processes that are far more difficult to automate than the liquid-handling activities that are commonplace in making drugs. You need to prepare and mix powders of multiple inorganic ingredients in the right combination for making, say, a catalyst and then decide how to process the sample to create the desired structure—for example, identifying the right temperature and pressure at which to carry out the synthesis. Even determining what you’ve made can be tricky.
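As a cartoon of what such a closed loop does, here is a toy propose-measure-update optimizer. Everything in it is invented for illustration: the “measurement” is a simulated noisy response with a hidden optimum, standing in for a real synthesize-and-test cycle, and the Ni/Fe recipe is an arbitrary example, not one of the startups’ actual targets.

```python
import random

def measure_activity(recipe):
    # Hypothetical stand-in for a real synthesize-and-test cycle:
    # a hidden optimum at 60% Ni / 40% Fe, plus a little measurement noise.
    ni, fe = recipe["Ni"], recipe["Fe"]
    return 1.0 - (ni - 0.6) ** 2 - (fe - 0.4) ** 2 + random.gauss(0, 0.005)

def propose(best_recipe, step=0.05):
    # Perturb the best-known recipe, keeping fractions in [0, 1] and normalized.
    ni = min(max(best_recipe["Ni"] + random.uniform(-step, step), 0.0), 1.0)
    return {"Ni": ni, "Fe": 1.0 - ni}

def closed_loop(n_rounds=200, seed=0):
    random.seed(seed)
    best = {"Ni": 0.5, "Fe": 0.5}
    best_score = measure_activity(best)
    for _ in range(n_rounds):
        candidate = propose(best)            # agent plans the next experiment
        score = measure_activity(candidate)  # robot synthesizes and tests it
        if score > best_score:               # agent keeps only improvements
            best, best_score = candidate, score
    return best, best_score

best, score = closed_loop()
print(best, round(score, 3))
```

Real self-driving labs replace the greedy update here with smarter experiment-selection strategies (Bayesian optimization, LLM-guided planning) and must also cope with the synthesis and characterization steps failing in ways a simulated measurement never does.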
In 2023, the A-Lab at Lawrence Berkeley National Laboratory claimed to be the first fully automated lab to use inorganic powders as starting ingredients. Subsequently, scientists reported that the autonomous lab had used robotics and AI to synthesize and test 41 novel materials, including some predicted in the DeepMind database. Some critics questioned the novelty of what was produced and complained that the automated analysis of the materials was not up to experimental standards, but the Berkeley researchers defended the effort as simply a demonstration of the autonomous system’s potential. “How it works today and how we envision it are still somewhat different. There’s just a lot of tool building that needs to be done,” says Gerbrand Ceder, the principal scientist behind the A-Lab. AI agents are already getting good at doing many laboratory chores, from preparing recipes to interpreting some kinds of test data—finding, for example, patterns in a micrograph that might be hidden to the human eye. But Ceder is hoping the technology could soon “capture human decision-making,” analyzing ongoing experiments to make strategic choices on what to do next. For example, his group is working on an improved synthesis agent that would better incorporate what he calls scientists’ “diffused” knowledge—the kind gained from extensive training and experience. “I imagine a world where people build agents around their expertise, and then there’s sort of an uber-model that puts it together,” he says. “The uber-model essentially needs to know what agents it can call on and what they know, or what their expertise is.” One of the strengths of AI agents is their ability to devour vast amounts of scientific literature.
“In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in,” says Ceder. It’s impossible for anyone to keep up. “The AI revolution is about finally gathering all the scientific data we have,” he says. Last summer, Ceder became the chief science officer at an AI materials discovery startup called Radical AI and took a sabbatical from the University of California, Berkeley, to help set up its self-driving labs in New York City. A slide deck shows the portfolio of different AI agents and generative models meant to help realize Ceder’s vision. If you look closely, you can spot an LLM called the “orchestrator”—it’s what CEO Joseph Krause calls the “head honcho.”

New hope

So far, despite the hype around the use of AI to discover new materials and the growing momentum—and money—behind the field, there still has not been a convincing big win. There is no example like the 2016 victory of DeepMind’s AlphaGo over a Go world champion. Or like AlphaFold’s achievement in mastering one of biomedicine’s hardest and most time-consuming chores, predicting 3D structures of proteins. The field of materials discovery is still waiting for its moment. It could come if AI agents can dramatically speed the design or synthesis of practical materials, similar to but better than what we have today. Or maybe the moment will be the discovery of a truly novel one, such as a room-temperature superconductor.

A small window provides a view of the inside workings of Lila’s sputtering instrument. The startup uses the machine to create a wide variety of experimental samples, including potential materials that could be useful for coatings and catalysts. CODY O’LOUGHLIN

With or without such a breakthrough moment, startups face the challenge of trying to turn their scientific achievements into useful materials.
The task is particularly difficult because any new materials would likely have to be commercialized in an industry dominated by large incumbents that are not particularly prone to risk-taking. Susan Schofer, a tech investor and partner at the venture capital firm SOSV, is cautiously optimistic about the field. But Schofer, who spent several years in the mid-2000s as a catalyst researcher at one of the first startups using automation and high-throughput screening for materials discovery (it didn’t survive), wants to see some evidence that the technology can translate into commercial successes when she evaluates startups to invest in.   In particular, she wants to see evidence that the AI startups are already “finding something new, that’s different, and know how they are going to iterate from there.” And she wants to see a business model that captures the value of new materials. She says, “I think the ideal would be: I got a spec from the industry. I know what their problem is. We’ve defined it. Now we’re going to go build it. Now we have a new material that we can sell, that we have scaled up enough that we’ve proven it. And then we partner somehow to manufacture it, but we get revenue off selling the material.” Schofer says that while she gets the vision of trying to redefine science, she’d advise startups to “show us how you’re going to get there.” She adds, “Let’s see the first steps.” Demonstrating those first steps could be essential in enticing large existing materials companies to embrace AI technologies more fully. Corporate researchers in the industry have been burned before—by the promise over the decades that increasingly powerful computers will magically design new materials; by combinatorial chemistry, a fad that raced through materials R&D labs in the early 2000s with little tangible result; and by the promise that synthetic biology would make our next generation of chemicals and materials. 
More recently, the materials community has been blanketed by a new hype cycle around AI. Some of that hype was fueled by the 2023 DeepMind announcement of the discovery of “millions of new materials,” a claim that, in retrospect, clearly overpromised. And it was further fueled when an MIT economics student posted a paper in late 2024 claiming that a large, unnamed corporate R&D lab had used AI to efficiently invent a slew of new materials. AI, it seemed, was already revolutionizing the industry. A few months later, the MIT economics department concluded that “the paper should be withdrawn from public discourse.” Two prominent MIT economists who are acknowledged in a footnote in the paper added that they had “no confidence in the provenance, reliability or validity of the data and the veracity of the research.” Can AI move beyond the hype and false hopes and truly transform materials discovery? Maybe. There is ample evidence that it’s changing how materials scientists work, providing them—if nothing else—with useful lab tools. Researchers are increasingly using LLMs to query the scientific literature and spot patterns in experimental data. But it’s still early days in turning those AI tools into actual materials discoveries. The use of AI to run autonomous labs, in particular, is just getting underway; making and testing stuff takes time and lots of money. The morning I visited Lila Sciences, its labs were largely empty, and it’s now preparing to move into a much larger space a few miles away. Periodic Labs is just beginning to set up its lab in San Francisco. It’s starting with manual synthesis guided by AI predictions; its robotic high-throughput lab will come soon. Radical AI reports that its lab is almost fully autonomous but plans to soon move to a larger space.

Prominent AI researchers Liam Fedus (left) and Ekin Dogus Cubuk are the cofounders of Periodic Labs.
The San Francisco–based startup aims to build an AI scientist that’s adept at the physical sciences. JASON HENRY

When I talk to the scientific founders of these startups, I hear a renewed excitement about a field that long operated in the shadows of drug discovery and genomic medicine. For one thing, there is the money. “You see this enormous enthusiasm to put AI and materials together,” says Ceder. “I’ve never seen this much money flow into materials.” Reviving the materials industry is a challenge that goes beyond scientific advances, however. It means selling companies on a whole new way of doing R&D. But the startups benefit from a huge dose of confidence borrowed from the rest of the AI industry. And maybe that, after years of playing it safe, is just what the materials business needs. This story is part of an online package on resetting expectations around AI. For more see technologyreview.com/hypecorrection.

Read More »

Improved Gemini audio models for powerful voice experiences

What customers are saying

Google Cloud customers are already using Gemini’s native audio capabilities to drive real business results, from mortgage processing to customer calls.

“Users often forget they’re talking to AI within a minute of using Sidekick, and in some cases have thanked the bot after a long chat… New Live API AI capabilities offered through Gemini [2.5 Flash Native Audio] empower our merchants to win.” – David Wurtz, VP of Product, Shopify

“By integrating the Gemini 2.5 Flash Native Audio model… we’ve significantly enhanced Mia’s capabilities since launching in May 2025. This powerful combination has enabled us to generate over 14,000 loans for our broker partners.” – Jason Bressler, Chief Technology Officer, United Wholesale Mortgage (UWM)

“Working with the Gemini 2.5 Flash Native Audio model through Vertex AI allows Newo.ai AI Receptionists to achieve unmatched conversational intelligence… They can identify the main speaker even in noisy settings, switch languages mid-conversation, and sound remarkably natural and emotionally expressive.” – David Yang, Co-founder, Newo.ai

Live Speech Translation

Gemini now natively supports new live speech-to-speech translation capabilities designed to handle both continuous listening and two-way conversation.

With continuous listening, Gemini automatically translates speech in multiple languages into a single target language. This allows you to put headphones in and hear the world around you in your language.

For two-way conversation, Gemini’s live speech translation handles translation between two languages in real time, automatically switching the output language based on who is speaking.
For example, if you speak English and want to chat with a Hindi speaker, you’ll hear English translations in real time in your headphones, while your phone broadcasts Hindi when you’re done speaking.

Gemini’s live speech translation has a number of key capabilities that help in the real world:

Language coverage: Translates speech in over 70 languages and 2,000 language pairs by combining the Gemini model’s world knowledge and multilingual capabilities with its native audio capabilities.

Style transfer: Captures the nuance of human speech, preserving the speaker’s intonation, pacing and pitch so the translation sounds natural.

Multilingual input: Understands multiple languages simultaneously in a single session, helping you follow multilingual conversations without needing to fiddle around with language settings.

Auto detection: Identifies the spoken language and begins translation, so you don’t even need to know what language is being spoken to start translating.

Noise robustness: Filters out ambient noise so you can converse comfortably even in loud, outdoor environments.
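The two-way mode described above amounts to a small routing decision: detect the speaker’s language, then emit the translation in the other participant’s language. Here is a toy sketch of that control flow; the `detect_language` and `translate` stubs are placeholders invented for illustration, since the real model does all of this natively on streaming audio rather than text.

```python
def detect_language(utterance):
    # Hypothetical stand-in for automatic language detection:
    # treat any Devanagari character as Hindi, otherwise English.
    return "hi" if any("\u0900" <= ch <= "\u097f" for ch in utterance) else "en"

def translate(utterance, target):
    # Hypothetical stub; the real model translates speech directly.
    return f"[{target}] {utterance}"

def two_way_session(utterances, lang_a="en", lang_b="hi"):
    # Route each utterance to the *other* participant's language,
    # mirroring the auto-switching behavior described above.
    out = []
    for u in utterances:
        spoken = detect_language(u)
        target = lang_b if spoken == lang_a else lang_a
        out.append(translate(u, target))
    return out

print(two_way_session(["Hello there", "नमस्ते"]))
# → ['[hi] Hello there', '[en] नमस्ते']
```

The point of the sketch is only the routing logic: each speaker always hears output in their own language, with no manual switching.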

Read More »

Utilities under pressure: 6 power sector trends to watch in 2026

2026 will be a year of reckoning for the electric power industry. Major policy changes in the One Big Beautiful Bill Act, which axed most subsidies for clean energy and electric vehicles, are forcing utilities, manufacturers, developers and others to pivot fast. The impacts of those changes will become more pronounced over the coming months. Market forces will also have their say. Demand for power has never been greater. But some of the most aggressive predictions driving resource planning may not come to pass, leading some to fear the possibility of another tech bubble. At the same time, each passing day brings more distributed energy resources onto the grid, increasing the opportunities — and expectations — for utilities to harness those resources into a more dynamic, flexible and resilient system. Here are some of the top trends Utility Dive will be tracking over the coming year.

Large loads — where they are, and who controls their interconnection — dominate industry concerns

Across the United States, but particularly in markets like Texas and the Mid-Atlantic, large loads — mainly data centers designed to run artificial intelligence programs — are seeking to connect to the grid, driving up electricity demand forecasts and ballooning interconnection queues. That’s led some states to introduce new large load tariffs to weed out speculative requests, with more states expected to follow suit. The Department of Energy is now pushing federal regulators to take a more active role in regulating how those loads get connected to the grid, setting the stage for a power struggle between state and federal authorities. The DOE asked the Federal Energy Regulatory Commission to issue rules by April 30, a deadline many say will be hard to meet.

Read More »

China’s Top Oil Firms Turn to Beijing for Guidance on Venezuela

Leading Chinese oil companies with interests in Venezuela have asked Beijing for guidance on how to protect their investments as Washington cranks up pressure on the Latin American country to increase its economic ties with the US. State-owned firms led by China National Petroleum Corp. raised concerns this week with government agencies and sought advice from officials, in an effort to align their responses with Beijing’s diplomatic strategy and to salvage existing claims to some of the world’s largest oil reserves, according to people familiar with the situation. They asked not to be identified as the discussions are private. The companies, closely monitoring developments even before the US seized President Nicolas Maduro at the weekend, are also conducting their own assessments of the situation on the ground, the people said. Top Beijing officials are separately reviewing events and trying to better understand corporate exposure, while planning for scenarios including a worst case where China’s investments would go to zero, they added.  While it is typical for government-backed firms to maintain close ties with officials in Beijing, the emergency consultations underscore the stakes for Chinese majors, caught off-guard by Washington’s raid and by the rapid escalation of efforts to establish a US sphere of influence in the Americas. Beyond the immediate impact of US actions, all are concerned about long-term prospects, the people said. Chinese companies have established a significant footprint across Latin America over the past decades, including under the Belt and Road Initiative. Venezuela, with few other friends, has been among the most important beneficiaries of this largesse — in part because of its vast oil wealth. China first extended financing for infrastructure and oil projects in 2007, under former President Hugo Chavez. Public data supports estimates that Beijing had lent upwards of $60 billion in oil-backed loans through state-run banks by 2015. 

Read More »

America’s new dietary guidelines ignore decades of scientific research

The new year has barely begun, but the first days of 2026 have brought big news for health. On Monday, the US’s federal health agency upended its recommendations for routine childhood vaccinations—a move that health associations worry puts children at unnecessary risk of preventable disease. There was more news from the federal government on Wednesday, when health secretary Robert F. Kennedy Jr. and his colleagues at the Departments of Health and Human Services and Agriculture unveiled new dietary guidelines for Americans. And they are causing a bit of a stir. That’s partly because they recommend products like red meat, butter, and beef tallow—foods that have been linked to cardiovascular disease, and that nutrition experts have been recommending people limit in their diets. These guidelines are a big deal—they influence food assistance programs and school lunches, for example. So this week let’s look at the good, the bad, and the ugly advice being dished up to Americans by their government.
The government dietary guidelines have been around since the 1980s. They are updated every five years, in a process that typically involves a team of nutrition scientists who have combed over scientific research for years. That team will first publish its findings in a scientific report, and, around a year later, the finalized Dietary Guidelines for Americans are published. The last guidelines covered the period 2020 to 2025, and new guidelines were expected in the summer of 2025. Work had already been underway for years; the scientific report intended to inform them was published back in 2024. But the publication of the guidelines was delayed by last year’s government shutdown, Kennedy said last year. They were finally published yesterday.
Nutrition experts had been waiting with bated breath. Nutrition science has evolved slightly over the last five years, and some were expecting to see new recommendations. Research now suggests, for example, that there is no “safe” level of alcohol consumption. We are also beginning to learn more about health risks associated with some ultraprocessed foods (although we still don’t have a good understanding of what they might be, or what even counts as “ultraprocessed”). And some scientists were expecting to see the new guidelines factor in environmental sustainability, says Gabby Headrick, the associate director of food and nutrition policy at George Washington University’s Institute for Food Safety & Nutrition Security in Washington DC. They didn’t.

Many of the recommendations are sensible. The guidelines recommend a diet rich in whole foods, particularly fresh fruits and vegetables. They recommend avoiding highly processed foods and added sugars. They also highlight the importance of dietary protein, whole grains, and “healthy” fats. But not all of them are, says Headrick.

The guidelines open with a “new pyramid” of foods. This inverted triangle is topped with “protein, dairy, and healthy fats” on one side and “vegetables and fruits” on the other. There are a few problems with this image. For starters, its shape—nutrition scientists have long moved on from the food pyramids of the 1990s, says Headrick. They’re confusing and make it difficult for people to understand what the contents of their plate should look like. That’s why scientists now use an image of a plate to depict a healthy diet. “We’ve been using MyPlate to describe the dietary guidelines in a very consumer-friendly, nutrition-education-friendly way for over the last decade now,” says Headrick. (The UK’s National Health Service takes a similar approach.)

And then there’s the content of that food pyramid. It puts a significant focus on meat and whole-fat dairy produce.
The top left image—the one most viewers will probably see first—is of a steak. Smack in the middle of the pyramid is a stick of butter. That’s new. And it’s not a good thing.

While both red meat and whole-fat dairy can certainly form part of a healthy diet, nutrition scientists have long been recommending that most people try to limit their consumption of these foods. Both can be high in saturated fat, which can increase the risk of cardiovascular disease—the leading cause of death in the US. In 2015, on the basis of limited evidence, the World Health Organization classified red meat as “probably carcinogenic to humans.”

Also concerning is the document’s definition of “healthy fats,” which includes butter and beef tallow (a MAHA favorite). Neither food is generally considered to be as healthy as olive oil, for example. While olive oil contains around two grams of saturated fat per tablespoon, a tablespoon of beef tallow has around six grams of saturated fat, and the same amount of butter contains around seven grams of saturated fat, says Headrick. “I think these are pretty harmful dietary recommendations to be making when we have established that those specific foods likely do not have health-promoting benefits,” she adds. Red meat is not exactly a sustainable food, and neither are dairy products. And the advice on alcohol is relatively vague, recommending that people “consume less alcohol for better overall health” (which might leave you wondering: Less than what?).

There are other questionable recommendations in the guidelines. Americans are advised to include more protein in their diets—at levels between 1.2 and 1.6 grams daily per kilo of body weight, 50% to 100% more than recommended in previous guidelines. There’s a risk that increasing protein consumption to such levels could raise a person’s intake of both calories and saturated fats to unhealthy levels, says José Ordovás, a senior nutrition scientist at Tufts University. “I would err on the low side,” he says.

Some nutrition scientists are questioning why these changes have been made. It’s not as though the new recommendations were in the 2024 scientific report.
And the evidence on red meat and saturated fat hasn’t changed, says Headrick. In reporting this piece, I contacted many contributors to the previous guidelines, and some who had led research for 2024’s scientific report. None of them agreed to comment on the new guidelines on the record. Some seemed disgruntled. One merely told me that the process by which the new guidelines had been created was “opaque.” “These people invested a lot of their time, and they did a thorough job [over] a couple of years, identifying [relevant scientific studies],” says Ordovás. “I’m not surprised that when they see that [their] work was ignored and replaced with something [put together] quickly, that they feel a little bit disappointed,” he says. This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
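The fat and protein figures quoted above are easy to sanity-check with quick arithmetic. A minimal sketch, assuming a 0.8 g/kg baseline (the figure implied by the "50% to 100% more" comparison) and an illustrative 70 kg adult; neither assumption appears in the guidelines themselves:

```python
# Saturated fat per tablespoon, in grams, as quoted in the article.
sat_fat_per_tbsp = {"olive oil": 2, "beef tallow": 6, "butter": 7}

# New guideline protein range vs. an assumed 0.8 g/kg baseline
# (the baseline implied by the "50% to 100% more" comparison).
baseline = 0.8
new_low, new_high = 1.2, 1.6
pct_more_low = round((new_low - baseline) / baseline * 100)    # 50
pct_more_high = round((new_high - baseline) / baseline * 100)  # 100

# Daily protein for an illustrative 70 kg adult under the new range.
daily_low = new_low * 70    # 84 g
daily_high = new_high * 70  # 112 g
```

At the top of the new range, that illustrative adult would be advised to double their daily protein intake, which is the scale of change Ordovás is cautioning about.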


USA Crude Oil Stocks Drop Nearly 4MM Barrels WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 3.8 million barrels from the week ending December 26 to the week ending January 2, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was released on January 7 and included data for the week ending January 2. According to the report, crude oil stocks, not including the SPR, stood at 419.1 million barrels on January 2, 422.9 million barrels on December 26, 2025, and 414.6 million barrels on January 3, 2025. Crude oil in the SPR stood at 413.5 million barrels on January 2, 413.2 million barrels on December 26, and 393.8 million barrels on January 3, 2025, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.707 billion barrels on January 2, the report revealed. Total petroleum stocks were up 8.4 million barrels week on week and up 78.7 million barrels year on year, the report pointed out. “At 419.1 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are about three percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 5.6 million barrels last week and are about three percent below the five year average for this time of year. Propane/propylene inventories decreased 2.2 million barrels from last week and are about 29 percent above the five year
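The week-on-week and year-on-year deltas reported above follow directly from the quoted stock levels; a quick check (values in millions of barrels, taken from the EIA figures cited in the article):

```python
# EIA stock levels quoted above, in millions of barrels.
crude = {"2026-01-02": 419.1, "2025-12-26": 422.9, "2025-01-03": 414.6}
spr = {"2026-01-02": 413.5, "2025-12-26": 413.2, "2025-01-03": 393.8}

# Week-on-week draw, reported as "decreased by 3.8 million barrels".
wow_draw = round(crude["2025-12-26"] - crude["2026-01-02"], 1)  # 3.8

# Year-on-year build in commercial crude stocks.
yoy_build = round(crude["2026-01-02"] - crude["2025-01-03"], 1)  # 4.5

# The SPR crept up slightly week on week.
spr_wow = round(spr["2026-01-02"] - spr["2025-12-26"], 1)  # 0.3
```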


The Download: mimicking pregnancy’s first moments in a lab, and AI parameters explained

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Researchers are getting organoids pregnant with human embryos

At first glance, it looks like the start of a human pregnancy: A ball-shaped embryo presses into the lining of the uterus, then grips tight, burrowing in as the first tendrils of a future placenta appear. This is implantation—the moment that pregnancy officially begins. Only none of it is happening inside a body. These images were captured in a Beijing laboratory, inside a microfluidic chip, as scientists watched the scene unfold.
In three recent papers published by Cell Press, scientists report what they call the most accurate efforts yet to mimic the first moments of pregnancy in the lab. They’ve taken human embryos from IVF centers and let these merge with “organoids” made of endometrial cells, which form the lining of the uterus. Read our story about their work, and what might come next. —Antonio Regalado
LLMs contain a LOT of parameters. But what’s a parameter?

A large language model’s parameters are often said to be the dials and levers that control how it behaves. Think of a planet-size pinball machine that sends its balls pinging from one end to the other via billions of paddles and bumpers set just so. Tweak those settings and the balls will behave in a different way. OpenAI’s GPT-3, released in 2020, had 175 billion parameters. Google DeepMind’s latest LLM, Gemini 3, may have at least a trillion—some think it’s probably more like 7 trillion—but the company isn’t saying. (With competition now fierce, AI firms no longer share information about how their models are built.) But the basics of what parameters are and how they make LLMs do the remarkable things that they do are the same across different models. Ever wondered what makes an LLM really tick—what’s behind the colorful pinball-machine metaphors? Let’s dive in. —Will Douglas Heaven

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles. On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately. The cited reason? Concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years. Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the US’s struggling offshore wind industry. —Casey Crownhart

This story is from The Spark, our weekly newsletter that explains the tech that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google and Character.AI have agreed to settle a lawsuit over a teenager’s death
It’s one of five lawsuits the companies have settled linked to young people’s deaths this week. (NYT $)
+ AI companions are the final stage of digital addiction, and lawmakers are taking aim. (MIT Technology Review)

2 The Trump administration’s chief output is online trolling
Witness the Maduro memes. (The Atlantic $)

3 OpenAI has created a new ChatGPT Health feature
It’s dedicated to analyzing medical results and answering health queries. (Axios)
+ AI chatbots fail to give adequate advice for most questions relating to women’s health. (New Scientist $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

4 Meta’s acquisition of Manus is being probed by China
Holding up the purchase gives it another bargaining chip in its dealings with the US. (CNBC)
+ What happened when we put Manus to the test. (MIT Technology Review)

5 China is building humanoid robot training centers
To address a major shortage of the data needed to make them more competent. (Rest of World)
+ The robot race is fueling a fight for training data. (MIT Technology Review)

6 AI still isn’t close to automating our jobs
The technology just fundamentally isn’t good enough yet. (WP $)

7 Weight regain seems to happen within two years of quitting the jabs
That’s the conclusion of a review of more than 40 studies. But dig into the details, and it’s not all bad news. (New Scientist $)

8 This Silicon Valley community is betting on algorithms to find love
Which feels like a bit of a fool’s errand. (NYT $)

9 Hearing aids are about to get really good
You can—of course—thank advances in AI. (IEEE Spectrum)

10 The first 100% AI-generated movie will hit our screens within three years
That’s according to Roku’s founder Anthony Wood. (Variety $)
+ How do AI models generate videos? (MIT Technology Review)
Quote of the day

“I’ve seen the video. Don’t believe this propaganda machine.”

—Minnesota’s governor Tim Walz responds on X to Homeland Security’s claim that ICE’s shooting of a woman in Minneapolis was justified.
One more thing

Inside the strange limbo facing millions of IVF embryos

Millions of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections.

The problem is that no one can really agree on what that status is. So while these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? Read the full story. —Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love hearing about musicians’ favorite songs 🎶
+ Here are some top tips for making the most of travelling on your own.
+ Check out just some of the excellent-sounding new books due for publication this year.
+ I could play this spherical version of Snake forever (thanks Rachel!)
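Circling back to the parameters explainer earlier in this edition: a parameter is simply one trainable number. A minimal sketch of a single fully connected layer (illustrative NumPy code, not any real model's weights) shows what gets counted:

```python
import numpy as np

# One fully connected layer mapping 4 inputs to 3 outputs:
# a 4x3 weight matrix (12 numbers) plus 3 biases = 15 parameters.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
b = np.zeros(3)

def layer(x):
    # Every output value is shaped by all 15 of these numbers;
    # training "tweaks the paddles and bumpers" by adjusting them.
    return x @ W + b

n_params = W.size + b.size  # 15
```

Stack thousands of much wider layers like this, plus attention weights and token embeddings, and the count reaches the hundreds of billions cited for models like GPT-3.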


Using unstructured data to fuel enterprise AI success

In partnership with Invisible

Enterprises are sitting on vast quantities of unstructured data, from call records and video footage to customer complaint histories and supply chain signals. Yet this invaluable business intelligence, estimated to make up as much as 90% of the data generated by organizations, has historically remained dormant because its unstructured nature makes analysis extremely difficult. But if managed and centralized effectively, this messy and often voluminous data is not only a precious asset for training and optimizing next-generation AI systems, enhancing their accuracy, context, and adaptability; it can also deliver profound insights that drive real business outcomes.

A compelling example comes from the Charlotte Hornets, a US NBA basketball team, which successfully leveraged untapped video footage of gameplay—previously too copious to watch and too unstructured to analyze—to identify a new competition-winning recruit. However, before that data could deliver results, analysts working for the team first had to overcome the critical challenge of preparing the raw, unstructured footage for interpretation.

The challenges of organizing and contextualizing unstructured data

Unstructured data presents inherent difficulties due to its widely varying format, quality, and reliability, requiring specialized tools like natural language processing and AI to make sense of it.
Every organization’s pool of unstructured data also contains domain-specific characteristics and terminology that generic AI models may not automatically understand. A financial services firm, for example, cannot simply use a general language model for fraud detection. Instead, it needs to adapt the model to understand regulatory language, transaction patterns, industry-specific risk indicators, and unique company context like data policies. The challenge intensifies when integrating multiple data sources with varying structures and quality standards, as teams may struggle to distinguish valuable data from noise.
How computer vision gave the Charlotte Hornets an edge

When the Charlotte Hornets set out to identify a new draft pick for their team, they turned to AI tools including computer vision to analyze raw game footage from smaller leagues, which exist outside the tiers of the game normally visible to NBA scouts and are therefore not as readily available for analysis. “Computer vision is a tool that has existed for some time, but I think the applicability in this age of AI is increasing rapidly,” says Jordan Cealey, senior vice president at AI company Invisible Technologies, which worked with the Charlotte Hornets on this project. “You can now take data sources that you’ve never been able to consume, and provide an analytical layer that’s never existed before.”

By deploying a variety of computer vision techniques, including object and player tracking, movement pattern analysis, and geometric mapping of points on the court, the team was able to extract kinematic data, such as the coordinates of players during movement, and generate metrics such as speed, acceleration, and explosiveness. This provided the team with rich, data-driven insights about individual players, helping them identify and select a new draft pick whose skills and techniques filled a gap in the Charlotte Hornets’ own capabilities. The chosen athlete went on to be named the most valuable player at the 2025 NBA Summer League and helped the team win their first summer championship title.

Annotation of a basketball match: Before data from game footage can be used, it needs to be labeled so the model can interpret it. The x and y coordinates of the individual players, seen here in bounding boxes, as well as other features in the scene, are annotated so the model can identify individuals and track their movements through time.
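The kinematic step described above, turning tracked player coordinates into speed metrics, can be sketched in a few lines. This is an illustration of the idea only, with hypothetical positions and frame rate, not the Hornets' actual pipeline:

```python
import math

# Hypothetical court coordinates (meters) for one player, taken from
# bounding-box centers in consecutive video frames.
positions = [(0.0, 0.0), (0.3, 0.1), (0.7, 0.3), (1.3, 0.6)]
fps = 25        # assumed video frame rate
dt = 1 / fps    # time between frames, in seconds

# Speed between consecutive frames: distance covered / elapsed time.
speeds = [
    math.dist(positions[i], positions[i + 1]) / dt
    for i in range(len(positions) - 1)
]

# Simple derived metrics: top speed, plus a crude "explosiveness"
# proxy (the largest frame-to-frame gain in speed, i.e. peak
# acceleration).
top_speed = max(speeds)
peak_accel = max(
    (speeds[i + 1] - speeds[i]) / dt for i in range(len(speeds) - 1)
)
```

In practice the coordinates would come from the object-detection and court-mapping stages described above, and the signals would be smoothed to remove tracking jitter before metrics are computed.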


Taking AI pilot programs into production

From this successful example, several lessons can be learned. First, unstructured data must be prepared for AI models through intuitive forms of collection and the right data pipelines and management records. “You can only utilize unstructured data once your structured data is consumable and ready for AI,” says Cealey. “You cannot just throw AI at a problem without doing the prep work.”

For many organizations, this might mean finding partners that offer the technical support to fine-tune models to the context of the business. The traditional technology consulting approach, in which an external vendor leads a digital transformation plan over a lengthy timeframe, is not fit for purpose here: AI is moving too fast, and solutions need to be configured to a company’s current business reality.

Forward-deployed engineers (FDEs) are an emerging partnership model better suited to the AI era. Initially popularized by Palantir, the FDE model connects product and engineering capabilities directly to the customer’s operational environment. FDEs work closely with customers on-site to understand the context behind a technology initiative before a solution is built.

“We couldn’t do what we do without our FDEs,” says Cealey. “They go out and fine-tune the models, working with our human annotation team to generate a ground truth dataset that can be used to validate or improve the performance of the model in production.”

Second, data needs to be understood within its own context, which requires models to be carefully calibrated to the use case. “You can’t assume that an out-of-the-box computer vision model is going to give you better inventory management, for example, by taking that open source model and applying it to whatever your unstructured data feeds are,” says Cealey. “You need to fine-tune it so it gives you the data exports in the format you want and helps your aims. That’s where you start to see high-performative models that can then actually generate useful data insights.”

For the Hornets, Invisible used five foundation models, which the team fine-tuned to context-specific data. This included teaching the models to understand that they were “looking at” a basketball court as opposed to, say, a football field; to understand how a game of basketball works differently from any other sport the model might have knowledge of (including how many players are on each team); and to understand how to spot rules like “out of bounds.” Once fine-tuned, the models were able to capture subtle and complex visual scenarios, including highly accurate object detection, tracking, postures, and spatial mapping.

Lastly, while the AI technology mix available to companies changes by the day, they cannot eschew old-fashioned commercial metrics: clear goals. Without clarity on the business purpose, AI pilot programs can easily turn into open-ended, meandering research projects that prove expensive in terms of compute, data costs, and staffing. “The best engagements we have seen are when people know what they want,” Cealey observes. “The worst is when people say ‘we want AI’ but have no direction. In these situations, they are on an endless pursuit without a map.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.


Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE