Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Featured Articles

The Download: introducing the AI Hype Correction package
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: the AI Hype Correction package

AI is going to reproduce human intelligence. AI will eliminate disease. AI is the single biggest, most important invention in human history. You’ve likely heard it all—but probably none of these things are true. AI is changing our world, but we don’t yet know the real winners, or how this will all shake out. After a few years of out-of-control hype, people are now starting to recalibrate what AI is, what it can do, and how we should think about its ultimate impact. Here, at the end of 2025, we’re starting the post-hype phase. This new package of stories, called Hype Correction, is a way to reset expectations—a critical look at where we are, what AI makes possible, and where we go next. Here’s a sneak peek at what you can expect:

+ An introduction to four ways of thinking about the great AI hype correction of 2025.
+ While it’s safe to say we’re definitely in an AI bubble right now, what’s less clear is what it really looks like—and what comes after it pops. Read the full story.
+ Why so many of the more outlandish proclamations about AI doing the rounds these days can be traced back to OpenAI’s Sam Altman. Read the full story.
+ It’s a weird time to be an AI doomer. But they’re not giving up.
+ AI coding is now everywhere—but despite the billions of dollars being poured into improving AI models’ coding abilities, not everyone is convinced. Read the full story.
+ If we really want to start finding new kinds of materials faster, AI materials discovery needs to make it out of the lab and move into the real world. Read the full story.
+ Why reports of AI’s potential to replace trained human lawyers are greatly exaggerated.
+ Dr. Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, explains why the generative AI hype train is distracting us from what AI actually is and what it can—and crucially, cannot—do. Read the full story.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 iRobot has filed for bankruptcy
The Roomba maker is considering handing over control to its main Chinese supplier. (Bloomberg $)
+ A proposed Amazon acquisition fell through close to two years ago. (FT $)
+ How the company lost its way. (TechCrunch)
+ A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (MIT Technology Review)

2 Meta’s 2025 has been a total rollercoaster ride
From its controversial AI team to Mark Zuckerberg’s newfound appreciation for masculine energy. (Insider $)

3 The Trump administration is giving the crypto industry a much easier ride
It’s dismissed crypto lawsuits involving many firms with financial ties to Trump. (NYT $)
+ Celebrities are feeling emboldened to flog crypto once again. (The Guardian)
+ A bitcoin investor wants to set up a crypto libertarian community in the Caribbean. (FT $)

4 There’s a new weight-loss drug in town
And people are already taking it, even though it’s unapproved. (Wired $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

5 Chinese billionaires are having dozens of US-born surrogate babies
An entire industry has sprung up to support them. (WSJ $)
+ A controversial Chinese CRISPR scientist is still hopeful about embryo gene editing. (MIT Technology Review)

6 Trump’s “big beautiful bill” funding hinges on states integrating AI into healthcare
Experts fear it’ll be used as a cost-cutting measure, even if it doesn’t work. (The Guardian)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)

7 Extreme rainfall is wreaking havoc in the desert
Oman and the UAE are unaccustomed to increasingly common torrential downpours. (WP $)

8 Data centers are being built in countries that are too hot for them
Which makes it a lot harder to cool them sufficiently. (Rest of World)
9 Why AI image generators are getting deliberately worse
Their makers are pursuing realism—not that overly polished, Uncanny Valley look. (The Verge)
+ Inside the AI attention economy wars. (NY Mag $)

10 How a tiny Swedish city became a major video game hub
Skövde has formed an unlikely community of cutting-edge developers. (The Guardian)
+ Google DeepMind is using Gemini to train agents inside one of Skövde’s biggest franchises. (MIT Technology Review)

Quote of the day

“They don’t care about the games. They don’t care about the art. They just want their money.”

—Anna C Webster, chair of the freelancing committee of the United Videogame Workers union, tells the Guardian why their members are protesting the prestigious 2025 Game Awards in the wake of major layoffs.

One more thing
Recapturing early internet whimsy with HTML

Websites weren’t always slick digital experiences. There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Here’s how a bit of math can help you wrap your presents much more neatly this year.
+ It seems that humans mastered making fire way, way earlier than we realized.
+ The Arab-owned cafes opening up across the US sound warm and welcoming.
+ How to give a gift the recipient will still be using and loving for decades to come.

Uniper Approves Its First Scottish Solar Project
Uniper SE said it has decided to proceed with constructing its first solar project in Scotland, the 45-megawatt (MW) Berryhill Solar Farm just north of Dundee. The German power and gas utility expects to start construction in “early 2026” and fire up the facility later in the year. Berryhill’s output, from about 150,000 solar panels, would be enough “to power the equivalent of over 12,500 UK households each year, 1/5th the population of Angus – contributing to the UK’s net zero targets”, Uniper said in an online statement. “The project has been developed jointly with partner Solar2 and Uniper plans to start the construction process as its sole owner”, Uniper said. Scottish and Southern Electricity Networks, the distribution network operator, is responsible for installing the nine-kilometer (5.59 miles) medium-voltage cable to connect the solar farm to the grid’s point of connection at the Charleston substation on the Kingsway in Dundee, according to Uniper. The project obtained planning permission from the Angus Council in 2022, which was modified in 2024, it said. “Once operational, there will be a community benefit fund associated with the project which will be agreed with the applicable local community council – Muirhead, Birkhill and Liff Council”, Uniper said. In another Scottish renewable project, Uniper earlier this year greenlighted the construction of a 46.2-MW wind farm in East Ayrshire. It expects to put the facility into production in 2028 with enough generation for 66,000 United Kingdom homes each year, it said in a press release on June 11. Uniper had announced two other UK renewable power projects in 2025: the Tamworth Solar Farm with a capacity of around 44.2 MWp and the 21.33-MWp Totmonslow Solar Farm. The two projects’ combined capacity can power about 23,300 homes a year, according to Uniper. Uniper aims to connect the projects, which received approval from local

Strategists Say Oil’s Fermi Paradox Nearing an End
In an oil and gas report sent to Rigzone recently by the Macquarie team, Macquarie strategists, including Vikas Dwivedi, noted that oil’s “Fermi Paradox [is] nearing an end”, adding that “onshore stocks [are] starting to build”. “We continue to expect a heavily oversupplied market,” the strategists said in the report. “We estimate a 1Q26 peak supply-demand surplus of over four million barrels per day. Signs of the surplus are showing with continued offshore builds, increasing onshore builds, and extremely strong freight rates,” they added. “We estimate that approximately one-third of the offshore build is long-haul shipments from the Americas to Asia,” they continued. In the report, the strategists revealed that they expect onshore builds to accelerate through year-end 2025 and into early 2026, a process which they said “should drive Brent towards the low $50 range, with a possibility of reaching $45 per barrel”. “Since the end of August, offshore inventories have increased by roughly 250 million barrels and onshore storage up by ~30 million barrels,” the strategists highlighted in the report. “In the past month, the trend has accelerated with onshore … [plus] offshore stocks building by ~ three million barrels per day. Yet, structure remains backwardated, as AB barrels continued clearing East,” they added. A separate report sent to Rigzone by the Macquarie team on December 5 showed that Macquarie was projecting that the Brent price will average $68.21 per barrel overall in 2025 and $60.75 per barrel overall in 2026. According to that report, Macquarie expects the Brent price to average $63.00 per barrel in the fourth quarter of this year, $57.00 per barrel in the first quarter of 2026, $59.00 per barrel in the second quarter, $60.00 per barrel in the third quarter, and $67.00 per barrel in the fourth quarter. In that report, Macquarie

BP, Eni Exit Two Angola Producing Assets
A consortium of BW Energy Ltd and Maurel & Prom SA has signed an agreement with Azule Energy Holdings Ltd, a 50-50 venture between BP PLC and Eni SpA, to acquire Azule’s non-operating stakes in Block 14 and Block 14K offshore Angola. Chevron Corp-operated Block 14, a mature deepwater area, produces about 40,000 barrels of oil per day (bopd) gross from nine fields, while Block 14K is a tieback to Block 14 with current production of 2,000 bopd gross, BW Energy and Maurel & Prom noted in separate press releases. BW Energy, part of Singapore-based energy shipping operator BW Group Ltd, and Maurel & Prom, a Paris-based exploration and production company majority-owned by Indonesia’s national oil and gas company PT Pertamina (Persero), will each acquire 10 percent in Block 14 and five percent in Block 14K. Azule said separately its combined share from the blocks averaged 9,600 bopd in 2024. “This transaction is aligned with Azule Energy’s strategy to concentrate our efforts on our core assets in Angola”, said Azule chief executive Joseph Murphy. Last year in Angola, Azule sold its 12 percent interest in Block 3/05 and 16 percent stake in the Lower Congo Basin to Afentra, as confirmed by Azule on May 23, 2024. The transaction with BW Energy and Maurel & Prom is valued at up to $310 million, including up to $115 million of deferred payments, Azule said. BW Energy said of the assets, “Current producing reserves are estimated at 9.3 million barrels net to BW Energy, with several identified opportunities to further increase recoverable volumes”. It added, “Abandonment and decommissioning costs are covered by existing provisions”. BW Energy chief executive Carl K. Arnet said, “The entry to Angola is a key step in BW Energy’s West Africa growth strategy and provides further diversification of our resource base”. “Angola is a mature hydrocarbon basin

WoodMac Says Eni Find Reinforces Kutei as One of Hottest Plays
Eni’s latest discovery in Indonesia reinforces the Kutei Basin’s reputation as one of the hottest global exploration plays of recent years. That’s what Andrew Harwood, Wood Mackenzie (WoodMac) Vice President, Corporate Research, said in a statement sent to Rigzone, adding that the find “will add to Indonesia’s gas resources when the country increasingly focuses on gas availability”. “It provides options for Indonesia as the nation balances domestic demand needs with future export opportunities,” Harwood said. Harwood noted that the Konta-1 discovery “adds momentum to Eni’s existing plans to invest in and develop new gas sources for the currently underutilized Bontang LNG plant”. “The Konta-1 discovery lies in the northern Muara Bakau area, close to Eni’s pre-FID Kutei North Hub. It provides future tie-back upside and offers Plan B for Eni if the un-appraised Geng North underperforms initial expectations,” he added. Harwood also said Eni’s latest find encourages the company’s ongoing exploration campaign, which he pointed out runs into 2026. “Wood Mackenzie’s pick of prospects in line for drilling is Geliga, which holds multi trillion cubic foot potential,” he stated. Harwood went on to note that 2026 “looks exciting for Eni’s Indonesian portfolio with several major milestones ahead”. “These include exploration campaign results, a final investment decision on the Northern hub development, and the launch of ‘NewCo’ – the strategic satellite venture between Eni and Petronas,” he highlighted. In a statement sent to Rigzone recently, Eni announced a “significant gas discovery” in the Konta-1 exploration well off the coast of East Kalimantan in Indonesia. “Estimates indicate 600 billion cubic feet of gas initially in place (GIIP) with a potential upside beyond one trillion cubic feet,” Eni said in the statement. “Konta-1 was drilled to a depth of 4,575 meters [15,009 feet] in 570 meters [1,870 feet] water depth, encountering gas in

China Fossil Fuel Generation Set for First Drop in Decade
China’s fossil fuel power plants are on track to chart their first annual drop in generation in a decade as renewables flood the grid to meet rising demand. Thermal electricity output fell 4.2 percent in November, according to data published by the National Bureau of Statistics on Monday. Generation from coal and gas-fired plants is down 0.7 percent this year, on track for the first annual decline since 2015 unless there’s a sharp jump in December. China’s massive fleet of coal power stations is the world’s leading source of greenhouse gases fueling global warming. Even though the nation is continuing to build more of the plants, their use is plateauing as huge investments in renewables meet growing consumption needs. Wind power jumped 22 percent in November from the previous year, while large solar farms saw a 23 percent rise in generation, additional data released Monday showed. Even as power-sector emissions in China drop, they’ve been largely offset by rising pollution from a growing fleet of chemicals and plastics factories, according to the Centre for Research on Energy and Clean Air. The nation’s coal output fell on an annual basis for a fifth month, while oil and natural gas continued to rise toward annual production records.


Smart growth, lower costs: How fuel cells support utility expansion
As utilities work to expand capacity and modernize aging infrastructure to meet growing demand, they face a new imperative: doing more with every dollar invested. Analysts project capital expenditures by U.S. investor-owned electric utilities will reach $1.4 trillion between 2025 and 2030, nearly twice the amount spent during the entire previous decade. To maintain today’s investment momentum and strengthen reliability and resilience, utilities have an opportunity to look beyond cost control and pursue strategies that deliver broader long-term value. That means seeking systems that maximize output, efficiency and uptime. In today’s energy landscape, fuel cells are becoming increasingly relevant. They provide modular, reliable power that helps utilities extract more value from their investments while addressing rising demand and aging infrastructure. With high electrical efficiency, modular design and exceptional reliability, advanced fuel cell systems enable utilities to generate more value from their assets and streamline their day-to-day operations.

Powering More with Less: Fuel Cells Redefine Efficiency

Fuel cells outperform traditional combustion-based generators by converting fuel into electricity through an electrochemical reaction, rather than by burning it. This translates into roughly 15% to 20% higher efficiency than most open-cycle gas turbines or reciprocating engines. That improved conversion efficiency means each kilowatt-hour requires less fuel, increasing energy productivity and reducing exposure to fuel-price swings. Among the various types of fuel cells, solid oxide fuel cells (SOFCs) offer the greatest advantages. Operating at high temperatures and utilizing a solid ceramic electrolyte, rather than relying on precious metals, corrosive acids or molten materials, SOFCs are a modern technology that converts fuels such as natural gas or hydrogen into electricity with exceptional efficiency and durability. Conversion efficiencies can reach up to 65%, and when integrated with combined heat and power (CHP) configurations, the total system efficiency can exceed 90%.

Meeting Demand Faster with Fuel Cells

With demand surging,
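To put the efficiency figures above in rough perspective, here is a back-of-the-envelope sketch in Python. The turbine and engine efficiencies used below are illustrative assumptions, not figures from the article; only the roughly 65% fuel-cell conversion efficiency and the 90%-plus CHP total come from the text.

```python
# Rough comparison of fuel needed per kilowatt-hour of electricity at
# different electrical efficiencies. The turbine and engine values are
# illustrative assumptions consistent with the figures cited in the article.

def fuel_per_kwh(efficiency: float) -> float:
    """kWh of fuel energy consumed per kWh of electricity delivered."""
    return 1.0 / efficiency

scenarios = {
    "open-cycle gas turbine (assumed ~40%)": 0.40,
    "reciprocating engine (assumed ~45%)": 0.45,
    "solid oxide fuel cell (up to ~65%)": 0.65,
}

baseline = fuel_per_kwh(0.40)
for name, eff in scenarios.items():
    fuel = fuel_per_kwh(eff)
    savings = (1 - fuel / baseline) * 100
    print(f"{name}: {fuel:.2f} kWh fuel per kWh electric "
          f"({savings:.0f}% less fuel than the 40% baseline)")

# With combined heat and power (CHP), recovered heat counts toward total
# system efficiency; the article cites totals above 90%.
electric_eff, total_chp_eff = 0.60, 0.90
heat_recovered = total_chp_eff - electric_eff
print(f"CHP example: {electric_eff:.0%} electric + {heat_recovered:.0%} "
      f"useful heat = {total_chp_eff:.0%} of fuel energy put to work")
```

The point of the comparison is simply that every additional point of conversion efficiency translates directly into less fuel burned, and less fuel-price exposure, per kilowatt-hour delivered.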

What’s ahead for utilities: Navigating demand, AI and customer affordability
Utilities are entering a transformative year, with surging demand, affordability concerns, cybersecurity challenges and the increasing integration of artificial intelligence reshaping the industry. Utilities that thrive in this complex environment will need to adopt disciplined, analytics-driven strategies to ensure resilience, reliability and affordability. The forces driving change are significant and utilities must act decisively to navigate these challenges while building trust with customers and regulators. For a comprehensive analysis of the trends and strategies driving the future of utilities, download the full report.

Surging Demand Requires Proactive Grid Management

One of the most pressing issues is the unprecedented demand growth fueled by data centers, AI workloads and advanced manufacturing. Global power demand from data centers alone is expected to rise by 165% by 2030, with AI-driven workloads accounting for nearly a third of that increase. This surge in demand is straining transmission and distribution grids, which are already hampered by regulatory and permitting delays. Utilities must rethink traditional planning cycles and adopt predictive load forecasting tools to anticipate new energy use patterns with greater accuracy. Advanced transmission technologies, such as dynamic line ratings and topology optimization, can help increase grid capacity and efficiency, ensuring utilities remain competitive. Modernizing interconnection processes is also vital, as delays in connecting new loads to the grid can hinder progress. By deploying digital workflow tools and creating public-facing hosting capacity maps, utilities can streamline interconnection requests and enable developers to make informed decisions about project siting.

Customer Affordability at a Tipping Point

Massive grid investments to support electrification, data centers and climate resilience are driving rates higher, while inflation continues to strain household budgets. Since 2021, electricity prices have risen by 30%, leaving nearly 80 million Americans struggling to pay their utility bills. Utilities must adopt customer-centric solutions to address these concerns. Predictive analytics can
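As a toy illustration of the predictive load forecasting mentioned above, the sketch below fits a straight-line trend to hypothetical monthly peak-load figures and projects it forward. The data, horizon and method are assumptions for illustration only; real utility forecasting blends weather, economic growth and interconnection-queue data into far richer models.

```python
# Minimal load-forecasting sketch: fit a linear trend to hypothetical
# monthly peak-load history and project the next six months.

import numpy as np

# Hypothetical monthly peak load history, in MW (invented numbers).
history_mw = np.array([1200, 1215, 1240, 1260, 1295, 1330, 1370, 1415, 1460, 1510])
months = np.arange(len(history_mw))

# Least-squares linear trend: polyfit returns [slope, intercept] for degree 1.
slope, intercept = np.polyfit(months, history_mw, deg=1)

horizon = 6  # forecast the next six months
future_months = np.arange(len(history_mw), len(history_mw) + horizon)
forecast = intercept + slope * future_months

for step, mw in enumerate(forecast, start=1):
    print(f"Month +{step}: {mw:,.0f} MW projected peak")
```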

LG rolls out new AI services to help consumers with daily tasks
LG kicked off the AI bandwagon today with a new set of AI services to help consumers in their daily tasks at home, in the car and in the office. The aim of LG’s CES 2025 press event was to show how AI will work in a day of someone’s life, with the goal of redefining the concept of space, said William Joowan Cho, CEO of LG Electronics, at the event. The presentation showed LG is fully focused on bringing AI into just about all of its products and services. Cho referred to LG’s AI efforts as “affectionate intelligence,” and he said it stands out from other strategies with its human-centered focus. The strategy focuses on three things: connected devices, capable AI agents and integrated services. One of the things the company announced was a strategic partnership with Microsoft on AI innovation, where the companies pledged to join forces to shape the future of AI-powered spaces. One of the outcomes is that Microsoft’s Xbox Game Pass Ultimate will appear via Xbox Cloud on LG’s TVs, helping LG catch up with Samsung in offering cloud gaming natively on its TVs. LG Electronics will bring the Xbox App to select LG smart TVs. That means players with LG Smart TVs will be able to explore the Gaming Portal for direct access to hundreds of games in the Game Pass Ultimate catalog, including popular titles such as Call of Duty: Black Ops 6, and upcoming releases like Avowed (launching February 18, 2025). Xbox Game Pass Ultimate members will be able to play games directly from the Xbox app on select LG Smart TVs through cloud gaming. With Xbox Game Pass Ultimate and a compatible Bluetooth-enabled

Big tech must stop passing the cost of its spiking energy needs onto the public
Julianne Malveaux is an MIT-educated economist, author, educator and political commentator who has written extensively about the critical relationship between public policy, corporate accountability and social equity. The rapid expansion of data centers across the U.S. is not only reshaping the digital economy but also threatening to overwhelm our energy infrastructure. These data centers aren’t just heavy on processing power — they’re heavy on our shared energy infrastructure. For Americans, this could mean serious sticker shock when it comes to their energy bills. Across the country, many households are already feeling the pinch as utilities ramp up investments in costly new infrastructure to power these data centers. With costs almost certain to rise as more data centers come online, state policymakers and energy companies must act now to protect consumers. We need new policies that ensure the cost of these projects is carried by the wealthy big tech companies that profit from them, not by regular energy consumers such as family households and small businesses. According to an analysis from consulting firm Bain & Co., data centers could require more than $2 trillion in new energy resources globally, with U.S. demand alone potentially outpacing supply in the next few years. This unprecedented growth is fueled by the expansion of generative AI, cloud computing and other tech innovations that require massive computing power. Bain’s analysis warns that, to meet this energy demand, U.S. utilities may need to boost annual generation capacity by as much as 26% by 2028 — a staggering jump compared to the 5% yearly increases of the past two decades. This poses a threat to energy affordability and reliability for millions of Americans. Bain’s research estimates that capital investments required to meet data center needs could incrementally raise consumer bills by 1% each year through 2032. That increase may
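To see how the roughly 1% annual bill increase that Bain projects would compound, here is a minimal sketch. The $150 starting bill and the 2025 start year are hypothetical assumptions; only the 1%-per-year figure and the 2032 horizon come from the analysis cited above.

```python
# Compounding a 1% annual increase on a household electricity bill through
# 2032. Starting bill and start year are assumptions for illustration only.

start_year, end_year = 2025, 2032
monthly_bill = 150.00          # assumed starting bill, $/month
annual_increase = 0.01         # 1% per year attributed to data-center-driven investment

bill = monthly_bill
for year in range(start_year + 1, end_year + 1):
    bill *= (1 + annual_increase)
    print(f"{year}: ${bill:,.2f}/month")

cumulative = (bill / monthly_bill - 1) * 100
print(f"Cumulative increase by {end_year}: {cumulative:.1f}%")
```

Even a seemingly small 1% annual increment compounds to a noticeable cumulative rise over the period, which is why the author argues the allocation of those costs matters.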

Final 45V hydrogen tax credit guidance draws mixed response
Dive Brief: The final rule for the 45V clean hydrogen production tax credit, which the U.S. Treasury Department released Friday morning, drew mixed responses from industry leaders and environmentalists. Clean hydrogen development within the U.S. ground to a halt following the release of the initial guidance in December 2023, leading industry participants to call for revisions that would enable more projects to qualify for the tax credit. While the final rule makes “significant improvements” to Treasury’s initial proposal, the guidelines remain “extremely complex,” according to the Fuel Cell and Hydrogen Energy Association. FCHEA President and CEO Frank Wolak and other industry leaders said they look forward to working with the Trump administration to refine the rule.

Dive Insight: Friday’s release closed what Wolak described as a “long chapter” for the hydrogen industry. But industry reaction to the final rule was decidedly mixed, and it remains to be seen whether the rule — which could be overturned as soon as Trump assumes office — will remain unchanged. “The final 45V rule falls short,” Marty Durbin, president of the U.S. Chamber’s Global Energy Institute, said in a statement. “While the rule provides some of the additional flexibility we sought, … we believe that it still will leave billions of dollars of announced projects in limbo. The incoming Administration will have an opportunity to improve the 45V rules to ensure the industry will attract the investments necessary to scale the hydrogen economy and help the U.S. lead the world in clean manufacturing.” But others in the industry felt the rule would be sufficient for ending hydrogen’s year-long malaise. “With this added clarity, many projects that have been delayed may move forward, which can help unlock billions of dollars in investments across the country,” Kim Hedegaard, CEO of Topsoe’s Power-to-X, said in a statement. Topsoe

Texas, Utah, Last Energy challenge NRC’s ‘overburdensome’ microreactor regulations
Dive Brief: A 69-year-old Nuclear Regulatory Commission rule underpinning U.S. nuclear reactor licensing exceeds the agency’s statutory authority and creates an unreasonable burden for microreactor developers, the states of Texas and Utah and advanced nuclear technology company Last Energy said in a lawsuit filed Dec. 30 in federal court in Texas. The plaintiffs asked the Eastern District of Texas court to exempt Last Energy’s 20-MW reactor design and research reactors located in the plaintiff states from the NRC’s definition of nuclear “utilization facilities,” which subjects all U.S. commercial and research reactors to strict regulatory scrutiny, and order the NRC to develop a more flexible definition for use in future licensing proceedings. Regardless of its merits, the lawsuit underscores the need for “continued discussion around proportional regulatory requirements … that align with the hazards of the reactor and correspond to a safety case,” said Patrick White, research director at the Nuclear Innovation Alliance.

Dive Insight: Only three commercial nuclear reactors have been built in the United States in the past 28 years, and none are presently under construction, according to a World Nuclear Association tracker cited in the lawsuit. “Building a new commercial reactor of any size in the United States has become virtually impossible,” the plaintiffs said. “The root cause is not lack of demand or technology — but rather the [NRC], which, despite its name, does not really regulate new nuclear reactor construction so much as ensure that it almost never happens.” More than a dozen advanced nuclear technology developers have engaged the NRC in pre-application activities, which the agency says help standardize the content of advanced reactor applications and expedite NRC review. Last Energy is not among them. The pre-application process can itself stretch for years and must be followed by a formal application that can take two

Qualcomm unveils AI chips for PCs, cars, smart homes and enterprises
Qualcomm unveiled AI technologies and collaborations for PCs, cars, smart homes and enterprises at CES 2025. At the big tech trade show in Las Vegas, Qualcomm Technologies showed how it’s using AI capabilities in its chips to drive the transformation of user experiences across diverse device categories, including PCs, automobiles, smart homes and enterprises. The company unveiled the Snapdragon X platform, the fourth platform in its high-performance PC portfolio, the Snapdragon X Series, bringing industry-leading performance, multi-day battery life, and AI leadership to more of the Windows ecosystem. Qualcomm has talked about how its processors are making headway in grabbing share from the x86-based AMD and Intel rivals through better efficiency. Qualcomm’s neural processing unit gets about 45 TOPS, a key benchmark for AI PCs. Additionally, Qualcomm Technologies showcased continued traction of the Snapdragon X Series, with over 60 designs in production or development and more than 100 expected by 2026.

Snapdragon for vehicles

Qualcomm demoed chips that are expanding its automotive collaborations. It is working with Alpine, Amazon, Leapmotor, Mobis, Royal Enfield, and Sony Honda Mobility, who look to Snapdragon Digital Chassis solutions to drive AI-powered in-cabin and advanced driver assistance systems (ADAS). Qualcomm also announced continued traction for its Snapdragon Elite-tier platforms for automotive, highlighting its work with Desay, Garmin, and Panasonic for Snapdragon Cockpit Elite. Throughout the show, Qualcomm will highlight its holistic approach to improving comfort and focusing on safety with demonstrations on the potential of the convergence of AI, multimodal contextual awareness, and cloud-based services. Attendees will also get a first glimpse of the new Snapdragon Ride Platform with integrated automated driving software stack and system definition jointly

Oil, Gas Execs Reveal Where They Expect WTI Oil Price to Land in the Future
Executives from oil and gas firms have revealed where they expect the West Texas Intermediate (WTI) crude oil price to be at various points in the future as part of the fourth quarter Dallas Fed Energy Survey, which was released recently. The average response executives from 131 oil and gas firms gave when asked what they expect the WTI crude oil price to be at the end of 2025 was $71.13 per barrel, the survey showed. The low forecast came in at $53 per barrel, the high forecast was $100 per barrel, and the spot price during the survey was $70.66 per barrel, the survey pointed out. This question was not asked in the previous Dallas Fed Energy Survey, which was released in the third quarter. That survey asked participants what they expect the WTI crude oil price to be at the end of 2024. Executives from 134 oil and gas firms answered this question, offering an average response of $72.66 per barrel, that survey showed. The latest Dallas Fed Energy Survey also asked participants where they expect WTI prices to be in six months, one year, two years, and five years. Executives from 124 oil and gas firms answered this question and gave a mean response of $69 per barrel for the six month mark, $71 per barrel for the year mark, $74 per barrel for the two year mark, and $80 per barrel for the five year mark, the survey showed. Executives from 119 oil and gas firms answered this question in the third quarter Dallas Fed Energy Survey and gave a mean response of $73 per barrel for the six month mark, $76 per barrel for the year mark, $81 per barrel for the two year mark, and $87 per barrel for the five year mark, that

Generative AI hype distracts us from AI’s more important breakthroughs
On April 28, 2022, at a highly anticipated concert in Spokane, Washington, the musician Paul McCartney astonished his audience with a groundbreaking application of AI: He began to perform with a lifelike depiction of his long-deceased musical partner, John Lennon. Using recent advances in audio and video processing, engineers had taken the pair’s final performance (London, 1969), separated Lennon’s voice and image from the original mix and restored them with lifelike clarity. This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next. For years, researchers like me had taught machines to “see” and “hear” in order to make such a moment possible. As McCartney and Lennon appeared to reunite across time and space, the arena fell silent; many in the crowd began to cry. As an AI scientist and lifelong Beatles fan, I felt profound gratitude that we could experience this truly life-changing moment. Later that year, the world was captivated by another major breakthrough: AI conversation. For the first time in history, systems capable of generating new, contextually relevant comments in real time, on virtually any subject, were widely accessible owing to the release of ChatGPT. Billions of people were suddenly able to interact with AI. This ignited the public’s imagination about what AI could be, bringing an explosion of creative ideas, hopes, and fears. Having done my PhD on AI language generation (long considered niche), I was thrilled we had come this far. But the awe I felt was rivaled by my growing rage at the flood of media takes and self-appointed experts insisting that generative AI could do things it simply can’t, and warning that anyone who didn’t adopt it would be left behind.
This kind of hype has contributed to a frenzy of misunderstandings about what AI actually is and what it can and cannot do. Crucially, generative AI is a seductive distraction from the type of AI that is most likely to make your life better, or even save it: Predictive AI. In contrast to AI designed for generative tasks, predictive AI involves tasks with a finite, known set of answers; the system just has to process information to say which answer is right. A basic example is plant recognition: Point your phone camera at a plant and learn that it’s a Western sword fern. Generative tasks, in contrast, have no finite set of correct answers: The system must blend snippets of information it’s been trained on to create, for example, a novel picture of a fern. The generative AI technology involved in chatbots, face-swaps, and synthetic video makes for stunning demos, driving clicks and sales as viewers run wild with ideas that superhuman AI will bring us abundance or extinction. Yet predictive AI has quietly been improving weather prediction and food safety, enabling higher-quality music production, helping to organize photos, and accurately predicting the fastest driving routes. We incorporate predictive AI into our everyday lives without even thinking about it, a testament to its indispensable utility.
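One way to see the distinction in code: a predictive system scores a fixed list of possible answers and picks the best-supported one, while a generative system samples new output piece by piece. This is a minimal sketch with made-up scoring and sampling functions, not a real model or any particular library.

```python
import random

CLASSES = ["western sword fern", "bracken fern", "not a fern"]

def predictive_identify(image, score_fn):
    """Predictive task: the answer must come from a finite, known set of labels."""
    scores = {label: score_fn(image, label) for label in CLASSES}
    return max(scores, key=scores.get)            # pick the single best-supported label

def generative_draw(prompt, sample_fn, steps=20):
    """Generative task: no fixed answer set; output is assembled piece by piece."""
    output = []
    for _ in range(steps):
        output.append(sample_fn(prompt, output))  # sample something new at each step
    return "".join(output)

# Dummy stand-ins so the sketch runs end to end.
print(predictive_identify("photo.jpg", lambda img, label: random.random()))
print(generative_draw("a fern", lambda p, so_far: random.choice("fern ")))
```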
To get a sense of the immense progress on predictive AI and its future potential, we can look at the trajectory of the past 20 years. In 2005, we couldn’t get AI to tell the difference between a person and a pencil. By 2013, AI still couldn’t reliably detect a bird in a photo, and the difference between a pedestrian and a Coke bottle was massively confounding (this is how I learned that bottles do kind of look like people, if people had no heads). The thought of deploying these systems in the real world was the stuff of science fiction. Yet over the past 10 years, predictive AI has not only nailed bird detection down to the specific species; it has rapidly improved life-critical medical services like identifying problematic lesions and heart arrhythmia. Because of this technology, seismologists can predict earthquakes and meteorologists can predict flooding more reliably than ever before. Accuracy has skyrocketed for consumer-facing tech that detects and classifies everything from what song you’re thinking of when you hum a tune to which objects to avoid while you’re driving—making self-driving cars a reality. In the very near future, we should be able to accurately detect tumors and forecast hurricanes long before they can hurt anyone, realizing the lifelong hopes of people all over the world. That might not be as flashy as generating your own Studio Ghibli–ish film, but it’s definitely hype-worthy. Predictive AI systems have also been shown to be incredibly useful when they leverage certain generative techniques within a constrained set of options. Systems of this type are diverse, spanning everything from outfit visualization to cross-language translation. Soon, predictive-generative hybrid systems will make it possible to clone your own voice speaking another language in real time, an extraordinary aid for travel (with serious impersonation risks). There’s considerable room for growth here, but generative AI delivers real value when anchored by strong predictive methods. To understand the difference between these two broad classes of AI, imagine yourself as an AI system tasked with showing someone what a cat looks like. You could adopt a generative approach, cutting and pasting small fragments from various cat images (potentially from sources that object) to construct a seemingly perfect depiction. The ability of modern generative AI to produce such a flawless collage is what makes it so astonishing. Alternatively, you could take the predictive approach: Simply locate and point to an existing picture of a cat. That method is much less glamorous but more energy-efficient and more likely to be accurate, and it properly acknowledges the original source. Generative AI is designed to create things that look real; predictive AI identifies what is real. A misunderstanding that generative systems are retrieving things when they are actually creating them has led to grave consequences when text is involved, requiring the withdrawal of legal rulings and the retraction of scientific articles.
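The cat analogy above maps onto a retrieval-style implementation: embed the query, find the closest existing example, and point to it rather than synthesizing a new one. The vectors and filenames below are toy stand-ins, not a real embedding model, and only illustrate why retrieval keeps the original source attached to the answer.

```python
import math

# Toy "embedding" library: each known image is a vector plus a pointer to its source.
LIBRARY = [
    {"source": "cat_001.jpg",  "vec": [0.9, 0.1, 0.0]},
    {"source": "fern_007.jpg", "vec": [0.0, 0.2, 0.9]},
    {"source": "cat_042.jpg",  "vec": [0.8, 0.3, 0.1]},
]

def show_a_cat(query_vec):
    """Retrieval-style (predictive) answer: point to the closest existing, attributable image."""
    closest = min(LIBRARY, key=lambda item: math.dist(item["vec"], query_vec))
    return closest["source"]

print(show_a_cat([0.9, 0.15, 0.0]))  # -> "cat_001.jpg", with its source intact
```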
Driving this confusion is a tendency for people to hype AI without making it clear what kind of AI they’re talking about (I reckon many don’t know). It’s very easy to equate “AI” with generative AI, or even just language-generating AI, and assume that all other capabilities fall out from there. That fallacy makes a ton of sense: The term literally references “intelligence,” and our human understanding of what “intelligence” might be is often mediated by the use of language. (Spoiler: No one actually knows what intelligence is.) But the phrase “artificial intelligence” was intentionally designed in the 1950s to inspire awe and allude to something humanlike. Today, it just refers to a set of disparate technologies for processing digital data. Some of my friends find it helpful to call it “mathy maths” instead. The bias toward treating generative AI as the most powerful and real form of AI is troubling given that it consumes considerably more energy than predictive AI systems. It also means using existing human work in AI products against the original creators’ wishes and replacing human jobs with AI systems whose capabilities their work made possible in the first place—without compensation. AI can be amazingly powerful, but that doesn’t mean creators should be ripped off. Watching this unfold as an AI developer within the tech industry, I’ve drawn important lessons for next steps. The widespread appeal of AI is clearly linked to the intuitive nature of conversation-based interactions. But this method of engagement currently overuses generative methods where predictive ones would suffice, resulting in an awkward situation that’s confusing for users while imposing heavy costs in energy consumption, exploitation, and job displacement. We have witnessed just a glimpse of AI’s full potential: The current excitement around AI reflects what it could be, not what it is. Generation-based approaches strain resources while still falling short on representation, accuracy, and the wishes of people whose work is folded into the system.
If we can shift the spotlight from the hype around generative technologies to the predictive advances already transforming daily life, we can build AI that is genuinely useful, equitable, and sustainable. The systems that help doctors catch diseases earlier, help scientists forecast disasters sooner, and help everyday people navigate their lives more safely are the ones poised to deliver the greatest impact. The future of beneficial AI will not be defined by the flashiest demos but by the quiet, rigorous progress that makes technology trustworthy. And if we build on that foundation—pairing predictive strength with more mature data practices and intuitive natural-language interfaces—AI can finally start living up to the promise that many people perceive today. Dr. Margaret Mitchell is a computer science researcher and chief ethics scientist at AI startup Hugging Face. She has worked in the technology industry for 15 years, and has published over 100 papers on natural language generation, assistive technology, computer vision, and AI ethics. Her work has received numerous awards and has been implemented by multiple technology companies.

What even is the AI bubble?
MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. In July, a widely cited MIT study claimed that 95% of organizations that invested in generative AI were getting “zero return.” Tech stocks briefly plunged. While the study itself was more nuanced than the headlines, for many it still felt like the first hard data point confirming what skeptics had muttered for months: Hype around AI might be outpacing reality. Then, in August, OpenAI CEO Sam Altman said what everyone in Silicon Valley had been whispering. “Are we in a phase where investors as a whole are overexcited about AI?” he said during a press dinner I attended. “My opinion is yes.” This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.
He compared the current moment to the dot-com bubble. “When bubbles happen, smart people get overexcited about a kernel of truth,” he explained. “Tech was really important. The internet was a really big deal. People got overexcited.” With those comments, it was off to the races. The next day’s stock market dip was attributed to the sentiment he shared. The question “Are we in an AI bubble?” became inescapable.
Who thinks it is a bubble? The short answer: Lots of people. But not everyone agrees on who or what is overinflated. Tech leaders are using this moment of fear to take shots at their rivals and position themselves as clear winners on the other side. How they describe the bubble depends on where their company sits. When I asked Meta CEO Mark Zuckerberg about the AI bubble in September, he ran through the historical analogies of past bubbles—railroads, fiber for the internet, the dot-com boom—and noted that in each case, “the infrastructure gets built out, people take on too much debt, and then you hit some blip … and then a lot of the companies end up going out of business.” But Zuckerberg’s prescription wasn’t for Meta to pump the brakes. It was to keep spending: “If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously. But I’d say the risk is higher on the other side.” Bret Taylor, the chairman of OpenAI and CEO of the AI startup Sierra, uses a mental model from the late ’90s to help navigate this AI bubble. “I think the closest analogue to this AI wave is the dot-com boom or bubble, depending on your level of pessimism,” he recently told me. Back then, he explained, everyone knew e-commerce was going to be big, but there was a massive difference between Buy.com and Amazon. Taylor and others have been trying to position themselves as today’s Amazon. Still others are arguing that the pain will be widespread. Google CEO Sundar Pichai told the BBC this month that there’s “some irrationality” in the current boom. Asked whether Google would be immune to a bubble bursting, he warned, “I think no company is going to be immune, including us.” What’s inflating the bubble? Companies are raising enormous sums of money and seeing unprecedented valuations. Much of that money, in turn, is going toward the buildout of massive data centers—on which both private companies like OpenAI and Elon Musk’s xAI and public ones such as Meta and Google are spending heavily. OpenAI has pledged that it will spend $500 billion to build AI data centers, more than 15 times what was spent on the Manhattan Project. This eye-popping spending on AI data centers isn’t entirely detached from reality. The leaders of the top AI companies all stress that they’re bottlenecked by their limited access to computing power. You hear it constantly when you talk to them. Startups can’t get the GPU allocations they need. Hyperscalers are rationing compute, saving it for their best customers. If today’s AI market is as brutally supply-constrained as tech leaders claim, perhaps aggressive infrastructure buildouts are warranted. But some of the numbers are too large to comprehend. Sam Altman has told employees that OpenAI’s moonshot goal is to build 250 gigawatts of computing capacity by 2033, roughly equaling India’s total national electricity demand. Such a plan would cost more than $12 trillion by today’s standards.
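A quick back-of-envelope pass over those figures shows the scale of what is being proposed; nothing below goes beyond the numbers already quoted.

```python
# Back-of-envelope check using only the figures quoted above.
target_gw = 250            # OpenAI's reported 2033 compute goal, in gigawatts
est_cost = 12e12           # "more than $12 trillion by today's standards" (a floor)
pledged = 500e9            # the $500 billion data center pledge

print(f"Implied cost per gigawatt: at least ${est_cost / target_gw / 1e9:.0f}B")              # ~$48B/GW
print(f"Share of the 2033 goal the $500B pledge would cover: about {pledged / est_cost:.0%}")  # ~4%
```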
“I do think there’s real execution risk,” OpenAI president and cofounder Greg Brockman recently told me about the company’s aggressive infrastructure goals. “Everything we say about the future, we see that it’s a possibility. It is not a certainty, but I don’t think the uncertainty comes from scientific questions. It’s a lot of hard work.” Who is exposed, and who is to blame? It depends on who you ask. During the August press dinner, where he made his market-moving comments, Altman was blunt about where he sees the excess. He said it’s “insane” that some AI startups with “three people and an idea” are receiving funding at such high valuations. “That’s not rational behavior,” he said. “Someone’s gonna get burned there, I think.” As Safe Superintelligence cofounder (and former OpenAI chief scientist and cofounder) Ilya Sutskever put it on a recent podcast: Silicon Valley has “more companies than ideas.” Demis Hassabis, the CEO of Google DeepMind, offered a similar diagnosis when I spoke with him in November. “It feels like there’s obviously a bubble in the private market,” he said. “You look at seed rounds with just nothing being tens of billions of dollars. That seems a little unsustainable.” Anthropic CEO Dario Amodei also struck at his competition during the New York Times DealBook Summit in early December. He said he feels confident about the technology itself but worries about how others are behaving on the business side: “On the economic side, I have my concerns where, even if the technology fulfills all its promises, I think there are players in the ecosystem who, if they just make a timing error, they just get it off by a little bit, bad things could happen.” He stopped short of naming Sam Altman and OpenAI, but the implication was clear. “There are some players who are YOLOing,” he said. “Let’s say you’re a person who just kind of constitutionally wants to YOLO things or just likes big numbers. Then you may turn the dial too far.” Amodei also flagged “circular deals,” or the increasingly common arrangements where chip suppliers like Nvidia invest in AI companies that then turn around and spend those funds on their chips. Anthropic has done some of these, he said, though “not at the same scale as some other players.” (OpenAI is at the center of a number of such deals, as are Nvidia, CoreWeave, and a roster of other players.) The danger, he explained, comes when the numbers get too big: “If you start stacking these where they get to huge amounts of money, and you’re saying, ‘By 2027 or 2028 I need to make $200 billion a year,’ then yeah, you can overextend yourself.” Zuckerberg shared a similar message at an internal employee Q&A session after Meta’s last earnings call. He noted that unprofitable startups like OpenAI and Anthropic risk bankruptcy if they misjudge the timing of their investments, but Meta has the advantage of strong cash flow, he reassured staff.
How could a bubble burst? My conversations with tech executives and investors suggest that the bubble is most likely to pop if overfunded startups can’t turn a profit or grow into their lofty valuations. This bubble could last longer than past ones, given that private companies aren’t traded on public markets and therefore reprice more slowly, but the ripple effects will still be profound when the end comes. If companies making grand commitments to data center buildouts no longer have the revenue growth to support them, the headline deals that have propped up the stock market come into question. Anthropic’s Amodei illustrated the problem during his DealBook Summit appearance, where he said the multi-year data center commitments he has to make combine with the company’s rapid, unpredictable revenue growth rate to create a “cone of uncertainty” about how much to spend.
The two most prominent private players in AI, OpenAI and Anthropic, have yet to turn a profit. A recent Deutsche Bank chart put the situation in stark historical context. Amazon burned through $3 billion before becoming profitable. Tesla, around $4 billion. Uber, $30 billion. OpenAI is projected to burn through $140 billion by 2029, while Anthropic is expected to burn $20 billion by 2027. Consultants at Bain estimate that the wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030 just to justify the investment. That’s more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia. When I talk to leaders of these large tech companies, they all agree that their sprawling businesses can absorb an expensive miscalculation about the returns from their AI infrastructure buildouts. It’s all the other companies that are either highly leveraged with debt or just unprofitable—even OpenAI and Anthropic—that they worry about. Still, given the level of spending, AI needs a viable business model beyond subscriptions, which can’t mint profits from billions of people’s eyeballs the way the ad-driven businesses that have defined the last 20 years of the internet do. Even the largest tech companies know they need to ship the world-changing agents they keep hyping: AI that can fully replace coworkers and complete tasks in the real world. For now, investors are mostly buying into the hype of the powerful AI systems that these data center buildouts will supposedly unlock in the future. At some point the biggest spenders, like OpenAI, will need to show investors that the money spent on the infrastructure buildout was worth it. There’s also still a lot of uncertainty about the technical direction that AI is heading in. LLMs are expected to remain critical to more advanced AI systems, but industry leaders can’t seem to agree on which additional breakthroughs are needed to achieve artificial general intelligence, or AGI. Some are betting on new kinds of AI that can understand the physical world, while others are focused on training AI to learn in a general way, like a human. In other words, what if all this unprecedented spending turns out to have been backing the wrong horse?
The question now
What makes this moment surreal is the honesty. The same people pouring billions into AI will openly tell you it might all come crashing down.
Taylor framed it as two truths existing at once. “I think it is both true that AI will transform the economy,” he told me, “and I think we’re also in a bubble, and a lot of people will lose a lot of money. I think both are absolutely true at the same time.” He compared it to the internet. Webvan failed, but Instacart succeeded years later with essentially the same idea. If you were an Amazon shareholder from its IPO to now, you’re looking pretty good. If you were a Webvan shareholder, you probably feel differently. “When the dust settles and you see who the winners are, society benefits from those inventions,” Amazon founder Jeff Bezos said in October. “This is real. The benefit to society from AI is going to be gigantic.” Goldman Sachs says the AI boom now looks the way tech stocks did in 1997, several years before the dot-com bubble actually burst. The bank flagged five warning signs seen in the late 1990s that investors should watch now: peak investment spending, falling corporate profits, rising corporate debt, Fed rate cuts, and widening credit spreads. We’re probably not at 1999 levels yet. But the imbalances are building fast. Michael Burry, who famously called the 2008 housing bubble collapse (as seen in the film The Big Short), recently compared the AI boom to the 1990s dot-com bubble too. Maybe AI will save us from our own irrational exuberance. But for now, we’re living in an in-between moment when everyone knows what’s coming but keeps blowing more air into the balloon anyway. As Altman put it that night at dinner: “Someone is going to lose a phenomenal amount of money. We don’t know who.” Alex Heath is the author of Sources, a newsletter about the AI race, and the cohost of ACCESS, a podcast about the tech industry’s inside conversations. Previously, he was deputy editor at The Verge.

AI might not be coming for lawyers’ jobs anytime soon
When the generative AI boom took off in 2022, Rudi Miller and her law school classmates were suddenly gripped with anxiety. “Before graduating, there was discussion about what the job market would look like for us if AI became adopted,” she recalls. So when it came time to choose a speciality, Miller—now a junior associate at the law firm Orrick—decided to become a litigator, the kind of lawyer who represents clients in court. She hoped the courtroom would be the last human stage. “Judges haven’t allowed ChatGPT-enabled robots to argue in court yet,” she says. This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next. She had reason to be worried. The artificial-intelligence job apocalypse seemed to be coming for lawyers. In March 2023, researchers reported that GPT-4 had smashed the Uniform Bar Exam. That same month, an industry report predicted that 44% of legal work could be automated. The legal tech industry entered a boom as law firms began adopting generative AI to mine mountains of documents and draft contracts, work ordinarily done by junior associates. Last month, the law firm Clifford Chance axed 10% of its staff in London, citing increased use of AI as a reason.
But for all the hype, LLMs are still far from thinking like lawyers—let alone replacing them. The models continue to hallucinate case citations, struggle to navigate gray areas of the law and reason about novel questions, and stumble when they attempt to synthesize information scattered across statutes, regulations, and court cases. And there are deeper institutional reasons to think the models could struggle to supplant legal jobs. While AI is reshaping the grunt work of the profession, the end of lawyers may not be arriving anytime soon.
The big experiment
The legal industry has long been defined by long hours and grueling workloads, so the promise of superhuman efficiency is appealing. Law firms are experimenting with general-purpose tools like ChatGPT and Microsoft Copilot and specialized legal tools like Harvey and Thomson Reuters’ CoCounsel, with some building their own in-house tools on top of frontier models. They’re rolling out AI boot camps and letting associates bill hundreds of hours to AI experimentation. As of 2024, 47.8% of attorneys at law firms employing 500 or more lawyers used AI, according to the American Bar Association.
But lawyers say that LLMs are a long way from reasoning well enough to replace them. Lucas Hale, a junior associate at McDermott Will & Schulte, has been embracing AI for many routine chores. He uses Relativity to sift through long documents and Microsoft Copilot for drafting legal citations. But when he turns to ChatGPT with a complex legal question, he finds the chatbot spewing hallucinations, rambling off topic, or drawing a blank. “In the case where we have a very narrow question or a question of first impression for the court,” he says, referring to a novel legal question that a court has never decided before, “that’s the kind of thinking that the tool can’t do.” Much of Hale’s work involves creatively applying the law to new fact patterns. “Right now, I don’t think very much of the work that litigators do, at least not the work that I do, can be outsourced to an AI utility,” he says. Allison Douglis, a senior associate at Jenner & Block, uses an LLM to kick off her legal research. But the tools only take her so far. “When it comes to actually fleshing out and developing an argument as a litigator, I don’t think they’re there,” she says. She has watched the models hallucinate case citations and fumble through ambiguous areas of the law. “Right now, I would much rather work with a junior associate than an AI tool,” she says. “Unless they get extraordinarily good very quickly, I can’t imagine that changing in the near future.”
Beyond the bar
The legal industry has seemed ripe for an AI takeover ever since ChatGPT’s triumph on the bar exam. But passing a standardized test isn’t the same as practicing law. The exam tests whether people can memorize legal rules and apply them to hypothetical situations—not whether they can exercise strategic judgment in complicated realities or craft arguments in uncharted legal territory. And models can be trained to ace benchmarks without genuinely improving their reasoning. But new benchmarks are aiming to better measure the models’ ability to do legal work in the real world. The Professional Reasoning Benchmark, published by ScaleAI in November, evaluated leading LLMs on legal and financial tasks designed by professionals in the field. The study found that the models have critical gaps in their reliability for professional adoption, with the best-performing model scoring only 37% on the most difficult legal problems, meaning it met just over a third of possible points on the evaluation criteria. The models frequently made inaccurate legal judgments, and if they did reach correct conclusions, they did so through incomplete or opaque reasoning processes. “The tools actually are not there to basically substitute [for] your lawyer,” says Afra Feyza Akyurek, the lead author of the paper. “Even though a lot of people think that LLMs have a good grasp of the law, it’s still lagging behind.”
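To make the scoring method concrete: a 37% result means the model earned just over a third of the available rubric points, not that it answered 37% of questions correctly. The rubric below is invented for illustration only; the benchmark's actual criteria are not spelled out here.

```python
# Illustrative rubric-style scoring, the way these benchmarks report results:
# each task has weighted criteria, and the score is the share of points earned.
rubric = {
    "cites only real cases": 3,
    "identifies the controlling statute": 2,
    "applies the rule to the new facts": 3,
    "flags plausible counterarguments": 2,
}
criteria_met = {"cites only real cases", "identifies the controlling statute"}

earned = sum(points for criterion, points in rubric.items() if criterion in criteria_met)
total = sum(rubric.values())
print(f"Score: {earned}/{total} = {earned / total:.0%}")   # 5/10 = 50%
```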
The paper builds on other benchmarks measuring the models’ performance on economically valuable work. The AI Productivity Index, published by the data firm Mercor in September and updated in December, found that the models have “substantial limitations” in performing legal work. The best-performing model scored 77.9% on legal tasks, meaning it satisfied roughly four out of five evaluation criteria. A model with such a score might generate substantial economic value in some industries, but in fields where errors are costly, it may not be useful at all, the early version of the study noted. Professional benchmarks are a big step forward in evaluating the LLMs’ real-world capabilities, but they may still not capture what lawyers actually do. “These questions, although more challenging than those in past benchmarks, still don’t fully reflect the kinds of subjective, extremely challenging questions lawyers tackle in real life,” says Jon Choi, a law professor at the University of Washington School of Law, who coauthored a study on legal benchmarks in 2023. Unlike math or coding, in which LLMs have made significant progress, legal reasoning may be challenging for the models to learn. The law deals with messy real-world problems, riddled with ambiguity and subjectivity, that often have no right answer, says Choi. Making matters worse, a lot of legal work isn’t recorded in ways that can be used to train the models, he says. When it is, documents can span hundreds of pages, scattered across statutes, regulations, and court cases that exist in a complex hierarchy. But a more fundamental limitation might be that LLMs are simply not trained to think like lawyers. “The reasoning models still don’t fully reason about problems like we humans do,” says Julian Nyarko, a law professor at Stanford Law School. The models may lack a mental model of the world—the ability to simulate a scenario and predict what will happen—and that capability could be at the heart of complex legal reasoning, he says. It’s possible that the current paradigm of LLMs trained on next-word prediction gets us only so far.
The jobs remain
Despite early signs that AI is beginning to affect entry-level workers, labor statistics have yet to show that lawyers are being displaced. 93.4% of law school graduates in 2024 were employed within 10 months of graduation—the highest rate on record—according to the National Association for Law Placement. The number of graduates working in law firms rose by 13% from 2023 to 2024. For now, law firms are slow to shrink their ranks. “We’re not reducing headcounts at this point,” said Amy Ross, the chief of attorney talent at the law firm Ropes & Gray. Even looking ahead, the effects could be incremental. “I will expect some impact on the legal profession’s labor market, but not major,” says Mert Demirer, an economist at MIT. “AI is going to be very useful in terms of information discovery and summary,” he says, but for complex legal tasks, “the law’s low risk tolerance, plus the current capabilities of AI, are going to make that case less automatable at this point.” Capabilities may evolve over time, but that’s a big unknown. It’s not just that the models themselves are not ready to replace junior lawyers. Institutional barriers may also shape how AI is deployed. Higher productivity reduces billable hours, challenging the dominant business model of law firms. Liability looms large for lawyers, and clients may still want a human on the hook. Regulations could also constrain how lawyers use the technology.
Still, as AI takes on some associate work, law firms may need to reinvent their training system. “When junior work dries up, you have to have a more formal way of teaching than hoping that an apprenticeship works,” says Ethan Mollick, a management professor at the Wharton School of the University of Pennsylvania. Zach Couger, a junior associate at McDermott Will & Schulte, leans on ChatGPT to comb through piles of contracts he once slogged through by hand. He can’t imagine going back to doing the job himself, but he wonders what he’s missing. “I’m worried that I’m not getting the same reps that senior attorneys got,” he says, referring to the repetitive training that has long defined the early experiences of lawyers. “On the other hand, it is very nice to have a semi–knowledge expert to just ask questions to that’s not a partner who’s also very busy.” Even though an AI job apocalypse looks distant, the uncertainty sticks with him. Lately, Couger finds himself staying up late, wondering if he could be part of the last class of associates at big law firms: “I may be the last plane out.”

AI materials discovery now needs to move into the real world
The microwave-size instrument at Lila Sciences in Cambridge, Massachusetts, doesn’t look all that different from others that I’ve seen in state-of-the-art materials labs. Inside its vacuum chamber, the machine zaps a palette of different elements to create vaporized particles, which then fly through the chamber and land to create a thin film, using a technique called sputtering. What sets this instrument apart is that artificial intelligence is running the experiment; an AI agent, trained on vast amounts of scientific literature and data, has determined the recipe and is varying the combination of elements. Later, a person will walk the samples, each containing multiple potential catalysts, over to a different part of the lab for testing. Another AI agent will scan and interpret the data, using it to suggest another round of experiments to try to optimize the materials’ performance. This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next. For now, a human scientist keeps a close eye on the experiments and will approve the next steps on the basis of the AI’s suggestions and the test results. But the startup is convinced this AI-controlled machine is a peek into the future of materials discovery—one in which autonomous labs could make it far cheaper and faster to come up with novel and useful compounds. Flush with hundreds of millions of dollars in new funding, Lila Sciences is one of AI’s latest unicorns. The company is on a larger mission to use AI-run autonomous labs for scientific discovery—the goal is to achieve what it calls scientific superintelligence. But I’m here this morning to learn specifically about the discovery of new materials.
Lila Sciences’ John Gregoire (background) and Rafael Gómez-Bombarelli watch as an AI-guided sputtering instrument makes samples of thin-film alloys. CODY O’LOUGHLIN
We desperately need better materials to solve our problems. We’ll need improved electrodes and other parts for more powerful batteries; compounds to more cheaply suck carbon dioxide out of the air; and better catalysts to make green hydrogen and other clean fuels and chemicals. And we will likely need novel materials like higher-temperature superconductors, improved magnets, and different types of semiconductors for a next generation of breakthroughs in everything from quantum computing to fusion power to AI hardware.
But materials science has not had many commercial wins in the last few decades. In part because of its complexity and the lack of successes, the field has become something of an innovation backwater, overshadowed by the more glamorous—and lucrative—search for new drugs and insights into biology. The idea of using AI for materials discovery is not exactly new, but it got a huge boost in 2020 when DeepMind showed that its AlphaFold2 model could accurately predict the three-dimensional structure of proteins. Then, in 2022, came the success and popularity of ChatGPT. The hope that similar AI models using deep learning could aid in doing science captivated tech insiders. Why not use our new generative AI capabilities to search the vast chemical landscape and help simulate atomic structures, pointing the way to new substances with amazing properties?
Researchers touted an AI model that had reportedly discovered “millions of new materials.” The money began pouring in, funding a host of startups. But so far there has been no “eureka” moment, no ChatGPT-like breakthrough—no discovery of new miracle materials or even slightly better ones. The startups that want to find useful new compounds face a common bottleneck: By far the most time-consuming and expensive step in materials discovery is not imagining new structures but making them in the real world. Before trying to synthesize a material, you don’t know if, in fact, it can be made and is stable, and many of its properties remain unknown until you test it in the lab. “Simulations can be super powerful for kind of framing problems and understanding what is worth testing in the lab,” says John Gregoire, Lila Sciences’ chief autonomous science officer. “But there’s zero problems we can ever solve in the real world with simulation alone.” Startups like Lila Sciences have staked their strategies on using AI to transform experimentation and are building labs that use agents to plan, run, and interpret the results of experiments to synthesize new materials. Automation in laboratories already exists. But the idea is to have AI agents take it to the next level by directing autonomous labs, where their tasks could include designing experiments and controlling the robotics used to shuffle samples around. And, most important, companies want to use AI to vacuum up and analyze the vast amount of data produced by such experiments in the search for clues to better materials. If they succeed, these companies could shorten the discovery process from decades to a few years or less, helping uncover new materials and optimize existing ones. But it’s a gamble. Even though AI is already taking over many laboratory chores and tasks, finding new—and useful—materials on its own is another matter entirely.
Innovation backwater
I have been reporting about materials discovery for nearly 40 years, and to be honest, there have been only a few memorable commercial breakthroughs, such as lithium-ion batteries, over that time. There have been plenty of scientific advances to write about, from perovskite solar cells to graphene transistors to metal-organic frameworks (MOFs), materials based on an intriguing type of molecular architecture that recently won its inventors a Nobel Prize. But few of those advances—including MOFs—have made it far out of the lab. Others, like quantum dots, have found some commercial uses, but in general, the kinds of life-changing inventions created in earlier decades have been lacking.
Blame the amount of time (typically 20 years or more) and the hundreds of millions of dollars it takes to make, test, optimize, and manufacture a new material—and the industry’s lack of interest in spending that kind of time and money in low-margin commodity markets. Or maybe we’ve just run out of ideas for making stuff. The need to both speed up that process and find new ideas is the reason researchers have turned to AI. For decades, scientists have used computers to design potential materials, calculating where to place atoms to form structures that are stable and have predictable characteristics. It’s worked—but only kind of. Advances in AI have made that computational modeling far faster and have promised the ability to quickly explore a vast number of possible structures. Google DeepMind, Meta, and Microsoft have all launched efforts to bring AI tools to the problem of designing new materials. But the limitations that have always plagued computational modeling of new materials remain. With many types of materials, such as crystals, useful characteristics often can’t be predicted solely by calculating atomic structures. To uncover and optimize those properties, you need to make something real. Or as Rafael Gómez-Bombarelli, one of Lila’s cofounders and an MIT professor of materials science, puts it: “Structure helps us think about the problem, but it’s neither necessary nor sufficient for real materials problems.”
Perhaps no advance exemplified the gap between the virtual and physical worlds more than DeepMind’s announcement in late 2023 that it had used deep learning to discover “millions of new materials,” including 380,000 crystals that it declared “the most stable, making them promising candidates for experimental synthesis.” In technical terms, the arrangement of atoms represented a minimum energy state where they were content to stay put. This was “an order-of-magnitude expansion in stable materials known to humanity,” the DeepMind researchers proclaimed. To the AI community, it appeared to be the breakthrough everyone had been waiting for. The DeepMind research not only offered a gold mine of possible new materials, it also created powerful new computational methods for predicting a large number of structures. But some materials scientists had a far different reaction. After closer scrutiny, researchers at the University of California, Santa Barbara, said they’d found “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.” In fact, the scientists reported, they didn’t find any truly novel compounds among the ones they looked at; some were merely “trivial” variations of known ones. The scientists appeared particularly peeved that the potential compounds were labeled materials. They wrote: “We would respectfully suggest that the work does not report any new materials but reports a list of proposed compounds. In our view, a compound can be called a material when it exhibits some functionality and, therefore, has potential utility.” Some of the imagined crystals simply defied the conditions of the real world. To do computations on so many possible structures, DeepMind researchers simulated them at absolute zero, where atoms are well ordered; they vibrate a bit but don’t move around. At higher temperatures—the kind that would exist in the lab or anywhere in the world—the atoms fly about in complex ways, often creating more disorderly crystal structures. A number of the so-called novel materials predicted by DeepMind appeared to be well-ordered versions of disordered ones that were already known.
More generally, the DeepMind paper was simply another reminder of how challenging it is to capture physical realities in virtual simulations—at least for now. Because of the limitations of computational power, researchers typically perform calculations on relatively few atoms. Yet many desirable properties are determined by the microstructure of the materials—at a scale much larger than the atomic world. And some effects, like high-temperature superconductivity or even the catalysis that is key to many common industrial processes, are far too complex or poorly understood to be explained by atomic simulations alone.
A common language
Even so, there are signs that the divide between simulations and experimental work is beginning to narrow. DeepMind, for one, says that since the release of the 2023 paper it has been working with scientists in labs around the world to synthesize AI-identified compounds and has achieved some success. Meanwhile, a number of the startups entering the space are looking to combine computational and experimental expertise in one organization. One such startup is Periodic Labs, cofounded by Ekin Dogus Cubuk, a physicist who led the scientific team that generated the 2023 DeepMind headlines, and by Liam Fedus, a co-creator of ChatGPT at OpenAI. Despite its founders’ background in computational modeling and AI software, the company is building much of its materials discovery strategy around synthesis done in automated labs. The vision behind the startup is to link these different fields of expertise by using large language models that are trained on scientific literature and able to learn from ongoing experiments. An LLM might suggest the recipe and conditions to make a compound; it can also interpret test data and feed additional suggestions to the startup’s chemists and physicists. In this strategy, simulations might suggest possible material candidates, but they are also used to help explain the experimental results and suggest possible structural tweaks. Periodic Labs, like Lila Sciences, has ambitions beyond designing and making new materials. It wants to “create an AI scientist”—specifically, one adept at the physical sciences. “LLMs have gotten quite good at distilling chemistry information, physics information,” says Cubuk, “and now we’re trying to make it more advanced by teaching it how to do science—for example, doing simulations, doing experiments, doing theoretical modeling.” The approach, like that of Lila Sciences, is based on the expectation that a better understanding of the science behind materials and their synthesis will lead to clues that could help researchers find a broad range of new ones. One target for Periodic Labs is materials whose properties are defined by quantum effects, such as new types of magnets. The grand prize would be a room-temperature superconductor, a material that could transform computing and electricity but that has eluded scientists for decades.
Superconductors are materials in which electricity flows without any resistance and, thus, without producing heat. So far, the best of these materials become superconducting only at relatively low temperatures and require significant cooling. If they can be made to work at or close to room temperature, they could lead to far more efficient power grids, new types of quantum computers, and even more practical high-speed magnetic-levitation trains.
Lila staff scientist Natalie Page (right), Gómez-Bombarelli, and Gregoire inspect thin-film samples after they come out of the sputtering machine and before they undergo testing. CODY O’LOUGHLIN
The failure to find a room-temperature superconductor is one of the great disappointments in materials science over the last few decades. I was there when President Reagan spoke about the technology in 1987, during the peak hype over newly made ceramics that became superconducting at the relatively balmy temperature of 93 Kelvin (that’s −292 °F), enthusing that they “bring us to the threshold of a new age.” There was a sense of optimism among the scientists and businesspeople in that packed ballroom at the Washington Hilton as Reagan anticipated “a host of benefits, not least among them a reduced dependence on foreign oil, a cleaner environment, and a stronger national economy.” In retrospect, it might have been one of the last times that we pinned our economic and technical aspirations on a breakthrough in materials.
The promised new age never came. Scientists still have not found a material that becomes superconducting at room temperatures, or anywhere close, under normal conditions. The best existing superconductors are brittle and tend to make lousy wires. One of the reasons that finding higher-temperature superconductors has been so difficult is that no theory explains the effect at relatively high temperatures—or can predict it simply from the placement of atoms in the structure. It will ultimately fall to lab scientists to synthesize any interesting candidates, test them, and search the resulting data for clues to understanding the still puzzling phenomenon. Doing so, says Cubuk, is one of the top priorities of Periodic Labs.
AI in charge
It can take a researcher a year or more to make a crystal structure for the first time. Then there are typically years of further work to test its properties and figure out how to make the larger quantities needed for a commercial product. Startups like Lila Sciences and Periodic Labs are pinning their hopes largely on the prospect that AI-directed experiments can slash those times. One reason for the optimism is that many labs have already incorporated a lot of automation, for everything from preparing samples to shuttling test items around. Researchers routinely use robotic arms, software, automated versions of microscopes and other analytical instruments, and mechanized tools for manipulating lab equipment. The automation allows, among other things, for high-throughput synthesis, in which multiple samples with various combinations of ingredients are rapidly created and screened in large batches, greatly speeding up the experiments. The idea is that using AI to plan and run such automated synthesis can make it far more systematic and efficient. AI agents, which can collect and analyze far more data than any human possibly could, can use real-time information to vary the ingredients and synthesis conditions until they get a sample with the optimal properties. Such AI-directed labs could do far more experiments than a person and could be far smarter than existing systems for high-throughput synthesis. But so-called self-driving labs for materials are still a work in progress. Many types of materials require solid-state synthesis, a set of processes that are far more difficult to automate than the liquid-handling activities that are commonplace in making drugs. You need to prepare and mix powders of multiple inorganic ingredients in the right combination for making, say, a catalyst and then decide how to process the sample to create the desired structure—for example, identifying the right temperature and pressure at which to carry out the synthesis. Even determining what you’ve made can be tricky.
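A stripped-down sketch of that closed loop: an agent proposes a composition, the lab makes and measures it, and the result steers the next proposal. The "explore near the best sample so far" rule and the toy measurement below are placeholders; the startups' actual agents and instruments are far more sophisticated and aren't described here in enough detail to reproduce.

```python
import random

def propose(history):
    """Agent step: suggest the next recipe (fractions of three ingredients summing to 1)."""
    if not history:
        mix = [random.random() for _ in range(3)]                          # start from a random mix
    else:
        best_recipe, _ = max(history, key=lambda h: h[1])
        mix = [max(0.0, x + random.gauss(0, 0.05)) for x in best_recipe]   # explore near the best so far
    total = sum(mix) or 1.0
    return [x / total for x in mix]

def run_experiment(recipe):
    """Stand-in for synthesis plus testing: returns a noisy measured 'performance'."""
    target = [0.5, 0.3, 0.2]
    return -sum((a - b) ** 2 for a, b in zip(recipe, target)) + random.gauss(0, 0.01)

history = []
for _ in range(20):
    recipe = propose(history)          # the agent picks the next composition to try
    score = run_experiment(recipe)     # the lab makes and measures the sample
    history.append((recipe, score))    # results feed straight into the next proposal

best_recipe, best_score = max(history, key=lambda h: h[1])
print("Best composition found:", [round(x, 2) for x in best_recipe], "score:", round(best_score, 4))
```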
In 2023, the A-Lab at Lawrence Berkeley National Laboratory claimed to be the first fully automated lab to use inorganic powders as starting ingredients. Subsequently, scientists reported that the autonomous lab had used robotics and AI to synthesize and test 41 novel materials, including some predicted in the DeepMind database. Some critics questioned the novelty of what was produced and complained that the automated analysis of the materials was not up to experimental standards, but the Berkeley researchers defended the effort as simply a demonstration of the autonomous system’s potential. “How it works today and how we envision it are still somewhat different. There’s just a lot of tool building that needs to be done,” says Gerbrand Ceder, the principal scientist behind the A-Lab. AI agents are already getting good at doing many laboratory chores, from preparing recipes to interpreting some kinds of test data—finding, for example, patterns in a micrograph that might be hidden to the human eye. But Ceder is hoping the technology could soon “capture human decision-making,” analyzing ongoing experiments to make strategic choices on what to do next. For example, his group is working on an improved synthesis agent that would better incorporate what he calls scientists’ “diffused” knowledge—the kind gained from extensive training and experience. “I imagine a world where people build agents around their expertise, and then there’s sort of an uber-model that puts it together,” he says. “The uber-model essentially needs to know what agents it can call on and what they know, or what their expertise is.” One of the strengths of AI agents is their ability to devour vast amounts of scientific literature. “In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in,” says Ceder. It’s impossible for anyone to keep up. “The AI revolution is about finally gathering all the scientific data we have,” he says. Last summer, Ceder became the chief science officer at an AI materials discovery startup called Radical AI and took a sabbatical from the University of California, Berkeley, to help set up its self-driving labs in New York City. A slide deck shows the portfolio of different AI agents and generative models meant to help realize Ceder’s vision. If you look closely, you can spot an LLM called the “orchestrator”—it’s what CEO Joseph Krause calls the “head honcho.”
New hope
So far, despite the hype around the use of AI to discover new materials and the growing momentum—and money—behind the field, there still has not been a convincing big win. There is no example like the 2016 victory of DeepMind’s AlphaGo over a Go world champion. Or like AlphaFold’s achievement in mastering one of biomedicine’s hardest and most time-consuming chores, predicting 3D structures of proteins. The field of materials discovery is still waiting for its moment. It could come if AI agents can dramatically speed the design or synthesis of practical materials, similar to but better than what we have today. Or maybe the moment will be the discovery of a truly novel one, such as a room-temperature superconductor.
A small window provides a view of the inside workings of Lila’s sputtering instrument. The startup uses the machine to create a wide variety of experimental samples, including potential materials that could be useful for coatings and catalysts. CODY O’LOUGHLIN
With or without such a breakthrough moment, startups face the challenge of trying to turn their scientific achievements into useful materials. The task is particularly difficult because any new materials would likely have to be commercialized in an industry dominated by large incumbents that are not particularly prone to risk-taking. Susan Schofer, a tech investor and partner at the venture capital firm SOSV, is cautiously optimistic about the field. But Schofer, who spent several years in the mid-2000s as a catalyst researcher at one of the first startups using automation and high-throughput screening for materials discovery (it didn’t survive), wants to see some evidence that the technology can translate into commercial successes when she evaluates startups to invest in. In particular, she wants to see evidence that the AI startups are already “finding something new, that’s different, and know how they are going to iterate from there.” And she wants to see a business model that captures the value of new materials. She says, “I think the ideal would be: I got a spec from the industry. I know what their problem is. We’ve defined it. Now we’re going to go build it. Now we have a new material that we can sell, that we have scaled up enough that we’ve proven it. And then we partner somehow to manufacture it, but we get revenue off selling the material.” Schofer says that while she gets the vision of trying to redefine science, she’d advise startups to “show us how you’re going to get there.” She adds, “Let’s see the first steps.” Demonstrating those first steps could be essential in enticing large existing materials companies to embrace AI technologies more fully. Corporate researchers in the industry have been burned before—by the promise over the decades that increasingly powerful computers will magically design new materials; by combinatorial chemistry, a fad that raced through materials R&D labs in the early 2000s with little tangible result; and by the promise that synthetic biology would make our next generation of chemicals and materials. More recently, the materials community has been blanketed by a new hype cycle around AI. Some of that hype was fueled by the 2023 DeepMind announcement of the discovery of “millions of new materials,” a claim that, in retrospect, clearly overpromised. And it was further fueled when an MIT economics student posted a paper in late 2024 claiming that a large, unnamed corporate R&D lab had used AI to efficiently invent a slew of new materials. AI, it seemed, was already revolutionizing the industry. A few months later, the MIT economics department concluded that “the paper should be withdrawn from public discourse.” Two prominent MIT economists who are acknowledged in a footnote in the paper added that they had “no confidence in the provenance, reliability or validity of the data and the veracity of the research.” Can AI move beyond the hype and false hopes and truly transform materials discovery? Maybe. There is ample evidence that it’s changing how materials scientists work, providing them—if nothing else—with useful lab tools. Researchers are increasingly using LLMs to query the scientific literature and spot patterns in experimental data.
But it’s still early days in turning those AI tools into actual materials discoveries. The use of AI to run autonomous labs, in particular, is just getting underway; making and testing stuff takes time and lots of money. The morning I visited Lila Sciences, its labs were largely empty, and it’s now preparing to move into a much larger space a few miles away. Periodic Labs is just beginning to set up its lab in San Francisco. It’s starting with manual synthesis guided by AI predictions; its robotic high-throughput lab will come soon. Radical AI reports that its lab is almost fully autonomous but plans to soon move to a larger space.
Prominent AI researchers Liam Fedus (left) and Ekin Dogus Cubuk are the cofounders of Periodic Labs. The San Francisco–based startup aims to build an AI scientist that’s adept at the physical sciences. JASON HENRY
When I talk to the scientific founders of these startups, I hear a renewed excitement about a field that long operated in the shadows of drug discovery and genomic medicine. For one thing, there is the money. “You see this enormous enthusiasm to put AI and materials together,” says Ceder. “I’ve never seen this much money flow into materials.” Reviving the materials industry is a challenge that goes beyond scientific advances, however. It means selling companies on a whole new way of doing R&D. But the startups benefit from a huge dose of confidence borrowed from the rest of the AI industry. And maybe that, after years of playing it safe, is just what the materials business needs. This story is part of an online package on resetting expectations around AI. For more see technologyreview.com/hypecorrection.

Improved Gemini audio models for powerful voice experiences
What customers are saying
Google Cloud customers are already using Gemini’s native audio capabilities to drive real business results, from mortgage processing to customer calls.
“Users often forget they’re talking to AI within a minute of using Sidekick, and in some cases have thanked the bot after a long chat…New Live API AI capabilities offered through Gemini [2.5 Flash Native Audio] empower our merchants to win.” – David Wurtz, VP of Product, Shopify
“By integrating the Gemini 2.5 Flash Native Audio model…we’ve significantly enhanced Mia’s capabilities since launching in May 2025. This powerful combination has enabled us to generate over 14,000 loans for our broker partners.” – Jason Bressler, Chief Technology Officer, United Wholesale Mortgage (UWM)
“Working with the Gemini 2.5 Flash Native Audio model through Vertex AI allows Newo.ai AI Receptionists to achieve unmatched conversational intelligence… They can identify the main speaker even in noisy settings, switch languages mid-conversation, and sound remarkably natural and emotionally expressive.” – David Yang, Co-founder, Newo.ai
Live Speech Translation
Gemini now natively supports new live speech-to-speech translation capabilities designed to handle both continuous listening and two-way conversation.
With continuous listening, Gemini automatically translates speech in multiple languages into a single target language. This allows you to put headphones in and hear the world around you in your language.
For two-way conversation, Gemini’s live speech translation handles translation between two languages in real time, automatically switching the output language based on who is speaking. For example, if you speak English and want to chat with a Hindi speaker, you’ll hear English translations in real time in your headphones, while your phone broadcasts Hindi when you’re done speaking.
Gemini’s live speech translation has a number of key capabilities that help in the real world:
Language coverage: Translates speech in over 70 languages and 2,000 language pairs by combining the Gemini model’s world knowledge and multilingual capabilities with its native audio capabilities.
Style transfer: Captures the nuance of human speech, preserving the speaker’s intonation, pacing, and pitch so the translation sounds natural.
Multilingual input: Understands multiple languages simultaneously in a single session, helping you follow multilingual conversations without needing to fiddle with language settings.
Auto detection: Identifies the spoken language and begins translation, so you don’t even need to know what language is being spoken to start translating.
Noise robustness: Filters out ambient noise so you can converse comfortably even in loud, outdoor environments.
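For the two-way mode described above, the core behavior is a routing decision: detect which language was just spoken, translate toward the other participant, and send the audio to the right output. The sketch below illustrates only that logic; detect_language and translate_speech are hypothetical placeholders, not Gemini API calls, since the Live API handles this switching natively.

```python
# Illustrative routing logic for two-way live translation (hypothetical helpers,
# not the Gemini Live API, which handles this switching natively).

LANG_A = "en"   # what the first speaker speaks and wants to hear (in headphones)
LANG_B = "hi"   # what the second speaker speaks and wants to hear (on the phone speaker)

def detect_language(audio_chunk: bytes) -> str:
    """Hypothetical auto-detection step."""
    raise NotImplementedError

def translate_speech(audio_chunk: bytes, source: str, target: str) -> bytes:
    """Hypothetical speech-to-speech translation call."""
    raise NotImplementedError

def route(audio_chunk: bytes) -> tuple[bytes, str]:
    """Return translated audio plus the output it should play on."""
    spoken = detect_language(audio_chunk)
    if spoken == LANG_A:
        # The first speaker talked: broadcast the other language on the phone speaker.
        return translate_speech(audio_chunk, LANG_A, LANG_B), "phone speaker"
    # The second speaker talked: play the first speaker's language in their headphones.
    return translate_speech(audio_chunk, spoken, LANG_A), "headphones"
```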

The Download: expanded carrier screening, and how Southeast Asia plans to get to space
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Expanded carrier screening: Is it worth it?

Carrier screening tests would-be parents for hidden genetic mutations that might affect their children. It initially involved testing for specific genes in at-risk populations. Expanded carrier screening takes things further, offering the option to test prospective parents and egg and sperm donors for a wide array of diseases.
The companies offering these screens “started out with 100 genes, and now some of them go up to 2,000,” Sara Levene, a genetic counsellor at Guided Genetics, said at a meeting I attended this week. “It’s becoming a bit of an arms race amongst labs, to be honest.” But expanded carrier screening comes with downsides. And it isn’t for everyone. Read the full story.
—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Southeast Asia seeks its place in space

It’s a scorching October day in Bangkok and I’m wandering through the exhibits at the Thai Space Expo, held in one of the city’s busiest shopping malls, when I do a double take. Amid the flashy space suits and model rockets on display, there’s a plain-looking package of Thai basil chicken. I’m told the same kind of vacuum-sealed package has just been launched to the International Space Station.

It’s an unexpected sight, one that reflects the growing excitement within the Southeast Asian space sector. And while there is some uncertainty about how exactly the region’s space sector may evolve, there is plenty of optimism, too. Read the full story.

—Jonathan O’Callaghan

This story is from the next print issue of MIT Technology Review magazine. If you haven’t already, subscribe now to receive future issues once they land.
The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Disney just signed a major deal with OpenAI
Meaning you’ll soon be able to create Sora clips starring 200 Marvel, Pixar and Star Wars characters. (Hollywood Reporter $)
+ Disney used to be openly skeptical of AI. What changed? (WSJ $)
+ It’s not feeling quite so friendly towards Google, however. (Ars Technica)
+ Expect a load of AI slop making its way to Disney Plus. (The Verge)

2 Donald Trump has blocked US states from enforcing their own AI rules
But technically, only Congress has the power to override state laws. (NYT $)
+ A new task force will seek out states with “inconsistent” AI rules. (Engadget)
+ The move is particularly bad news for California. (The Markup)

3 Reddit is challenging Australia’s social media ban for teens
It’s arguing that the ban infringes on their freedom of political communication. (Bloomberg $)
+ We’re learning more about the mysterious machinations of the teenage brain. (Vox)

4 ChatGPT’s “adult mode” is due to launch early next year
But OpenAI admits it needs to improve its age estimation tech first. (The Verge)
+ It’s pretty easy to get DeepSeek to talk dirty. (MIT Technology Review)

5 The death of Running Tide’s carbon removal dream
The company’s demise is a wake-up call to others dabbling in experimental tech. (Wired $)
+ We first wrote about Running Tide’s issues back in 2022. (MIT Technology Review)
+ What’s next for carbon removal? (MIT Technology Review)

6 That dirty-talking AI teddy bear wasn’t a one-off
It turns out that a wide range of LLM-powered toys aren’t suitable for children. (NBC News)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

7 These are the cheapest places to create a fake online account
For a few cents, scammers can easily set up bots. (FT $)

8 How professors are attempting to AI-proof exams
ChatGPT won’t help you cut corners to ace an oral examination. (WP $)

9 Can a font be woke?
Marco Rubio seems to think so. (The Atlantic $)

10 Next year is all about maximalist circus decor 🎪
That’s according to Pinterest’s trend predictions for 2026. (The Guardian)
Quote of the day
“Trump is delivering exactly what his billionaire benefactors demanded—all at the expense of our kids, our communities, our workers, and our planet.”

—Senator Ed Markey criticizes Donald Trump’s decision to sign an order cracking down on US states’ ability to self-regulate AI, the Wall Street Journal reports.

One more thing

Taiwan’s “silicon shield” could be weakening

Taiwanese politics increasingly revolves around one crucial question: Will China invade? China’s ruling party has wanted to seize Taiwan for more than half a century. But in recent years, China’s leader, Xi Jinping, has placed greater emphasis on the idea of “taking back” the island (which the Chinese Communist Party, or CCP, has never controlled).

Many in Taiwan and elsewhere think one major deterrent has to do with the island’s critical role in semiconductor manufacturing. Taiwan produces the majority of the world’s semiconductors and more than 90% of the most advanced chips needed for AI applications.

But now some Taiwan specialists and some of the island’s citizens are worried that this “silicon shield,” if it ever existed, is cracking. Read the full story.

—Johanna M. Costigan
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Reasons to be cheerful: people are actually nicer than we think they are.
+ This year’s Krampus Run in Whitby—the Yorkshire town that inspired Bram Stoker’s Dracula—looks delightfully spooky.
+ How to find the magic in that most mundane of locations: the airport.
+ The happiest of birthdays to Dionne Warwick, who turns 85 today.

The Download: introducing the AI Hype Correction package
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Introducing: the AI Hype Correction package AI is going to reproduce human intelligence. AI will eliminate disease. AI is the single biggest, most important invention in human history. You’ve likely heard it all—but probably none of these things are true.AI is changing our world, but we don’t yet know the real winners, or how this will all shake out.After a few years of out-of-control hype, people are now starting to re-calibrate what AI is, what it can do, and how we should think about its ultimate impact.Here, at the end of 2025, we’re starting the post-hype phase. This new package of stories, called Hype Correction, is a way to reset expectations—a critical look at where we are, what AI makes possible, and where we go next.Here’s a sneak peek at what you can expect: + An introduction to four ways of thinking about the great AI hype correction of 2025.+ While it’s safe to say we’re definitely in an AI bubble right now, what’s less clear is what it really looks like—and what comes after it pops. Read the full story.+ Why OpenAI’s Sam Altman can be traced back to so many of the more outlandish proclamations about AI doing the rounds these days. Read the full story.+ It’s a weird time to be an AI doomer. But they’re not giving up.+ AI coding is now everywhere—but despite the billions of dollars being poured into improving AI models’ coding abilities, not everyone is convinced. Read the full story.+ If we really want to start finding new kinds of materials faster, AI materials discovery needs to make it out of the lab and move into the real world. Read the full story.+ Why reports of AI’s potential to replace trained human lawyers are greatly exaggerated.+ Dr. Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, explains why the generative AI hype train is distracting us from what AI actually is and what it can—and crucially, cannot—do. Read the full story.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 iRobot has filed for bankruptcyThe Roomba maker is considering handing over control to its main Chinese supplier. (Bloomberg $)+ A proposed Amazon acquisition fell through close to two years ago. (FT $)+ How the company lost its way. (TechCrunch)+ A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (MIT Technology Review) 2 Meta’s 2025 has been a total rollercoaster rideFrom its controversial AI team to Mark Zuckerberg’s newfound appreciation for masculine energy. (Insider $) 3 The Trump administration is giving the crypto industry a much easier rideIt’s dismissed crypto lawsuits involving many firms with financial ties to Trump. (NYT $)+ Celebrities are feeling emboldened to flog crypto once again. (The Guardian)+ A bitcoin investor wants to set up a crypto libertarian community in the Caribbean. (FT $) 4 There’s a new weight-loss drug in townAnd people are already taking it, even though it’s unapproved. (Wired $)+ What we still don’t know about weight-loss drugs. (MIT Technology Review)5 Chinese billionaires are having dozens of US-born surrogate babiesAn entire industry has sprung up to support them. (WSJ $)+ A controversial Chinese CRISPR scientist is still hopeful about embryo gene editing. (MIT Technology Review) 6 Trump’s “big beautiful bill” funding hinges on states integrating AI into healthcareExperts fear it’ll be used as a cost-cutting measure, even if it doesn’t work. (The Guardian)+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review) 7 Extreme rainfall is wreaking havoc in the desertOman and the UAE are unaccustomed to increasingly common torrential downpours. (WP $) 8 Data centers are being built in countries that are too hot for themWhich makes it a lot harder to cool them sufficiently. (Rest of World)
9 Why AI image generators are getting deliberately worse
Their makers are pursuing realism—not that overly polished, Uncanny Valley look. (The Verge)
+ Inside the AI attention economy wars. (NY Mag $)

10 How a tiny Swedish city became a major video game hub
Skövde has formed an unlikely community of cutting-edge developers. (The Guardian)
+ Google DeepMind is using Gemini to train agents inside one of Skövde’s biggest franchises. (MIT Technology Review)

Quote of the day

“They don’t care about the games. They don’t care about the art. They just want their money.”

—Anna C Webster, chair of the freelancing committee of the United Videogame Workers union, tells the Guardian why their members are protesting the prestigious 2025 Game Awards in the wake of major layoffs.

One more thing
Recapturing early internet whimsy with HTML

Websites weren’t always slick digital experiences. There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Here’s how a bit of math can help you wrap your presents much more neatly this year.
+ It seems that humans mastered making fire way, way earlier than we realized.
+ The Arab-owned cafes opening up across the US sound warm and welcoming.
+ How to give a gift the recipient will still be using and loving for decades to come.

Uniper Approves Its First Scottish Solar Project
Uniper SE said it has decided to proceed with constructing its first solar project in Scotland, the 45-megawatt (MW) Berryhill Solar Farm just north of Dundee. The German power and gas utility expects to start construction in “early 2026” and fire up the facility later in the year.

Berryhill’s output, from about 150,000 solar panels, would be enough “to power the equivalent of over 12,500 UK households each year, 1/5th the population of Angus – contributing to the UK’s net zero targets”, Uniper said in an online statement. “The project has been developed jointly with partner Solar2 and Uniper plans to start the construction process as its sole owner”, Uniper said.

Scottish and Southern Electricity Networks, the distribution network operator, is responsible for installing the nine-kilometer (5.59-mile) medium-voltage cable to connect the solar farm to the grid’s point of connection at the Charleston substation on the Kingsway in Dundee, according to Uniper. The project had obtained planning permission from the Angus Council in 2022, which was modified in 2024, it said. “Once operational, there will be a community benefit fund associated with the project which will be agreed with the applicable local community council – Muirhead, Birkhill and Liff Council”, Uniper said.

In another Scottish renewable project, Uniper earlier this year greenlighted the construction of a 46.2-MW wind farm in East Ayrshire. It expects to put the facility into production in 2028 with enough generation for 66,000 United Kingdom homes each year, it said in a press release June 11. Uniper had announced two other UK renewable power projects in 2025: the Tamworth Solar Farm with a capacity of around 44.2 MWp and the 21.33-MWp Totmonslow Solar Farm. The two projects’ combined capacity can power about 23,300 homes a year, according to Uniper. Uniper aims to connect the projects, which received approval from local
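As a rough back-of-the-envelope check on the “over 12,500 UK households” figure, the sketch below assumes a capacity factor of around 11 percent for solar in eastern Scotland and typical annual UK household consumption of roughly 3,400 kWh; both inputs are illustrative assumptions, not figures from Uniper.

```python
# Back-of-the-envelope check on the households claim (assumed inputs, not Uniper figures).
capacity_mw = 45                  # Berryhill nameplate capacity, from the article
capacity_factor = 0.11            # assumed capacity factor for Scottish solar
household_kwh_per_year = 3_400    # assumed typical annual UK household consumption

annual_output_mwh = capacity_mw * 8_760 * capacity_factor        # roughly 43,000 MWh per year
households = annual_output_mwh * 1_000 / household_kwh_per_year
print(f"~{households:,.0f} households")                           # roughly 12,750, consistent with "over 12,500"
```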

Strategists Say Oil’s Fermi Paradox Nearing an End
In an oil and gas report sent to Rigzone recently by the Macquarie team, Macquarie strategists, including Vikas Dwivedi, noted that oil’s “Fermi Paradox [is] nearing an end”, adding that “onshore stocks [are] starting to build”.

“We continue to expect a heavily oversupplied market,” the strategists said in the report. “We estimate a 1Q26 peak supply-demand surplus of over four million barrels per day. Signs of the surplus are showing with continued offshore builds, increasing onshore builds, and extremely strong freight rates,” they added. “We estimate that approximately one-third of the offshore build is long-haul shipments from the Americas to Asia,” they continued.

In the report, the strategists revealed that they expect onshore builds to accelerate through year-end 2025 and into early 2026, a process which they said “should drive Brent towards the low $50 range, with a possibility of reaching $45 per barrel”.

“Since the end of August, offshore inventories have increased by roughly 250 million barrels and onshore storage up by ~30 million barrels,” the strategists highlighted in the report. “In the past month, the trend has accelerated with onshore … [plus] offshore stocks building by ~three million barrels per day. Yet, structure remains backwardated, as AB barrels continued clearing East,” they added.

A separate report sent to Rigzone by the Macquarie team on December 5 showed that Macquarie was projecting that the Brent price will average $68.21 per barrel overall in 2025 and $60.75 per barrel overall in 2026. According to that report, Macquarie expects the Brent price to average $63.00 per barrel in the fourth quarter of this year, $57.00 per barrel in the first quarter of 2026, $59.00 per barrel in the second quarter, $60.00 per barrel in the third quarter, and $67.00 per barrel in the fourth quarter. In that report, Macquarie
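One quick consistency check: the 2026 quarterly forecasts quoted above average out to the stated annual figure, assuming a simple unweighted mean (an assumption about the methodology rather than something stated in the report).

```python
# Check that the 2026 quarterly Brent forecasts average to the stated annual figure.
quarterly_2026 = {"Q1": 57.00, "Q2": 59.00, "Q3": 60.00, "Q4": 67.00}   # $/bbl, from the report
annual_avg = sum(quarterly_2026.values()) / len(quarterly_2026)
print(f"Implied 2026 average: ${annual_avg:.2f}/bbl")                    # $60.75/bbl, matching the report
```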

BP, Eni Exit Two Angola Producing Assets
A consortium between BW Energy Ltd and Maurel & Prom SA has signed an agreement with Azule Energy Holdings Ltd, a 50-50 venture between BP PLC and Eni SpA, to acquire Azule’s non-operating stakes in Block 14 and Block 14K offshore Angola.

Chevron Corp-operated Block 14, a mature deepwater area, produces about 40,000 barrels of oil per day (bopd) gross from nine fields, while Block 14K is a tieback to Block 14 with current production of 2,000 bopd gross, BW Energy and Maurel & Prom noted in separate press releases. BW Energy, part of Singapore-based energy shipping operator BW Group Ltd, and Maurel & Prom, a Paris-based exploration and production company majority-owned by Indonesia’s national oil and gas company PT Pertamina (Persero), will each acquire 10 percent in Block 14 and five percent in Block 14K. Azule said separately that its combined share from the blocks averaged 9,600 bopd in 2024.

“This transaction is aligned with Azule Energy’s strategy to concentrate our efforts on our core assets in Angola”, said Azule chief executive Joseph Murphy. Last year in Angola, Azule sold its 12 percent interest in Block 3/05 and 16 percent stake in the Lower Congo Basin to Afentra, as confirmed by Azule May 23, 2024. The transaction with BW Energy and Maurel & Prom is valued at up to $310 million, including up to $115 million of deferred payments, Azule said.

BW Energy said of the assets, “Current producing reserves are estimated at 9.3 million barrels net to BW Energy, with several identified opportunities to further increase recoverable volumes”. It added, “Abandonment and decommissioning costs are covered by existing provisions”. BW Energy chief executive Carl K. Arnet said, “The entry to Angola is a key step in BW Energy’s West Africa growth strategy and provides further diversification of our resource base”. “Angola is a mature hydrocarbon basin
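For a sense of scale, the sketch below applies the acquired stake percentages to the gross production rates quoted above; it is an indicative calculation only and ignores entitlement, lifting, and other adjustments the parties may report.

```python
# Indicative production acquired by each buyer, from the stakes and gross rates in the article.
block_14_gross_bopd = 40_000     # Chevron-operated Block 14, gross
block_14k_gross_bopd = 2_000     # Block 14K tieback, gross

net_each_bopd = 0.10 * block_14_gross_bopd + 0.05 * block_14k_gross_bopd
print(f"~{net_each_bopd:,.0f} bopd net to each of BW Energy and Maurel & Prom")   # ~4,100 bopd

# Implied up-front portion if both "up to" caps apply (headline value less deferred payments).
upfront_usd_m = 310 - 115
print(f"~${upfront_usd_m} million payable up front (consortium total, at most)")
```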

WoodMac Says Eni Find Reinforces Kutei as One of Hottest Plays
Eni’s latest discovery in Indonesia reinforces the Kutei Basin’s reputation as one of the hottest global exploration plays of recent years. That’s what Andrew Harwood, Wood Mackenzie (WoodMac) Vice President, Corporate Research, said in a statement sent to Rigzone, adding that the find “will add to Indonesia’s gas resources when the country increasingly focuses on gas availability”. “It provides options for Indonesia as the nation balances domestic demand needs with future export opportunities,” Harwood said.

Harwood noted that the Konta-1 discovery “adds momentum to Eni’s existing plans to invest in and develop new gas sources for the currently underutilized Bontang LNG plant”. “The Konta-1 discovery lies in the northern Muara Bakau area, close to Eni’s pre-FID Kutei North Hub. It provides future tie-back upside and offers Plan B for Eni if the un-appraised Geng North underperforms initial expectations,” he added.

Harwood also said Eni’s latest find encourages the company’s ongoing exploration campaign, which he pointed out runs into 2026. “Wood Mackenzie’s pick of prospects in line for drilling is Geliga, which holds multi trillion cubic foot potential,” he stated. Harwood went on to note that 2026 “looks exciting for Eni’s Indonesian portfolio with several major milestones ahead”. “These include exploration campaign results, a final investment decision on the Northern hub development, and the launch of ‘NewCo’ – the strategic satellite venture between Eni and Petronas,” he highlighted.

In a statement sent to Rigzone recently, Eni announced a “significant gas discovery” in the Konta-1 exploration well off the coast of East Kalimantan in Indonesia. “Estimates indicate 600 billion cubic feet of gas initially in place (GIIP) with a potential upside beyond one trillion cubic feet,” Eni said in the statement. “Konta-1 was drilled to a depth of 4,575 meters [15,009 feet] in 570 meters [1,870 feet] water depth, encountering gas in

China Fossil Fuel Generation Set for First Drop in Decade
China’s fossil fuel power plants are on track to chart their first annual drop in generation in a decade as renewables flood the grid to meet rising demand. Thermal electricity output fell 4.2 percent in November, according to data published by the National Bureau of Statistics on Monday. Generation from coal and gas-fired plants is down 0.7 percent this year, on track for the first annual decline since 2015 unless there’s a sharp jump in December.

China’s massive fleet of coal power stations is the world’s leading source of the greenhouse gases fueling global warming. Even though the nation is continuing to build more of the plants, their use is plateauing as huge investments in renewables meet growing consumption needs. Wind power generation jumped 22 percent in November from the previous year, while large solar farms saw a 23 percent rise, additional data released Monday showed.

Even as power-sector emissions in China drop, they’ve been largely offset by rising pollution from a growing fleet of chemicals and plastics factories, according to the Centre for Research on Energy and Clean Air. The nation’s coal output fell on an annual basis for a fifth month, while oil and natural gas production continued to rise toward annual records.
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.