Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Discover What Matter Most to You
Featured Articles

CEO Denies Alleged TotalEnergies Link to Mozambique Crimes
TotalEnergies SE Chief Executive Officer Patrick Pouyanne rejected accusations the French energy firm has responsibilities in alleged killing of civilians four years ago at its liquefied natural gas project site in Mozambique. The company “is accused of having directly financed and materially supported” a group of armed forces, who “allegedly detained, tortured and killed dozens of civilians” at the LNG project in the north of the country, the European Center for Constitutional and Human Rights said in a statement Tuesday. It filed a criminal complaint over the allegations with the French National Anti-Terrorism Prosecutor this week. “We will defend ourselves and we will explain that all this has nothing to do with TotalEnergies,” Pouyanne said Wednesday on LCI television station. “We’ve done inquiries. We never managed to find evidence” of the allegations. The complaint comes as Total is on the verge of restarting construction of the project for the first time since the site was shut in 2021 due to an Islamist insurgency. Other global corporations operating in conflict areas have had cases brought against them including Holcim Ltd.’s Lafarge, on trial in France over operations in Syria, and a US ruling against BNP Paribas related to Sudan. The ECCHR complaint, citing an account by Politico, accuses Total of “complicity in war crimes” through a financial link to a Mozambican army unit that allegedly held civilians in shipping containers where dozens of them were tortured and killed at the project between July and September 2021. The company had evacuated the site earlier that year after an attack by insurgents and declared a force majeure. In 2023, Jean-Christophe Rufin, a former French ambassador hired by Total to review the security and humanitarian situation around the project, warned that the developers should stop paying bonuses to Mozambique’s security forces protecting the site. Total asked government authorities to

Powering the grid: embracing EPC for extra-high-voltage growth
Across the country, the demand for power is soaring. Hyperscale facilities, rising industrial load, extreme weather impacts and the loss of firm power capacity are pushing the grid harder than ever. Integration of renewable and distributed generation sources — often far from load centers — has been constrained as infrastructure build-out has lagged soaring demand. The response from the energy sector has been a boom in capital investment, significant new construction and rebuilds of aging infrastructure, aiming to dramatically increase capacity on the grid. The complexity and sheer scale of these projects pose serious risks. A streamlined approach to project delivery, utilizing the engineer-procure-construct (EPC) model, will be key to delivering at the rate the market demands. Accelerating the front end of projects, from concept to mobilization, offers opportunities to optimize through integrated delivery and collaborative contracting. Three important takeaways: Extra-high-voltage (EHV) projects, such as 765-kV transmission lines, are an important part of the sector’s response to modern challenges. Given limited practical experience with such projects, partnerships can better leverage that pool of experience. The portfolio-based approach required to scale extra-high-voltage infrastructure needs EPC delivery for maximum efficiency. The importance of collaboration and coordination is magnified for region-spanning efforts. Bridging Experience Gaps Solving capacity challenges means significant capital investment is essential, particularly in EHV transmission infrastructure. However, most of the limited 765-kV infrastructure in the U.S. was built decades ago. The number of people in today’s workforce who have hands-on experience with design, construction or commissioning at that scale is small and shrinking. The supply of experienced workers — especially field personnel, skilled linemen and engineering leadership — for high-voltage projects is a major constraint in an already-tight labor market. The risk created by that lack of bench strength requires trust among all stakeholders for the projects. Intentional knowledge transfer

Dynagas Q3 Revenue Down YoY
Dynagas LNG Partners LP on Thursday reported $38.89 million in revenue for the third quarter, down from $39.07 million for the same three-month period last year. The decrease brought down net profit adjusted for nonrecurring items from $14.48 million for Q3 2024 to $14.23 million, or $0.36 per share, for Q3 2025, the Athens-based owner and operator of liquefied natural gas (LNG) carriers said in its quarterly report. The revenue fall was driven by “the decrease of the daily hire rate of the Arctic Aurora in the three-month period ending September 30, 2025, and the decrease in revenue earning days of the Yenisei River due to unscheduled repairs”, Dynagas said. “The above decrease in voyage revenues was partially offset by the non-cash effect of the amortization of deferred revenues and the value of the EU ETS emissions allowances due to the Partnership by the charterers of its vessels”. Dynagas logged average daily hire gross of commissions of nearly $70,000 per day per vessel in Q3 2025, down from around $72,800 per day per vessel for Q3 2024. Its fleet, consisting of six carriers with a combined capacity of approximately 914,000 cubic meters (32.28 million cubic feet), had utilization rates of 99.1 percent and 100 percent in Q3 2025 and Q3 2024 respectively. “Our fleet-wide time charter equivalent of $67,094 per day comfortably exceeded our cash breakeven for the quarter of approximately $47,500, allowing us to continue generating stable free cash flow”, said chief executive Tony Lauritzen. While revenue dropped, net income grew from $15.05 million for Q3 2024 to $18.66 million for Q3 2025. This was “mainly attributable to the increase of other income from insurance claims for damages incurred in prior years, the decrease in net interest and finance costs… [and] the decrease in general and administrative expenses”, Dynagas said.

Nvidia is flying high: Is there anything left to say?
Supply chain risks, he said, “are numerous in nature; however, it is clear that Nvidia is customer Number One with all of their suppliers, which drives an inordinate allocation of resources to ensure that production flows. Any disruption would likely be materials-based as opposed to a process or labor issue from their vendor base.” He added, “geopolitical events would be the most likely origin of any type of medium to long term disruption, think China-Taiwan, expansion of the Russia-Ukraine conflict, or escalation in the US-China trade war.” For lower impact events, he said, “[Nvidia] does a nice job of setting conservative shipment goals and targets for Wall Street, which they almost invariably beat quarter after quarter. This provides some cushion for them to absorb a labor, process, or geopolitical hiccup and still meet their stated goals. Shipment volumes may not exceed targets, but shipments would continue to flow; the spice must flow after all.” In a worst-case scenario where shipments are materially impacted, there is little recourse for enterprises that are not large-scale cloud consumers with clout with the limited providers in the space, Bickley added. Enterprises joining a ‘very long queue’ According to Sanchit Vir Gogia, the chief analyst at Greyhound Research, the Nvidia earnings call “confirms that the bottleneck in enterprise AI is no longer imagination or budget. It is capacity. Nvidia reported $57 billion in quarterly revenue, with more than $51 billion from data center customers alone, yet still described itself as supply-constrained at record levels.” Blackwell and Blackwell Ultra, he said, have become the default currency of AI infrastructure, yet even at a build rate of roughly 1,000 GPU racks per week, the company cannot meet demand.

Oil Slips as Peace Talks Advance
Oil fell after Ukrainian President Volodymyr Zelenskiy said he agreed to work on a peace plan drafted by the US and Russia aimed at ending the war in Ukraine. West Texas Intermediate fell 0.5% to settle above $59 a barrel on Thursday, paring some losses from intraday lows following Zelenskiy’s comments. A peace deal, if followed by the elimination of sanctions on Russian oil over its invasion of Ukraine, could unleash supply from the world’s third-largest producer. Oil markets are already staring down expectations for a surplus as OPEC+ and other producers ramp up output, with the commodity heading for a yearly loss amid concerns of a glut. The flurry of renewed activity to end the war comes just hours before US sanctions targeting Russia’s two largest oil companies, Rosneft PJSC and Lukoil PJSC, are due to come into effect. Russia has consistently found a way to sell its sanctioned oil through so-called “shadow” channels. But Moscow’s oil revenue is expected to stagnate amid falling global crude prices, posing a risk to its budget and broader economy. Still, any accord remains far from certain. The US has signaled to Zelenskiy that he should accept the deal drawn up in consultation with Moscow, according to a person familiar. But the plan outlines known Russian demands for concessions that Kyiv has repeatedly said are unacceptable and that have so far hindered any breakthrough in efforts to reach a ceasefire. “Notably, Ukraine is reiterating its openness to discuss ending the war, what’s uncertain is Russia’s real interest in ending the war,” said Rachel Ziemba, an adjunct senior fellow at the Center for a New American Security. “It remains to be seen if Russia is interested in ending the war or just in buying time to reduce more extensive sanctions.” Earlier in the day,

Russian Oil Giant Recommends Lowest Interim Dividends Since 2020
Russian oil giant Rosneft PJSC plans to pay the lowest interim dividends since the pandemic in 2020 as slumping crude prices, a stronger ruble and looming US sanctions bite. The board of directors of Russia’s largest state-controlled oil producer recommended to pay 11.56 rubles, $0.14, per share in interim dividends, according to a regulatory filing on Thursday. The recommendation comes just a day before unprecedented US sanctions are due to hit Rosneft and fellow Russian oil giant Lukoil PJSC. President Donald Trump’s administration last month stepped up restrictions on Russia’s oil industry, which together with gas accounts for about a quarter of the nation’s coffers. Rosneft’s earnings were already undermined by lower global oil prices amid fears of global surplus and much stronger ruble, with the appreciation of the nation’s currency meaning fewer rubles for each sold barrel. As a result, Rosneft’s net income shrank by 68% in the first half of the year from the same period in 2024. Rosneft, responsible for over a third of the nation’s oil output, has been paying dividends to the state since 1999, and to other shareholders since 2006 when it began trading publicly. The producer started to pay interim dividends in 2017, distributing half of its profit to shareholders. It scrapped the payouts for the first half of 2020 after posting a loss for the period. Lukoil’s board of directors will discuss recommendations on interim dividends on Friday. The oil producer initially planned to discuss nine-month payouts on Oct. 23, but postponed after US announced sanctions against the company on Oct. 22. Some Lukoil units on Friday received extensions to sanctions waivers that the Trump administration imposed. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate

CEO Denies Alleged TotalEnergies Link to Mozambique Crimes
TotalEnergies SE Chief Executive Officer Patrick Pouyanne rejected accusations the French energy firm has responsibilities in alleged killing of civilians four years ago at its liquefied natural gas project site in Mozambique. The company “is accused of having directly financed and materially supported” a group of armed forces, who “allegedly detained, tortured and killed dozens of civilians” at the LNG project in the north of the country, the European Center for Constitutional and Human Rights said in a statement Tuesday. It filed a criminal complaint over the allegations with the French National Anti-Terrorism Prosecutor this week. “We will defend ourselves and we will explain that all this has nothing to do with TotalEnergies,” Pouyanne said Wednesday on LCI television station. “We’ve done inquiries. We never managed to find evidence” of the allegations. The complaint comes as Total is on the verge of restarting construction of the project for the first time since the site was shut in 2021 due to an Islamist insurgency. Other global corporations operating in conflict areas have had cases brought against them including Holcim Ltd.’s Lafarge, on trial in France over operations in Syria, and a US ruling against BNP Paribas related to Sudan. The ECCHR complaint, citing an account by Politico, accuses Total of “complicity in war crimes” through a financial link to a Mozambican army unit that allegedly held civilians in shipping containers where dozens of them were tortured and killed at the project between July and September 2021. The company had evacuated the site earlier that year after an attack by insurgents and declared a force majeure. In 2023, Jean-Christophe Rufin, a former French ambassador hired by Total to review the security and humanitarian situation around the project, warned that the developers should stop paying bonuses to Mozambique’s security forces protecting the site. Total asked government authorities to

Powering the grid: embracing EPC for extra-high-voltage growth
Across the country, the demand for power is soaring. Hyperscale facilities, rising industrial load, extreme weather impacts and the loss of firm power capacity are pushing the grid harder than ever. Integration of renewable and distributed generation sources — often far from load centers — has been constrained as infrastructure build-out has lagged soaring demand. The response from the energy sector has been a boom in capital investment, significant new construction and rebuilds of aging infrastructure, aiming to dramatically increase capacity on the grid. The complexity and sheer scale of these projects pose serious risks. A streamlined approach to project delivery, utilizing the engineer-procure-construct (EPC) model, will be key to delivering at the rate the market demands. Accelerating the front end of projects, from concept to mobilization, offers opportunities to optimize through integrated delivery and collaborative contracting. Three important takeaways: Extra-high-voltage (EHV) projects, such as 765-kV transmission lines, are an important part of the sector’s response to modern challenges. Given limited practical experience with such projects, partnerships can better leverage that pool of experience. The portfolio-based approach required to scale extra-high-voltage infrastructure needs EPC delivery for maximum efficiency. The importance of collaboration and coordination is magnified for region-spanning efforts. Bridging Experience Gaps Solving capacity challenges means significant capital investment is essential, particularly in EHV transmission infrastructure. However, most of the limited 765-kV infrastructure in the U.S. was built decades ago. The number of people in today’s workforce who have hands-on experience with design, construction or commissioning at that scale is small and shrinking. The supply of experienced workers — especially field personnel, skilled linemen and engineering leadership — for high-voltage projects is a major constraint in an already-tight labor market. The risk created by that lack of bench strength requires trust among all stakeholders for the projects. Intentional knowledge transfer

Dynagas Q3 Revenue Down YoY
Dynagas LNG Partners LP on Thursday reported $38.89 million in revenue for the third quarter, down from $39.07 million for the same three-month period last year. The decrease brought down net profit adjusted for nonrecurring items from $14.48 million for Q3 2024 to $14.23 million, or $0.36 per share, for Q3 2025, the Athens-based owner and operator of liquefied natural gas (LNG) carriers said in its quarterly report. The revenue fall was driven by “the decrease of the daily hire rate of the Arctic Aurora in the three-month period ending September 30, 2025, and the decrease in revenue earning days of the Yenisei River due to unscheduled repairs”, Dynagas said. “The above decrease in voyage revenues was partially offset by the non-cash effect of the amortization of deferred revenues and the value of the EU ETS emissions allowances due to the Partnership by the charterers of its vessels”. Dynagas logged average daily hire gross of commissions of nearly $70,000 per day per vessel in Q3 2025, down from around $72,800 per day per vessel for Q3 2024. Its fleet, consisting of six carriers with a combined capacity of approximately 914,000 cubic meters (32.28 million cubic feet), had utilization rates of 99.1 percent and 100 percent in Q3 2025 and Q3 2024 respectively. “Our fleet-wide time charter equivalent of $67,094 per day comfortably exceeded our cash breakeven for the quarter of approximately $47,500, allowing us to continue generating stable free cash flow”, said chief executive Tony Lauritzen. While revenue dropped, net income grew from $15.05 million for Q3 2024 to $18.66 million for Q3 2025. This was “mainly attributable to the increase of other income from insurance claims for damages incurred in prior years, the decrease in net interest and finance costs… [and] the decrease in general and administrative expenses”, Dynagas said.

Nvidia is flying high: Is there anything left to say?
Supply chain risks, he said, “are numerous in nature; however, it is clear that Nvidia is customer Number One with all of their suppliers, which drives an inordinate allocation of resources to ensure that production flows. Any disruption would likely be materials-based as opposed to a process or labor issue from their vendor base.” He added, “geopolitical events would be the most likely origin of any type of medium to long term disruption, think China-Taiwan, expansion of the Russia-Ukraine conflict, or escalation in the US-China trade war.” For lower impact events, he said, “[Nvidia] does a nice job of setting conservative shipment goals and targets for Wall Street, which they almost invariably beat quarter after quarter. This provides some cushion for them to absorb a labor, process, or geopolitical hiccup and still meet their stated goals. Shipment volumes may not exceed targets, but shipments would continue to flow; the spice must flow after all.” In a worst-case scenario where shipments are materially impacted, there is little recourse for enterprises that are not large-scale cloud consumers with clout with the limited providers in the space, Bickley added. Enterprises joining a ‘very long queue’ According to Sanchit Vir Gogia, the chief analyst at Greyhound Research, the Nvidia earnings call “confirms that the bottleneck in enterprise AI is no longer imagination or budget. It is capacity. Nvidia reported $57 billion in quarterly revenue, with more than $51 billion from data center customers alone, yet still described itself as supply-constrained at record levels.” Blackwell and Blackwell Ultra, he said, have become the default currency of AI infrastructure, yet even at a build rate of roughly 1,000 GPU racks per week, the company cannot meet demand.

Oil Slips as Peace Talks Advance
Oil fell after Ukrainian President Volodymyr Zelenskiy said he agreed to work on a peace plan drafted by the US and Russia aimed at ending the war in Ukraine. West Texas Intermediate fell 0.5% to settle above $59 a barrel on Thursday, paring some losses from intraday lows following Zelenskiy’s comments. A peace deal, if followed by the elimination of sanctions on Russian oil over its invasion of Ukraine, could unleash supply from the world’s third-largest producer. Oil markets are already staring down expectations for a surplus as OPEC+ and other producers ramp up output, with the commodity heading for a yearly loss amid concerns of a glut. The flurry of renewed activity to end the war comes just hours before US sanctions targeting Russia’s two largest oil companies, Rosneft PJSC and Lukoil PJSC, are due to come into effect. Russia has consistently found a way to sell its sanctioned oil through so-called “shadow” channels. But Moscow’s oil revenue is expected to stagnate amid falling global crude prices, posing a risk to its budget and broader economy. Still, any accord remains far from certain. The US has signaled to Zelenskiy that he should accept the deal drawn up in consultation with Moscow, according to a person familiar. But the plan outlines known Russian demands for concessions that Kyiv has repeatedly said are unacceptable and that have so far hindered any breakthrough in efforts to reach a ceasefire. “Notably, Ukraine is reiterating its openness to discuss ending the war, what’s uncertain is Russia’s real interest in ending the war,” said Rachel Ziemba, an adjunct senior fellow at the Center for a New American Security. “It remains to be seen if Russia is interested in ending the war or just in buying time to reduce more extensive sanctions.” Earlier in the day,

Russian Oil Giant Recommends Lowest Interim Dividends Since 2020
Russian oil giant Rosneft PJSC plans to pay the lowest interim dividends since the pandemic in 2020 as slumping crude prices, a stronger ruble and looming US sanctions bite. The board of directors of Russia’s largest state-controlled oil producer recommended to pay 11.56 rubles, $0.14, per share in interim dividends, according to a regulatory filing on Thursday. The recommendation comes just a day before unprecedented US sanctions are due to hit Rosneft and fellow Russian oil giant Lukoil PJSC. President Donald Trump’s administration last month stepped up restrictions on Russia’s oil industry, which together with gas accounts for about a quarter of the nation’s coffers. Rosneft’s earnings were already undermined by lower global oil prices amid fears of global surplus and much stronger ruble, with the appreciation of the nation’s currency meaning fewer rubles for each sold barrel. As a result, Rosneft’s net income shrank by 68% in the first half of the year from the same period in 2024. Rosneft, responsible for over a third of the nation’s oil output, has been paying dividends to the state since 1999, and to other shareholders since 2006 when it began trading publicly. The producer started to pay interim dividends in 2017, distributing half of its profit to shareholders. It scrapped the payouts for the first half of 2020 after posting a loss for the period. Lukoil’s board of directors will discuss recommendations on interim dividends on Friday. The oil producer initially planned to discuss nine-month payouts on Oct. 23, but postponed after US announced sanctions against the company on Oct. 22. Some Lukoil units on Friday received extensions to sanctions waivers that the Trump administration imposed. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate

NuEnergy Completes Drilling for ‘Early Gas Sales’ Project in Indonesia
NuEnergy Gas Ltd said it had completed drilling for the fourth and final well in its “Early Gas Sales” project under the initial development plan for the Tanjung Enim coalbed methane (CBM) production sharing contract (PSC) in Indonesia. “Gas shows were observed at surface via surface logging equipment, confirming the presence of methane across multiple seams”, the Australian company said in a stock filing. The TE-B01-003 well, drilled 451 meters (1,479.66 feet) deep, intersected five coal seams at depths ranging between 299 and 419 meters, according to NuEnergy. “NuEnergy has installed a progressive cavity pump system for the TE-B01-003 well and preparations are now underway to commence dewatering – a key step toward establishing stable gas flow and optimizing well performance”, the company said. “Gas will be gathered at the surface facility and delivered to the gas processing facility upon reaching target production levels”. It added, “Pursuant to the signed heads of agreement with PT Perusahaan Gas Negara Tbk (PGN), gas produced from the drilled wells, TE-B06-001, TE-B06-002, TE-B06-003 well and the TE-B01-003 well, will be delivered via an infield pipeline to PGN’s processing and distribution facility”. The Early Gas Sales project will sell one million standard cubic feet a day (MMscfd) to Indonesian state-owned gas distributor PGN, toward the 25-MMscfd initial plan for the Tanjung Enim license, according to NuEnergy. On September 8, it announced approval from the Energy and Mineral Resources Ministry for the one-MMscfd sale through its subsidiary Dart Energy (Tanjung Enim) Pte Ltd (DETE). “With the gas allocation approval now secured, DETE will proceed with finalizing the Gas Sale and Purchase Agreement with PGN”, NuEnergy said then. Meanwhile the bigger Tanjung Enim Plan of Development (POD) 1 was approved June 2021 “under a gross split scheme which will allow the PSC to proceed field development, surface facility

Hammerfest LNG Workers Got Ill from MEG Tanks Venting, Equinor Finds
Dozens of workers at the Hammerfest LNG export terminal on the Norwegian island of Melkoya got ill from exposure to vented gas from tanks storing monoethylene glycol (MEG) during the one year to summer 2025, a probe by operator Equinor ASA has revealed. “We must acknowledge that we should have gone more in-depth to identify the causes when the first incidents of exposure occurred at Melkoya last summer”, Christina Dreetz, Equinor senior vice president for onshore plants, said in an online statement by the majority state-owned company. “Through measures implemented both during and after the investigation, we now have routines that enable us to manage risk more effectively”. The statement reported, “During a period of high activity at Hammerfest LNG, from summer 2024 to summer 2025, 37 people sought medical attention on four different occasions and nine people were absent from work following the exposure incidents. Some experienced health issues such as headaches, nausea and dizziness, while others noticed nothing. “Reactions to vented gas and the associated odor is a cause of the various health issues experienced by personnel, but it is unlikely that the exposure has led to long-term health issues”. Equinor’s investigation “points to insufficient risk assessment before the project start-up and follow-up as the reason why several incidents occurred during the one-year period”, the statement said. Equinor found that venting from the tanks housing MEG, a chemical used to prevent hydrate formation in pipelines from the Snohvit field to the liquefaction facility, had been the main cause of the exposure incidents. The tanks are designed so that vented gas consists of nitrogen and water vapor, according to Equinor. “Changes in the well stream in the MEG tanks or temperature fluctuations have contributed to changes in the composition of the vented gas. This has resulted in odors and,

Oil Falls on Rising Fuel Stocks
Crude retreated after a US government report showed rising inventories of fuel and other refined products, easing supply concerns while investors tracked stalled diplomatic attempts to end Russia’s war on Ukraine. West Texas Intermediate dropped 2.1% to trade above $59 a barrel, the biggest loss in a week. Ukrainian President Volodymyr Zelenskiy arrived in Turkey to “reinvigorate negotiations,” raising eyebrows among investors that had all but written off a deescalation of a conflict that has spurred restrictions on Russia’s energy sector. An Axios report that Washington has been working in consultation with the Kremlin to draft a new plan also eased supply concerns, though Moscow denied any talks. US envoy Steve Witkoff was expected to meet Ukrainian leaders in Turkey on Wednesday but postponed his trip, Axios reported. The developments may help cushion the impact of US sanctions against Russia’s two biggest oil producers, Rosneft PJSC and Lukoil PJSC, which are set to kick in within days. The US Treasury claimed the restrictions are already undermining Russia’s funding capacity. That’s particularly visible in surging diesel-market tightness, in which Russia is a significant player, raising concerns about shortages of heating fuel just ahead of winter. Some of those fears were allayed after the US Energy Information Administration reported on Wednesday that gasoline and distillate inventories in the US expanded for the first time in more than a month. Heating oil futures dropped as much as 5.2% after touching the highest since April 2024 on Tuesday, leading the energy complex lower. “Higher refining activity and lower implied demand for both helped gasoline and distillate inventories rise, albeit modestly for distillates,” said Matt Smith, Americas lead oil analyst at Kpler. The 3.4 million-barrel decline in US crude inventories last week was smaller than the American Petroleum Institute’s 4.4 million estimate, helping temper some

Some load forecasts using ‘unrealistically high load factors’: Grid Strategies VP
Dive Brief: Significant load growth is likely to arrive as forecast, but uncertainties associated with data centers are complicating load growth estimation, as are “unrealistically high load factors for the new large loads” in some load forecasts, said John Wilson, a vice president at Grid Strategies. Wilson is one of the lead authors of a November report which found the five-year forecast of U.S. utility peak load growth has increased from 24 GW to 166 GW over the past three years — by more than a factor of six. The report concluded that the “data center portion of utility load forecasts is likely overstated by roughly 25 GW,” based on reports from market analysts. Dive Insight: Despite projected load growth, many utility third-quarter earnings reports have shown relatively flat deliveries of electricity. Wilson said he thinks a definitive answer as to whether or not load growth is materializing will come next year. “If [large loads] start to get put off or canceled, and the load doesn’t come in, then we could see a lot of revisions to forecasts that are really large,” he said. The utility forecast for added data center load by 2030 is 90 GW, “nearly 10% of forecast peak load,” the report said, but “data center market analysts indicate that data center growth is unlikely to require much more than 65 GW through 2030.” Wilson said he thinks the overestimation could be due “simply to the challenge that utilities have in understanding whether a potential customer is pursuing just the site in their service area, or whether they’re pursuing multiple sites and they’re not planning on building out all of them.” This is information that utilities haven’t typically gathered, he said, although he’s seeing a trend toward utilities making those questions part of their application process. Wilson said another factor

Winter peak demand is rising faster than resource additions: NERC
Listen to the article 4 min This audio is auto-generated. Please let us know if you have feedback. Dive Brief: Peak demand on the bulk power system will be 20 GW higher this winter than last, but total resources to meet the peak have only increased 9.4 GW, according to a report released Tuesday by the North American Electric Reliability Corp. Despite the mismatch, all regions of the bulk power system should have sufficient resources for expected peak demand this winter, NERC said in its 2025-2026 Winter Reliability Assessment. However, several regions could face challenges in the event of extreme weather. There have been 11 GW of batteries and 8 GW of demand response resources added to the bulk power system since last winter, NERC said. Solar, thermal and hydro have also seen small additions, but contributions from wind resources are 14 GW lower following capacity accounting changes in some markets. Dive Insight: NERC officials described a mixed bag heading into the winter season. “The bulk power system is entering another winter with pockets of elevated risk, and the drivers are becoming more structural than seasonal,” said John Moura, NERC’s director of reliability assessments and performance analysis. “We’re seeing steady demand growth, faster than previous years, landing on a system that’s still racing to build new resources, navigating supply chain constraints and integrating large amounts of variable, inverter-based generation.” Aggregate peak demand across NERC’s footprint will be 20 GW, or 2.5%, higher than last winter. “Essentially, you have a doubling between the last several successive [winter reliability assessments],” said Mark Olson, NERC’s manager of reliability assessment. Nearly all of NERC’s assessment areas “are reporting year-on-year demand growth with some forecasting increases near 10%,” the reliability watchdog said. The U.S. West, Southeast and Mid-Atlantic — areas with significant data center development — have

Energy Secretary Strengthens Midwest Grid Reliability Heading into Winter Months
WASHINGTON—U.S. Secretary of Energy Chris Wright issued an emergency order to address critical grid reliability issues facing the Midwestern region of the United States heading into the cold winter months. The emergency order directs the Midcontinent Independent System Operator (MISO), in coordination with Consumers Energy, to ensure that the J.H. Campbell coal-fired power plant in West Olive, Michigan remains available for operation and to take every step to minimize costs for the American people. The Campbell Plant was scheduled to shut down on May 31, 2025 — 15 years before the end of its scheduled design life. “Because of the last administration’s dangerous energy subtraction policies targeting reliable and affordable energy sources, the United States continues to face an energy emergency,” said Energy Secretary Wright. “The Trump administration will keep taking action to reverse these energy subtraction policies, lowering energy costs and minimizing the risks of blackouts. Americans deserve access to affordable, reliable and secure energy regardless of whether the wind is blowing or the sun is shining, especially in dangerously cold weather.” Since the Department of Energy’s (DOE) original order issued on May 23, the Campbell plant has proven critical to MISO’s operations, operating regularly during periods of high energy demand and low levels of intermittent energy production. A subsequent order was issued on August 20, 2025. As outlined in DOE’s Resource Adequacy Report, power outages could increase by 100 times in 2030 if the U.S. continues to take reliable power offline. The emergency conditions that led to the issuance of the original orders persist.MISO’s service area will continue to face emergency conditions both in the near and long term. Two recent winter studies (2024 – 2025 NERC Winter Reliability Assessment and the 2023 – 2024 NERC Winter Reliability Assessment) have assessed the MISO assessment area as an elevated risk, with the “potential

AI means the end of internet search as we’ve known it
We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way. But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way. Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results. More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. People are worried about what these new LLM-powered results will mean for our fundamental shared reality. It could spell the end of the canonical answer. It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest). People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see.
Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate. Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know? In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good. Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed. And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first. But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad.
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing. Google CEO Sundar Pichai describes AI Overviews as “one of the most positive changes we’ve done to search in a long, long time.”JENS GYARMATY/LAIF/REDUX For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search. “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly. It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be. But once you’ve used AI Overviews a bit, you realize they are different. Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web.
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like a querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even showed off the ability to query live video. When it doesn’t have an answer, an AI model can confidently spew back a response anyway. For Google, this could be a real problem. For the rest of us, it could actually be dangerous. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.” The new search Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work. Search Engine GoogleThe search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. What it’s good at Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material. PerplexityPerplexity is a conversational search engine that uses third-party largelanguage models from OpenAI and Anthropic to answer queries. Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content. ChatGPTWhile Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent. When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. “You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.” But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result? Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend. “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says. Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.” “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew? A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be a better super-assistant for you.” Kevin Weil, chief product officer, OpenAI According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience. Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does. “For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search, Liz Reid.WINNI WINTERMEYER/REDUX Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed! The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when that comes to its own answers. It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge. The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed. “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. “Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already showed off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information. In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses. But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today. These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.

Subsea7 Scores Various Contracts Globally
Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Driving into the future
Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more. We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen. Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake. What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story.
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa. Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Oil Holds at Highest Levels Since October
Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither have responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

What to expect from NaaS in 2025
Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says. 2. Frameworks help drive adoption Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps. 3. The rise of campus/LAN NaaS Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

UK battery storage industry ‘back on track’
UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW of electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process which expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

The Download: surviving extreme temperatures, and the big whale-wind turbine conspiracy
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The quest to find out how our bodies react to extreme temperatures Climate change is subjecting vulnerable people to temperatures that push their limits. In 2023, about 47,000 heat-related deaths are believed to have occurred in Europe. Researchers estimate that climate change could add an extra 2.3 million European heat deaths this century. That’s heightened the stakes for solving the mystery of just what happens to bodies in extreme conditions.While we broadly know how people thermoregulate, the science of keeping warm or cool is mottled with blind spots. Researchers around the world are revising rules about when extremes veer from uncomfortable to deadly. Their findings change how we should think about the limits of hot and cold—and how to survive in a new world. Read the full story. —Max G.Levy
This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land.
Whales are dying. Don’t blame wind turbines. Whale deaths have become a political flashpoint. There are currently three active mortality events for whales in the Atlantic, meaning clusters of deaths that experts consider unusual. And Republican lawmakers, conservative think tanks, and—most notably—President Donald Trump (a longtime enemy of wind power) are making dubious claims that offshore wind farms are responsible.But any finger-pointing at wind turbines for whale deaths ignores the fact that whales have been washing up on beaches since long before the giant machines were rooted in the ocean floor. This is something that has always happened. And the scientific consensus is clear: There’s no evidence that wind farms are the cause of recent increases in whale deaths. Read the full story. —Casey Crownhart This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here. The State of AI: Energy is king, and the US is falling behind In the age of AI, the biggest barrier to progress isn’t money but energy. That should be particularly worrying in the US, where massive data centers are waiting to come online. It doesn’t look as if the country will build the steady power supply or infrastructure needed to serve them all.It wasn’t always like this. For about a decade before 2020, data centers were able to offset increased demand with efficiency improvements. Now, though, electricity demand is ticking up in the US, with billions of queries to popular AI models each day—and efficiency gains aren’t keeping pace.If we want AI to have the chance to deliver on big promises without driving electricity prices sky-high for the rest of us, the US needs to learn some lessons from the rest of the world on energy abundance. Just look at China. Read the full story. —Casey Crownhart & Pilita Clark
This is from The State of AI, our subscriber-only collaboration between the Financial Times & MIT Technology Review examining the ways in which AI is reshaping global power.Every Monday for the next four weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power. While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to the magazine are able to read the whole thing. Sign up here to receive future editions every Monday. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 How China narrowed its AI divide with the USAmerica still has a clear lead—but for how long? (WSJ $)+ The AI boom won’t offset tariffs and America’s immigration crackdown forever. (FT $)+ How quickly is AI likely to progress really? (Economist $)+ Is China about to win the AI race? (MIT Technology Review) 2 Anthropic is due to turn a profit much faster than OpenAIThe two companies are taking very different approaches to making money. (WSJ $)+ OpenAI has lured Intel’s AI chief away. (Bloomberg $) 3 The EU is setting up a new intelligence sharing unitIt’s a bid to shore up intel in the wake of Donald Trump’s plans to reduce security support for Europe. (FT $) 4 Trump officials are poised to suggest oil drilling off the coast of CaliforniaThat’s likely to rile the state’s politicians and leaders. (WP $)+ What role should oil and gas companies play in climate tech? (MIT Technology Review)
5 America’s cyber defenses are poorRepeated cuts and mass layoffs are making it harder to protect the nation. (The Verge) 6 China is on track to hit its peak CO2 emissions target earlyAlthough it’s likely to miss its goal for cutting carbon intensity. (The Guardian)+ World leaders are heading to COP30 in Brazil this week. (New Yorker $)
7 OpenAI cannot use song lyrics without a licenseThat’s what a German court has decided, after siding with a music rights society. (Reuters)+ OpenAI is no stranger to legal proceedings. (The Atlantic $)+ AI is coming for music. (MIT Technology Review) 8 A small Michigan town is fighting a proposed AI data centerThe planned center is part of a collaboration between the University of Michigan and nuclear weapons scientists. (404 Media)+ Here’s where America’s data centers should be built instead. (Wired $)+ Communities in Latin America are pushing back, too. (The Guardian)+ Should we be moving data centers to space? (MIT Technology Review)9 AI models can’t tell the time ⏰Analog clocks leave them completely stumped. (IEEE Spectrum) 10 ChatGPT is giving daters the ickThese refuseniks don’t want anything to do with AI, or love interests who use it. (The Guardian) Quote of the day “I never imagined that making a cup of tea or obtaining water, antibiotics, or painkillers would require such tremendous effort.”
—An anonymous member of startup accelerator Gaza Sky Geeks tells Rest of World about the impact the war has had on them. One more thing How Rust went from a side project to the world’s most-loved programming languageMany software projects emerge because—somewhere out there—a programmer had a personal problem to solve.That’s more or less what happened to Graydon Hoare. In 2006, Hoare was a 29-year-old computer programmer working for Mozilla. After a software crash broke the elevator in his building, he set about designing a new computer language; one that he hoped would make it possible to write small, fast code without memory bugs.That language developed into Rust, one of the hottest new languages on the planet. But while it isn’t unusual for someone to make a new computer language, it’s incredibly rare for one to take hold and become part of the programming pantheon. How did Rust do it? Read the full story.

Reimagining cybersecurity in the era of AI and quantum
In partnership withCISCO AI and quantum technologies are dramatically reconfiguring how cybersecurity functions, redefining the speed and scale with which digital defenders and their adversaries can operate. The weaponization of AI tools for cyberattacks is already proving a worthy opponent to current defenses. From reconnaissance to ransomware, cybercriminals can automate attacks faster than ever before with AI. This includes using generative AI to create social engineering attacks at scale, churning out tens of thousands of tailored phishing emails in seconds, or accessing widely available voice cloning software capable of bypassing security defenses for as little as a few dollars. And now, agentic AI raises the stakes by introducing autonomous systems that can reason, act, and adapt like human adversaries. But AI isn’t the only force shaping the threat landscape. Quantum computing has the potential to seriously undermine current encryption standards if developed unchecked. Quantum algorithms can solve the mathematical problems underlying most modern cryptography, particularly public-key systems like RSA and Elliptic Curve, widely used for secure online communication, digital signatures, and cryptocurrency. “We know quantum is coming. Once it does, it will force a change in how we secure data across everything, including governments, telecoms, and financial systems,” says Peter Bailey, senior vice president and general manager of Cisco’s security business.
“Most organizations are understandably focused on the immediacy of AI threats,” says Bailey. “Quantum might sound like science fiction, but those scenarios are coming faster than many realize. It’s critical to start investing now in defenses that can withstand both AI and quantum attacks.” Critical to this defense is a zero trust approach to cybersecurity, which assumes no user or device can be inherently trusted. By enforcing continuous verification, zero trust enables constant monitoring and ensures that any attempts to exploit vulnerabilities are quickly detected and addressed in real time. This approach is technology-agnostic and creates a resilient framework even in the face of an ever-changing threat landscape.
Putting up AI defenses AI is lowering the barrier to entry for cyberattacks, enabling hackers even with limited skills or resources to infiltrate, manipulate, and exploit the slightest digital vulnerability. Nearly three-quarters (74%) of cybersecurity professionals say AI-enabled threats are already having a significant impact on their organization, and 90% anticipate such threats in the next one to two years. “AI-powered adversaries have advanced techniques and operate at machine speed,” says Bailey. “The only way to keep pace is to use AI to automate response and defend at machine speed.” To do this, Bailey says, organizations must modernize systems, platforms, and security operations to automate threat detection and response—processes that have previously relied on human rule-writing and reaction times. These systems must adapt dynamically as environments evolve and criminal tactics change. At the same time, companies must strengthen the security of their AI models and data to reduce exposure to manipulation from AI-enabled malware. Such risks could include, for instance, prompt injections, where a malicious user crafts a prompt to manipulate an AI model into performing unintended actions, bypassing its original instructions and safeguards. Agentic AI further ups the ante, with hackers able to use AI agents to automate attacks and make tactical decisions without constant human oversight. “Agentic AI has the potential to collapse the cost of the kill chain,” says Bailey. “That means everyday cybercriminals could start executing campaigns that today only well-funded espionage operations can afford.” Organizations, in turn, are exploring how AI agents can help them stay ahead. Nearly 40% of companies expect agentic AI to augment or assist teams over the next 12 months, especially in cybersecurity, according to Cisco’s 2025 AI Readiness Index. Use cases include AI agents trained on telemetry, which can identify anomalies or signals from machine data too disparate and unstructured to be deciphered by humans. Calculating the quantum threat As many cybersecurity teams focus on the very real AI-driven threat, quantum is waiting on the sidelines. Almost three-quarters (73%) of US organizations surveyed by KPMG say they believe it is only a matter of time before cybercriminals are using quantum to decrypt and disrupt today’s cybersecurity protocols. And yet, the majority (81%) also admit they could do more to ensure that their data remains secure.
Companies are right to be concerned. Threat actors are already carrying out harvest now, decrypt later attacks, stockpiling sensitive encrypted data to crack once quantum technology matures. Examples include state-sponsored actors intercepting government communications and cybercriminal networks storing encrypted internet traffic or financial records. Large technology companies are among the first to roll out quantum defenses. For example, Apple is using cryptography protocol PQ3 to defend against harvest now, decrypt later attacks on its iMessage platform. Google is testing post-quantum cryptography (PQC)—which is resistant to attacks from both quantum and classical computers—in its Chrome browser. And Cisco “has made significant investments in quantum-proofing our software and infrastructure,” says Bailey. “You’ll see more enterprises and governments taking similar steps over the next 18 to 24 months,” he adds. As regulations like the US Quantum Computing Cybersecurity Preparedness Act lay out requirements for mitigating against quantum threats, including standardized PQC algorithms by the National Institute of Standards and Technology, a wider range of organizations will start preparing their own quantum defenses. For organizations beginning that journey, Bailey outlines two key actions. First, establish visibility. “Understand what data you have and where it lives,” he says. “Take inventory, assess sensitivity, and review your encryption keys, rotating out any that are weak or outdated.” Second, plan for migration. “Next, assess what it will take to support post-quantum algorithms across your infrastructure. That means addressing not just the technology, but also the process and people implications,” Bailey says. Adopting proactive defense Ultimately, the foundation for building resilience against both AI and quantum is a zero trust approach, says Bailey. By embedding zero trust access controls across users, devices, business applications, networks, and clouds, this approach grants only the minimum access required to complete a task and enables continuous monitoring. It can also minimize the attack surface by confining a potential threat to an isolated zone, preventing it from accessing other critical systems. Into this zero trust architecture, organizations can integrate specific measures to defend against AI and quantum risks. For instance, quantum-immune cryptography and AI-powered analytics and security tools can be used to identify complex attack patterns and automate real-time responses. “Zero trust slows down attacks and builds resilience,” Bailey says. “It ensures that even if a breach occurs, the crown jewels stay protected and operations can recover quickly.”
Ultimately, companies should not wait for threats to emerge and evolve. They must get ahead now. “This isn’t a what-if scenario; it’s a when,” says Bailey. “Organizations that invest early will be the ones setting the pace, not scrambling to catch up.” This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: busting weather myths, and AI heart attack prediction
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Why it’s so hard to bust the weather control conspiracy theory It was October 2024, and Hurricane Helene had just devastated the US Southeast. Representative Marjorie Taylor Greene of Georgia found an abstract target on which to pin the blame: “Yes they can control the weather,” she posted on X. “It’s ridiculous for anyone to lie and say it can’t be done.”She was repeating what’s by now a pretty familiar and popular conspiracy theory: that shadowy forces are out there, wielding technology to control the weather and wreak havoc on their enemies. This preposterous claim has grown louder and more common in recent years, especially after extreme weather strikes.But here’s the thing: While Greene and other believers are not correct, this conspiracy theory—like so many others—holds a kernel of much more modest truth. Read the full story. —Dave Levitan
This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here.
AI could predict who will have a heart attack For all the modern marvels of cardiology, we struggle to predict who will have a heart attack. Many people never get screened at all. Now, startups are applying AI algorithms to screen millions of CT scans for early signs of heart disease.This technology could be a breakthrough for public health, applying an old tool to uncover patients whose high risk for a heart attack is hiding in plain sight. But it remains unproven at scale, while raising thorny questions about implementation and even how we define disease. Read the full story. —Vishal Khetpal This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 Spending on AI may be to blame for all those tech layoffsAI isn’t necessarily replacing jobs, but spending on it is gobbling up budgets. (Fast Company $)+ Junior roles are likely to be the first on the chopping block. (FT $)+ Are the crazy sums that businesses are sinking into AI sustainable? (WP $)+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)
2 Anti-vaccine activists gathered in Austin over the weekendThey celebrated RFK Jr’s rise and outlined their goals—including eliminating school vaccine mandates. (WP $)+ We’re on the verge of stopping the next pandemic. But will we? (Vox)+ How conspiracy theories infiltrated the doctor’s office. (MIT Technology Review) 3 People who’ve experienced AI-induced delusions are forming a movementThey’re pushing for legal action against chatbot makers. (Bloomberg $)+ The looming crackdown on AI companionship. (MIT Technology Review) 4 AI-generated clips of women being strangled are flooding social mediaMany of them appear to have been created using OpenAI’s Sora 2. (404 Media)5 Tech leaders are obsessed with bioengineering babiesThey’re not allowed to, but they’re not letting a little thing like ethics get in the way. (WSJ $)+ The race to make the perfect baby is creating an ethical mess. (MIT Technology Review)6 Apple has removed two popular gay dating apps in China The country ordered it to take down Blued and Finka from its app. (Wired $) 7 The UK government is worried China could turn off its buses remotelyIt fears hundreds of Chinese-made electric buses on British roads could be at risk. (FT $) 8 How AI is changing the world’s newsrooms 📰It’s brilliant at analyzing large data sets—but shouldn’t be used to write stories. (NYT $) 9 How to contain an invasive speciesExperts argue that too much red tape is getting in the way. (Undark)+ The weeds are winning. (MIT Technology Review)10 The world’s largest electric ship is charging up 🚢Once it’s ready to go, it’ll serve as a ferry in 90 minute bursts. (IEEE Spectrum) Quote of the day
“We would move heaven and Earth, pun intended, to try to get to the Moon sooner.” —Dave Limp, CEO of Blue Origin, says the company is raring to work with NASA to get humans back on the Moon, Ars Technica reports.
One more thing Design thinking was supposed to fix the world. Where did it go wrong?In the 1990s, a six-step methodology for innovation called design thinking started to grow in popularity. Key to its spread was its replicable aesthetic, represented by the Post-it note: a humble square that anyone can use in infinite ways.But in recent years, for a number of reasons, the shine of design thinking has been wearing off. Critics have argued that its short-term focus on novel and naive ideas results in unrealistic and ungrounded recommendations.Today, some groups are working to reform both design thinking’s principles and its methodologies. These new efforts seek a set of design tools capable of equitably serving diverse communities and solving diverse problems well into the future. It’s a much more daunting—and crucial—task than design thinking’s original remit. Read the full story. —Rebecca Ackermann We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.) + These tree-dwelling toads give birth to live young—who knew?!+ Now’s the time to practice your baking skills ahead of Thanksgiving.+ Younguk Yi’s glitching paintings are a lot of fun.+ Place your bets! This fun game follows three balls in a race to the bottom, but who will win?

The State of AI: Energy is king, and the US is falling behind
Welcome to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power. This week, Casey Crownhart, senior reporter for energy at MIT Technology Review and Pilita Clark, FT’s columnist, consider how China’s rapid renewables buildout could help it leapfrog on AI progress. Casey Crownhart writes: In the age of AI, the biggest barrier to progress isn’t money but energy. That should be particularly worrying here in the US, where massive data centers are waiting to come online, and it doesn’t look as if the country will build the steady power supply or infrastructure needed to serve them all.
It wasn’t always like this. For about a decade before 2020, data centers were able to offset increased demand with efficiency improvements. Now, though, electricity demand is ticking up in the US, with billions of queries to popular AI models each day—and efficiency gains aren’t keeping pace. With too little new power capacity coming online, the strain is starting to show: Electricity bills are ballooning for people who live in places where data centers place a growing load on the grid. If we want AI to have the chance to deliver on big promises without driving electricity prices sky-high for the rest of us, the US needs to learn some lessons from the rest of the world on energy abundance. Just look at China.
China installed 429 GW of new power generation capacity in 2024, more than six times the net capacity added in the US during that time. China still generates much of its electricity with coal, but that makes up a declining share of the mix. Rather, the country is focused on installing solar, wind, nuclear, and gas at record rates. The US, meanwhile, is focused on reviving its ailing coal industry. Coal-fired power plants are polluting and, crucially, expensive to run. Aging plants in the US are also less reliable than they used to be, generating electricity just 42% of the time, compared with a 61% capacity factor in 2014. It’s not a great situation. And unless the US changes something, we risk becoming consumers as opposed to innovators in both energy and AI tech. Already, China earns more from exporting renewables than the US does from oil and gas exports. Building and permitting new renewable power plants would certainly help, since they’re currently the cheapest and fastest to bring online. But wind and solar are politically unpopular with the current administration. Natural gas is an obvious candidate, though there are concerns about delays with key equipment. One quick fix would be for data centers to be more flexible. If they agreed not to suck electricity from the grid during times of stress, new AI infrastructure might be able to come online without any new energy infrastructure. One study from Duke University found that if data centers agree to curtail their consumption just 0.25% of the time (roughly 22 hours over the course of the year), the grid could provide power for about 76 GW of new demand. That’s like adding about 5% of the entire grid’s capacity without needing to build anything new. But flexibility wouldn’t be enough to truly meet the swell in AI electricity demand. What do you think, Pilita? What would get the US out of these energy constraints? Is there anything else we should be thinking about when it comes to AI and its energy use?
Pilita Clark responds: I agree. Data centers that can cut their power use at times of grid stress should be the norm, not the exception. Likewise, we need more deals like those giving cheaper electricity to data centers that let power utilities access their backup generators. Both reduce the need to build more power plants, which makes sense regardless of how much electricity AI ends up using. This is a critical point for countries across the world, because we still don’t know exactly how much power AI is going to consume. Forecasts for what data centers will need in as little as five years’ time vary wildly, from less than twice today’s rates to four times as much. This is partly because there’s a lack of public data about AI systems’ energy needs. It’s also because we don’t know how much more efficient these systems will become. The US chip designer Nvidia said last year that its specialized chips had become 45,000 times more energy efficient over the previous eight years. Moreover, we have been very wrong about tech energy needs before. At the height of the dot-com boom in 1999, it was erroneously claimed that the internet would need half the US’s electricity within a decade—necessitating a lot more coal power. Still, some countries are clearly feeling the pressure already. In Ireland, data centers chew up so much power that new connections have been restricted around Dublin to avoid straining the grid. Some regulators are eyeing new rules forcing tech companies to provide enough power generation to match their demand. I hope such efforts grow. I also hope AI itself helps boost power abundance and, crucially, accelerates the global energy transition needed to combat climate change. OpenAI’s Sam Altman said in 2023 that “once we have a really powerful super intelligence, addressing climate change will not be particularly difficult.”
The evidence so far is not promising, especially in the US, where renewable projects are being axed. Still, the US may end up being an outlier in a world where ever cheaper renewables made up more than 90% of new power capacity added globally last year. Europe is aiming to power one of its biggest data centers predominantly with renewables and batteries. But the country leading the green energy expansion is clearly China.
The 20th century was dominated by countries rich in the fossil fuels whose reign the US now wants to prolong. China, in contrast, may become the world’s first green electrostate. If it does this in a way that helps it win an AI race the US has so far controlled, it will mark a striking chapter in economic, technological, and geopolitical history. Casey Crownhart replies: I share your skepticism of tech executives’ claims that AI will be a groundbreaking help in the race to address climate change. To be fair, AI is progressing rapidly. But we don’t have time to wait for technologies standing on big claims with nothing to back them up. When it comes to the grid, for example, experts say there’s potential for AI to help with planning and even operating, but these efforts are still experimental. Meanwhile, much of the world is making measurable progress on transitioning to newer, greener forms of energy. How that will affect the AI boom remains to be seen. What is clear is that AI is changing our grid and our world, and we need to be clear-eyed about the consequences. Further reading
MIT Technology Review reporters did the math on the energy needs of an AI query. There are still a few reasons to be optimistic about AI’s energy demands. The FT’s visual data team take a look inside the relentless race for AI capacity. And global FT reporters ask whether data centers can ever truly be green.

The Download: a new home under the sea, and cloning pets
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The first new subsea habitat in 40 years is about to launch Vanguard feels and smells like a new RV. It has long, gray banquettes that convert into bunks, a microwave cleverly hidden under a counter, a functional steel sink with a French press and crockery above. A weird little toilet hides behind a curtain. But you can’t just fire up Vanguard’s engine and roll off the lot. Once it is sealed and moved to its permanent home beneath the waves of the Florida Keys National Marine Sanctuary early next year, Vanguard will be the world’s first new subsea habitat in nearly four decades.Teams of four scientists will live and work on the seabed for a week at a time, entering and leaving the habitat as scuba divers. Read our story about some of their potential missions.
—Mark Harris
Cloning isn’t just for celebrity pets like Tom Brady’s dog This week, we heard that Tom Brady had his dog cloned. The former quarterback revealed that his Junie is actually a clone of Lua, a pit bull mix that died in 2023. Brady’s announcement follows those of celebrities like Paris Hilton and Barbra Streisand, who also famously cloned their pet dogs. But some believe there are better ways to make use of cloning technologies, such as diversifying the genetic pools of inbred species, or potentially bringing other animals back from the brink of extinction. Read the full story. —Jessica Hamzelou This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 OpenAI is facing a wave of new lawsuits The cases concern wrongful death complaints, and claims ChatGPT caused mental breakdowns. (NYT $)+ One family claims ChatGPT “goaded” their son into taking his own life. (CNN)+ The looming crackdown on AI companionship. (MIT Technology Review) 2 Tesla shareholders approved Elon Musk’s $1 trillion pay packageMore than 75% of voters backed it. (WSJ $)+ Musk had hinted he’d leave Tesla if the deal wasn’t greenlit. (Axios)+ Tesla has to hit its ambitious targets before he can get his hands on the cash. (Wired $)3 The EU is poised to water down the AI actAfter pressure from Big Tech and the US government. (FT $)+ While the legislation was passed last year, many provisions haven’t kicked in yet. (Reuters) 4 Meta is earning a colossal amount of money from scam adsThey accounted for 10% of its revenue last year. (Reuters)+ Meta claims it “aggressively” addresses scam ads on its platform. (CNBC) 5 The Chan Zuckerberg Initiative is pivoting to AIIt’s shifting its philanthropic focus from social justice programs to curing disease. (WP $)+ To achieve its goals, the charity will need extra computing power. (NYT $) 6 Unesco has adopted global standards on neurotechnologyExperts were increasingly concerned that a lack of guardrails could give rise to unethical practices. (The Guardian)+ Meet the other companies developing brain-computer interfaces. (MIT Technology Review) 7 Benchmarks hugely oversell AI performanceA new study questions their reliability and the validity of their results. (NBC News)+ How to build a better AI benchmark. (MIT Technology Review) 8 Kim Kardashian blames ChatGPT for failing her law examsIt’s almost like she shouldn’t have been consulting it for legal expertise in the first place. (Hollywood Reporter)+ AI and social media is worsening brain rot. (NYT $)+ How AI is introducing errors into courtrooms. (MIT Technology Review) 9 Hyundai is using robot dogs to inspect its EV production lineAnd they may soon be joined by a bipedal master. (IEEE Spectrum)
10 Grand Theft Auto VI has been delayed yet againThe highly anticipated video game has big, big shoes to fill. (Bloomberg $)+ It’ll land a full 13 years after its previous incarnation—or will it? (BBC)
Quote of the day “This is what oligarchy looks like.” —Senator Bernie Sanders reacts to Tesla shareholders’ decision to award Elon Musk a $1 trillion pay package in a post on X. One more thing Finding forgotten Indigenous landscapes with electromagnetic technology
The fertile river valleys of the American Midwest hide tens of thousands of Indigenous earthworks, according to experts: geometric structures consisting of walls, mounds, ditches, and berms, some dating back nearly 3,000 years.Archaeologists now believe that the earthworks functioned as religious gathering places, tombs for culturally important clans, and annual calendars, perhaps all at the same time. They can take the form of giant circles and squares, cloverleafs and octagons, complex S-curves and simple mounds.Until recently, it seemed as if much of the continent’s pre-European archaeological heritage had been carelessly wiped out, uprooted, and lost for good. But traces remain: electromagnetic remnants in the soil that can be detected using specialty surveying equipment. And archaeologists and tribal historians are working together to uncover them. Read the full story. —Geoff Manaugh We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.) + If you’re a wildlife fan, take a look at this compilation of the best places to catch a glimpse of unusual animals.+ El Salvador’s annual fireball festival is a completely unhinged celebration of all things volcanic.+ The most influential Bostonians of 2025 have been announced.+ Get me in a potato bed, stat.

The first new subsea habitat in 40 years is about to launch
EXECUTIVE SUMMARY Vanguard feels and smells like a new RV. It has long, gray banquettes that convert into bunks, a microwave cleverly hidden under a counter, a functional steel sink with a French press and crockery above. A weird little toilet hides behind a curtain. But some clues hint that you can’t just fire up Vanguard’s engine and roll off the lot. The least subtle is its door, a massive disc of steel complete with a wheel that spins to lock. COURTESY MARK HARRIS Once it is sealed and moved to its permanent home beneath the waves of the Florida Keys National Marine Sanctuary early next year, Vanguard will be the world’s first new subsea habitat in nearly four decades. Teams of four scientists will live and work on the seabed for a week at a time, entering and leaving the habitat as scuba divers. Their missions could include reef restoration, species surveys, underwater archaeology, or even astronaut training. One of Vanguard’s modules, unappetizingly named the “wet porch,” has a permanent opening in the floor (a.k.a. a “moon pool”) that doesn’t flood because Vanguard’s air pressure is matched to the water around it.
It is this pressurization that makes the habitat so useful. Scuba divers working at its maximum operational depth of 50 meters would typically need to make a lengthy stop on their way back to the surface to avoid decompression sickness. This painful and potentially fatal condition, better known as the bends, develops if divers surface too quickly. A traditional 50-meter dive gives scuba divers only a handful of minutes on the seafloor, and they can make only a couple of such dives a day. With Vanguard’s atmosphere at the same pressure as the water, its aquanauts need to decompress only once, at the end of their stay. They can potentially dive for many hours every day. That could unlock all kinds of new science and exploration. “More time in the ocean opens a world of possibility, accelerating discoveries, inspiration, solutions,” said Kristen Tertoole, Deep’s chief operating officer, at Vanguard’s unveiling in Miami in October. “The ocean is Earth’s life support system. It regulates our climate, sustains life, and holds mysteries we’ve only begun to explore, but it remains 95% undiscovered.”
COURTESY DEEP Subsea habitats are not a new invention. Jacques Cousteau (naturally) built the first in 1962, although it was only about the size of an elevator. Larger habitats followed in the 1970s and ’80s, maxing out at around the size of Vanguard. But the technology has come a long way since then. Vanguard uses a tethered connection to a buoy above, known as the “surface expression,” that pipes fresh air and water down to the habitat. It also hosts a diesel generator to power a Starlink internet connection and a tank to hold wastewater. Norman Smith, Deep’s chief technology officer, says the company modeled the most severe hurricanes that Florida expects over the next 20 years and designed the tether to withstand them. Even if the worst happens and the link is broken, Deep says, Vanguard has enough air, water, and energy storage to support its crew for at least 72 hours. That number came from DNV, an independent classification agency that inspects and certifies all types of marine vessels so that they can get commercial insurance. Vanguard will be the first subsea habitat to get a DNV classification. “That means you have to deal with the rules and all the challenging, frustrating things that come along with it, but it means that on a foundational level, it’s going to be safe,” says Patrick Lahey, founder of Triton Submarines, a manufacturer of classed submersibles. JASON KOERNER/GETTY IMAGES FOR DEEP Although Deep hopes Vanguard itself will enable decades of useful science, its prime function for the company is to prove out technologies for its planned successor, an advanced modular habitat called Sentinel. Sentinel modules will be six meters wide, twice the diameter of Vanguard, complete with sweeping staircases and single-occupant cabins. A small deployment might have a crew of eight, about the same as the International Space Station. A big Sentinel system could house 50, up to 225 meters deep. Deep claims that Sentinel will be launched at some point in 2027. Ultimately, according to its mission statement, Deep seeks to “make humans aquatic,” an indication that permanent communities are on its long-term road map. Deep has not publicly disclosed the identity of its principal funder, but business records in the UK indicate that as of January 31, 2025 a Canadian man, Robert MacGregor, owned at least 75% of its holding company. According to a Reuters investigation, MacGregor was once linked with Craig Steven Wright, a computer scientist who claimed to be Satoshi Nakamoto, as bitcoin’s elusive creator is pseudonymously known. However, Wright’s claims to be Nakamoto later collapsed. MacGregor has kept a very low public profile in recent years. When contacted for comment, Deep spokesperson Mike Bohan refused to comment on the link with Wright, only to say it was inaccurate, but said: “Robert MacGregor started his career as an IP lawyer in the dot-com era, moving into blockchain technology and has diverse interests including philanthropy, real estate, and now Deep.” In any case, MacGregor could find keeping that low profile more difficult if Vanguard is successful in reinvigorating ocean science and exploration as the company hopes. The habitat is due to be deployed early next year, following final operational tests at Triton’s facility in Florida. It will welcome its first scientists shortly after. “The ocean is not just our resource; it is our responsibility,” says Tertoole. “Deep is more than a single habitat. We are building a full-stack capability for human presence in the ocean.” JASON KOERNER/GETTY IMAGES FOR DEEP

CEO Denies Alleged TotalEnergies Link to Mozambique Crimes
TotalEnergies SE Chief Executive Officer Patrick Pouyanne rejected accusations the French energy firm has responsibilities in alleged killing of civilians four years ago at its liquefied natural gas project site in Mozambique. The company “is accused of having directly financed and materially supported” a group of armed forces, who “allegedly detained, tortured and killed dozens of civilians” at the LNG project in the north of the country, the European Center for Constitutional and Human Rights said in a statement Tuesday. It filed a criminal complaint over the allegations with the French National Anti-Terrorism Prosecutor this week. “We will defend ourselves and we will explain that all this has nothing to do with TotalEnergies,” Pouyanne said Wednesday on LCI television station. “We’ve done inquiries. We never managed to find evidence” of the allegations. The complaint comes as Total is on the verge of restarting construction of the project for the first time since the site was shut in 2021 due to an Islamist insurgency. Other global corporations operating in conflict areas have had cases brought against them including Holcim Ltd.’s Lafarge, on trial in France over operations in Syria, and a US ruling against BNP Paribas related to Sudan. The ECCHR complaint, citing an account by Politico, accuses Total of “complicity in war crimes” through a financial link to a Mozambican army unit that allegedly held civilians in shipping containers where dozens of them were tortured and killed at the project between July and September 2021. The company had evacuated the site earlier that year after an attack by insurgents and declared a force majeure. In 2023, Jean-Christophe Rufin, a former French ambassador hired by Total to review the security and humanitarian situation around the project, warned that the developers should stop paying bonuses to Mozambique’s security forces protecting the site. Total asked government authorities to

Powering the grid: embracing EPC for extra-high-voltage growth
Across the country, the demand for power is soaring. Hyperscale facilities, rising industrial load, extreme weather impacts and the loss of firm power capacity are pushing the grid harder than ever. Integration of renewable and distributed generation sources — often far from load centers — has been constrained as infrastructure build-out has lagged soaring demand. The response from the energy sector has been a boom in capital investment, significant new construction and rebuilds of aging infrastructure, aiming to dramatically increase capacity on the grid. The complexity and sheer scale of these projects pose serious risks. A streamlined approach to project delivery, utilizing the engineer-procure-construct (EPC) model, will be key to delivering at the rate the market demands. Accelerating the front end of projects, from concept to mobilization, offers opportunities to optimize through integrated delivery and collaborative contracting. Three important takeaways: Extra-high-voltage (EHV) projects, such as 765-kV transmission lines, are an important part of the sector’s response to modern challenges. Given limited practical experience with such projects, partnerships can better leverage that pool of experience. The portfolio-based approach required to scale extra-high-voltage infrastructure needs EPC delivery for maximum efficiency. The importance of collaboration and coordination is magnified for region-spanning efforts. Bridging Experience Gaps Solving capacity challenges means significant capital investment is essential, particularly in EHV transmission infrastructure. However, most of the limited 765-kV infrastructure in the U.S. was built decades ago. The number of people in today’s workforce who have hands-on experience with design, construction or commissioning at that scale is small and shrinking. The supply of experienced workers — especially field personnel, skilled linemen and engineering leadership — for high-voltage projects is a major constraint in an already-tight labor market. The risk created by that lack of bench strength requires trust among all stakeholders for the projects. Intentional knowledge transfer

Dynagas Q3 Revenue Down YoY
Dynagas LNG Partners LP on Thursday reported $38.89 million in revenue for the third quarter, down from $39.07 million for the same three-month period last year. The decrease brought down net profit adjusted for nonrecurring items from $14.48 million for Q3 2024 to $14.23 million, or $0.36 per share, for Q3 2025, the Athens-based owner and operator of liquefied natural gas (LNG) carriers said in its quarterly report. The revenue fall was driven by “the decrease of the daily hire rate of the Arctic Aurora in the three-month period ending September 30, 2025, and the decrease in revenue earning days of the Yenisei River due to unscheduled repairs”, Dynagas said. “The above decrease in voyage revenues was partially offset by the non-cash effect of the amortization of deferred revenues and the value of the EU ETS emissions allowances due to the Partnership by the charterers of its vessels”. Dynagas logged average daily hire gross of commissions of nearly $70,000 per day per vessel in Q3 2025, down from around $72,800 per day per vessel for Q3 2024. Its fleet, consisting of six carriers with a combined capacity of approximately 914,000 cubic meters (32.28 million cubic feet), had utilization rates of 99.1 percent and 100 percent in Q3 2025 and Q3 2024 respectively. “Our fleet-wide time charter equivalent of $67,094 per day comfortably exceeded our cash breakeven for the quarter of approximately $47,500, allowing us to continue generating stable free cash flow”, said chief executive Tony Lauritzen. While revenue dropped, net income grew from $15.05 million for Q3 2024 to $18.66 million for Q3 2025. This was “mainly attributable to the increase of other income from insurance claims for damages incurred in prior years, the decrease in net interest and finance costs… [and] the decrease in general and administrative expenses”, Dynagas said.

Nvidia is flying high: Is there anything left to say?
Supply chain risks, he said, “are numerous in nature; however, it is clear that Nvidia is customer Number One with all of their suppliers, which drives an inordinate allocation of resources to ensure that production flows. Any disruption would likely be materials-based as opposed to a process or labor issue from their vendor base.” He added, “geopolitical events would be the most likely origin of any type of medium to long term disruption, think China-Taiwan, expansion of the Russia-Ukraine conflict, or escalation in the US-China trade war.” For lower impact events, he said, “[Nvidia] does a nice job of setting conservative shipment goals and targets for Wall Street, which they almost invariably beat quarter after quarter. This provides some cushion for them to absorb a labor, process, or geopolitical hiccup and still meet their stated goals. Shipment volumes may not exceed targets, but shipments would continue to flow; the spice must flow after all.” In a worst-case scenario where shipments are materially impacted, there is little recourse for enterprises that are not large-scale cloud consumers with clout with the limited providers in the space, Bickley added. Enterprises joining a ‘very long queue’ According to Sanchit Vir Gogia, the chief analyst at Greyhound Research, the Nvidia earnings call “confirms that the bottleneck in enterprise AI is no longer imagination or budget. It is capacity. Nvidia reported $57 billion in quarterly revenue, with more than $51 billion from data center customers alone, yet still described itself as supply-constrained at record levels.” Blackwell and Blackwell Ultra, he said, have become the default currency of AI infrastructure, yet even at a build rate of roughly 1,000 GPU racks per week, the company cannot meet demand.

Oil Slips as Peace Talks Advance
Oil fell after Ukrainian President Volodymyr Zelenskiy said he agreed to work on a peace plan drafted by the US and Russia aimed at ending the war in Ukraine. West Texas Intermediate fell 0.5% to settle above $59 a barrel on Thursday, paring some losses from intraday lows following Zelenskiy’s comments. A peace deal, if followed by the elimination of sanctions on Russian oil over its invasion of Ukraine, could unleash supply from the world’s third-largest producer. Oil markets are already staring down expectations for a surplus as OPEC+ and other producers ramp up output, with the commodity heading for a yearly loss amid concerns of a glut. The flurry of renewed activity to end the war comes just hours before US sanctions targeting Russia’s two largest oil companies, Rosneft PJSC and Lukoil PJSC, are due to come into effect. Russia has consistently found a way to sell its sanctioned oil through so-called “shadow” channels. But Moscow’s oil revenue is expected to stagnate amid falling global crude prices, posing a risk to its budget and broader economy. Still, any accord remains far from certain. The US has signaled to Zelenskiy that he should accept the deal drawn up in consultation with Moscow, according to a person familiar. But the plan outlines known Russian demands for concessions that Kyiv has repeatedly said are unacceptable and that have so far hindered any breakthrough in efforts to reach a ceasefire. “Notably, Ukraine is reiterating its openness to discuss ending the war, what’s uncertain is Russia’s real interest in ending the war,” said Rachel Ziemba, an adjunct senior fellow at the Center for a New American Security. “It remains to be seen if Russia is interested in ending the war or just in buying time to reduce more extensive sanctions.” Earlier in the day,

Russian Oil Giant Recommends Lowest Interim Dividends Since 2020
Russian oil giant Rosneft PJSC plans to pay the lowest interim dividends since the pandemic in 2020 as slumping crude prices, a stronger ruble and looming US sanctions bite. The board of directors of Russia’s largest state-controlled oil producer recommended to pay 11.56 rubles, $0.14, per share in interim dividends, according to a regulatory filing on Thursday. The recommendation comes just a day before unprecedented US sanctions are due to hit Rosneft and fellow Russian oil giant Lukoil PJSC. President Donald Trump’s administration last month stepped up restrictions on Russia’s oil industry, which together with gas accounts for about a quarter of the nation’s coffers. Rosneft’s earnings were already undermined by lower global oil prices amid fears of global surplus and much stronger ruble, with the appreciation of the nation’s currency meaning fewer rubles for each sold barrel. As a result, Rosneft’s net income shrank by 68% in the first half of the year from the same period in 2024. Rosneft, responsible for over a third of the nation’s oil output, has been paying dividends to the state since 1999, and to other shareholders since 2006 when it began trading publicly. The producer started to pay interim dividends in 2017, distributing half of its profit to shareholders. It scrapped the payouts for the first half of 2020 after posting a loss for the period. Lukoil’s board of directors will discuss recommendations on interim dividends on Friday. The oil producer initially planned to discuss nine-month payouts on Oct. 23, but postponed after US announced sanctions against the company on Oct. 22. Some Lukoil units on Friday received extensions to sanctions waivers that the Trump administration imposed. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy indusrty news. Spend 3-5 minutes and catch-up on 1 week of news.