Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI

Bitcoin

Datacenter

Energy
Featured Articles

Some load forecasts using ‘unrealistically high load factors’: Grid Strategies VP
Dive Brief: Significant load growth is likely to arrive as forecast, but uncertainties associated with data centers are complicating load growth estimation, as are “unrealistically high load factors for the new large loads” in some load forecasts, said John Wilson, a vice president at Grid Strategies. Wilson is one of the lead authors of a November report that found the five-year forecast of U.S. utility peak load growth has increased from 24 GW to 166 GW over the past three years — by more than a factor of six. The report concluded that the “data center portion of utility load forecasts is likely overstated by roughly 25 GW,” based on reports from market analysts.

Dive Insight: Despite projected load growth, many utility third-quarter earnings reports have shown relatively flat deliveries of electricity. Wilson said he thinks a definitive answer as to whether load growth is materializing will come next year. “If [large loads] start to get put off or canceled, and the load doesn’t come in, then we could see a lot of revisions to forecasts that are really large,” he said. The utility forecast for added data center load by 2030 is 90 GW, “nearly 10% of forecast peak load,” the report said, but “data center market analysts indicate that data center growth is unlikely to require much more than 65 GW through 2030.” Wilson said he thinks the overestimation could be due “simply to the challenge that utilities have in understanding whether a potential customer is pursuing just the site in their service area, or whether they’re pursuing multiple sites and they’re not planning on building out all of them.” This is information that utilities haven’t typically gathered, he said, although he’s seeing a trend toward utilities making those questions part of their application process. Wilson said another factor
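The report’s headline figures can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers cited in the article (variable names are illustrative, not from the report):

```python
# Quick arithmetic check of the Grid Strategies figures cited above.
forecast_3yr_ago_gw = 24    # five-year peak load growth forecast, three years ago
forecast_now_gw = 166       # current five-year peak load growth forecast

growth_factor = forecast_now_gw / forecast_3yr_ago_gw
print(f"Forecast grew by a factor of {growth_factor:.1f}")  # ~6.9, i.e. "more than six"

utility_dc_forecast_gw = 90   # utility forecast for added data center load by 2030
analyst_dc_estimate_gw = 65   # market analysts' estimate through 2030
overstatement_gw = utility_dc_forecast_gw - analyst_dc_estimate_gw
print(f"Implied overstatement: {overstatement_gw} GW")  # matches the ~25 GW cited
```

The 90 GW utility figure minus the 65 GW analyst figure is the source of the report’s “roughly 25 GW” overstatement claim.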

AWS boosts its long-distance cloud connections with custom DWDM transponder
By controlling the entire hardware stack, AWS can implement comprehensive security measures that would be challenging with third-party solutions, Rehder stated. “This initial long-haul deployment represents just the first implementation of the in-house technology across our extensive long-haul network. We have already extended deployment to Europe, with plans to use the AWS DWDM transponder for all new long-haul connections throughout our global infrastructure,” Rehder wrote. Cloud vendors are some of the largest optical users in the world, though not all develop their own DWDM or other optical systems, according to a variety of papers on the subject. Google develops its own DWDM, for example, but others like Microsoft Azure develop only parts and buy optical gear from third parties. Others such as IBM, Oracle and Alibaba have optical backbones but also utilize third-party equipment. “We are anticipating that the time has come to interconnect all those new AI data centers being built,” wrote Jimmy Yu, vice president at Dell’Oro Group, in a recent optical report. “We are forecasting data center interconnect to grow at twice the rate of the overall market, driven by increased spending from cloud providers. The direct purchases of equipment for DCI will encompass ZR/ZR+ optics for IPoDWDM, optical line systems for transport, and DWDM systems for high-performance, long-distance terrestrial and subsea transmission.”

Scaling innovation in manufacturing with AI
In partnership with Microsoft and NVIDIA

Manufacturing is getting a major system upgrade. As AI amplifies existing technologies—like digital twins, the cloud, edge computing, and the industrial internet of things (IIoT)—it is enabling factory operations teams to shift from reactive, isolated problem-solving to proactive, systemwide optimization. Digital twins—physically accurate virtual representations of a piece of equipment, a production line, a process, or even an entire factory—allow workers to test, optimize, and contextualize complex, real-world environments. Manufacturers are using digital twins to simulate factory environments with pinpoint detail. “AI-powered digital twins mark a major evolution in the future of manufacturing, enabling real-time visualization of the entire production line, not just individual machines,” says Indranil Sircar, global chief technology officer for the manufacturing and mobility industry at Microsoft. “This is allowing manufacturers to move beyond isolated monitoring toward much wider insights.” A digital twin of a bottling line, for example, can integrate one-dimensional shop-floor telemetry, two-dimensional enterprise data, and three-dimensional immersive modeling into a single operational view of the entire production line to improve efficiency and reduce costly downtime. Many high-speed industries face downtime rates as high as 40%, estimates Jon Sobel, co-founder and chief executive officer of Sight Machine, an industrial AI company that partners with Microsoft and NVIDIA to transform complex data into actionable insights. By tracking micro-stops and quality metrics via digital twins, companies can target improvements and adjustments with greater precision, saving millions in once-lost productivity without disrupting ongoing operations.
AI offers the next opportunity. Sircar estimates that up to 50% of manufacturers are currently deploying AI in production. This is up from the 35% of manufacturers surveyed in a 2024 MIT Technology Review Insights report who said they had begun to put AI use cases into production. Larger manufacturers with more than $10 billion in revenue were significantly ahead, with 77% already deploying AI use cases, according to the report. “Manufacturing has a lot of data and is a perfect use case for AI,” says Sobel. “An industry that has been seen by some as lagging when it comes to digital technology and AI may be in the best position to lead. It’s very unexpected.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Winter peak demand is rising faster than resource additions: NERC
Dive Brief: Peak demand on the bulk power system will be 20 GW higher this winter than last, but total resources to meet the peak have only increased 9.4 GW, according to a report released Tuesday by the North American Electric Reliability Corp. Despite the mismatch, all regions of the bulk power system should have sufficient resources for expected peak demand this winter, NERC said in its 2025-2026 Winter Reliability Assessment. However, several regions could face challenges in the event of extreme weather. There have been 11 GW of batteries and 8 GW of demand response resources added to the bulk power system since last winter, NERC said. Solar, thermal and hydro have also seen small additions, but contributions from wind resources are 14 GW lower following capacity accounting changes in some markets.

Dive Insight: NERC officials described a mixed bag heading into the winter season. “The bulk power system is entering another winter with pockets of elevated risk, and the drivers are becoming more structural than seasonal,” said John Moura, NERC’s director of reliability assessments and performance analysis. “We’re seeing steady demand growth, faster than previous years, landing on a system that’s still racing to build new resources, navigating supply chain constraints and integrating large amounts of variable, inverter-based generation.” Aggregate peak demand across NERC’s footprint will be 20 GW, or 2.5%, higher than last winter. “Essentially, you have a doubling between the last several successive [winter reliability assessments],” said Mark Olson, NERC’s manager of reliability assessment. Nearly all of NERC’s assessment areas “are reporting year-on-year demand growth with some forecasting increases near 10%,” the reliability watchdog said. The U.S. West, Southeast and Mid-Atlantic — areas with significant data center development — have
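The NERC numbers above imply a couple of figures the article leaves unstated. A minimal check using only the cited values (the implied baseline peak is a derived estimate, not a number from the assessment):

```python
# Arithmetic implied by the NERC winter assessment figures cited above.
demand_increase_gw = 20.0     # peak demand growth vs. last winter
demand_increase_pct = 0.025   # the same growth expressed as 2.5%
resource_increase_gw = 9.4    # total resource additions vs. last winter

# 20 GW at 2.5% implies an aggregate peak of roughly 800 GW last winter.
implied_last_winter_peak_gw = demand_increase_gw / demand_increase_pct
print(f"Implied last-winter aggregate peak: ~{implied_last_winter_peak_gw:.0f} GW")

# Demand growth outpaces resource additions by about 10.6 GW.
gap_gw = demand_increase_gw - resource_increase_gw
print(f"Demand growth exceeds resource additions by {gap_gw:.1f} GW")
```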

Energy Secretary Strengthens Midwest Grid Reliability Heading into Winter Months
WASHINGTON—U.S. Secretary of Energy Chris Wright issued an emergency order to address critical grid reliability issues facing the Midwestern region of the United States heading into the cold winter months. The emergency order directs the Midcontinent Independent System Operator (MISO), in coordination with Consumers Energy, to ensure that the J.H. Campbell coal-fired power plant in West Olive, Michigan remains available for operation and to take every step to minimize costs for the American people. The Campbell Plant was scheduled to shut down on May 31, 2025 — 15 years before the end of its scheduled design life. “Because of the last administration’s dangerous energy subtraction policies targeting reliable and affordable energy sources, the United States continues to face an energy emergency,” said Energy Secretary Wright. “The Trump administration will keep taking action to reverse these energy subtraction policies, lowering energy costs and minimizing the risks of blackouts. Americans deserve access to affordable, reliable and secure energy regardless of whether the wind is blowing or the sun is shining, especially in dangerously cold weather.” Since the Department of Energy’s (DOE) original order issued on May 23, the Campbell plant has proven critical to MISO’s operations, operating regularly during periods of high energy demand and low levels of intermittent energy production. A subsequent order was issued on August 20, 2025. As outlined in DOE’s Resource Adequacy Report, power outages could increase by 100 times in 2030 if the U.S. continues to take reliable power offline. The emergency conditions that led to the issuance of the original orders persist. MISO’s service area will continue to face emergency conditions both in the near and long term. Two recent winter studies (the 2024–2025 and 2023–2024 NERC Winter Reliability Assessments) have assessed the MISO assessment area as an elevated risk, with the “potential

US Risks Winter Blackouts on Data Center Demand
Rising electricity demand from data centers is raising the risk of blackouts across a wide swath of the US during extreme conditions this winter, according to the regulatory body overseeing grid stability. Power consumption has grown 20 gigawatts from the previous winter, the North American Electric Reliability Corp. said Tuesday in its winter assessment. A gigawatt is the typical size of a nuclear power reactor. Supply hasn’t kept up. As a result, a repeat of severe winter storms in North America that unleash a polar vortex, of which there have been several in recent years, could trigger energy shortfalls across the US from the Northwest to Texas to the Carolinas. All regions have adequate resources in normal conditions. “Data centers are a main contributor to load growth in those areas where demand has risen substantially since last winter,” Mark Olson, manager of the reliability assessment, said in an emailed statement. America’s power grid has been facing rising blackout risks for years as aging infrastructure is increasingly stressed by severe storms and wildfires. Now the data center boom, driven by the spread of artificial intelligence, is adding to the strain by supercharging US electricity growth after two decades of stagnation. Winter is especially risky because solar generation is available for fewer hours and battery operations may be affected. Gas supplies, meanwhile, could drop off because of freeze-offs or pipeline constraints. The areas designated by NERC as having elevated risks of shortfall shifted from the previous winter to include the US southeast and parts of the West, including Washington and Oregon. The Texas grid continues to be highlighted after cascading failures in February 2021 left millions of people without power for days and resulted in more than 200 deaths. New England also continues to face elevated risks on potential natural gas pipeline

South Sudan Says Crude Exports Back to Normal
South Sudan said it had resumed oil shipments after attacks on energy facilities in neighboring Sudan disrupted activity. “Operations in all oil fields in South Sudan have returned to a normal export,” Petroleum Ministry Undersecretary Deng Lual Wol told reporters Wednesday in the capital, Juba. “All crude exports from South Sudan are fully flowing to the export terminals in Port Sudan.” Oil companies operating in the two African countries earlier this week shuttered production after the assaults in Sudan, which is embroiled in a more than two-year civil war. Landlocked South Sudan uses pipelines to transport its crude to Red Sea terminals, from where it’s shipped to world markets. Dar Petroleum Operating Co. is producing 97,000 barrels per day following the brief shutdown, but will ramp that up to 150,000, Wol said. Greater Pioneer Operating Co.’s output is 40,000 daily barrels, and should rise to the normal level of 50,000, while Sudd Petroleum Operating Co. is pumping 13,000 barrels per day, down from 15,000 before the disruption, he added. Bashayer Pipeline Co., which transports South Sudan’s Dar Blend oil to Sudan, said in a Nov. 15 notice seen by Bloomberg that it had initiated an emergency shutdown after its Al Jabalain processing plant and a power facility came under attack. Sudan’s state-owned Petrolines for Crude Oil Co. issued a Nov. 13 notice about a drone attack at the Heglig oil field, where Nile Blend is produced. It had issued a force majeure notice at 2B OPCO, an exploration and production company in which it has a 50 percent stake.

Eni to Acquire 760 MW RE Assets in France from Neoen
Eni SpA said Tuesday it has entered into an agreement to buy a portfolio of already operational renewable energy projects totaling about 760 megawatts across France from Neoen. The transaction involves the transfer of 37 solar plants, 14 wind farms and one battery energy storage facility to Eni’s renewables arm Plenitude. The facilities produce around 1.1 terawatt hours of power annually, Italy’s state-backed Eni said in a press release. “The transaction represents one of the largest renewable energy deals completed in the French market in recent years and significantly contributes to Plenitude’s 2025 installed capacity targets”, Eni said. The parties have not disclosed the transaction price. Eni aims to reach over 5.5 gigawatts (GW) of installed renewable generation capacity this year, toward 10 GW by 2028 and 15 GW by 2030, according to a plan it announced in February. As of the third quarter of 2025, it had 4.8 GW of installed renewable capacity, according to its quarterly report published October 24. Eni plans to integrate the Neoen assets with its existing assets to “enable optimized operations and synergies”, Tuesday’s statement said. “The acquisition expands our presence in France, where we already serve around one million retail customers and where we are growing in both energy solutions and e-mobility markets”, said Plenitude chief executive Stefano Goberti. “Through this operation, we strengthen our integrated business model and accelerate progress toward achieving our strategic objectives”. Plenitude currently serves 10 million households and businesses across Europe, and aims to have over 11 million customers by 2028 and 15 million by 2030, Eni said. Paris-based Neoen said separately it would “continue to manage the plants for some years through the provision of asset management services to Plenitude”. Neoen said it would retain 1.1 GW of assets in operation or under construction in France including 754 MW of

Monumental Completes Capital Raise to Fund More Production Restarts in NZ
Monumental Energy Corp said Tuesday it had completed the issuance of 16.2 million units for CAD 0.05 per unit in an oversubscribed non-brokered private placement, generating gross proceeds of CAD 810,000 ($580,000). Vancouver, Canada-based Monumental said in an online statement it would use net proceeds “to fund cost overruns on Copper Moki 1 oil and gas well, to fund the costs and expenses to formally enter into and fund additional workover projects with New Zealand Energy Corp. and L&M Energy and for general working capital purposes and corporate expenses”. “Each unit is comprised of one common share in the capital of the company and one transferable common share purchase warrant”, Toronto-listed Monumental said. “Each warrant entitles the holder thereof to purchase one additional common share of the company at a price of CAD 0.08 per share until November 18, 2028. “In connection with the private placement, the company paid in consideration of the services rendered by certain finders an aggregate cash commission of CAD 38,850 and issued an aggregate of 777,000 non-transferable common share purchase warrants. Each finder warrant entitles the holder thereof to purchase one additional common share of the company at the issuer price until November 18, 2028”. Last month Monumental said it had agreed to fund New Zealand Energy Corp’s (NZEC) share of workover costs to restart flows at several wells in the Waihapa/Ngaere field in the onshore Taranaki basin. “These workovers will follow the same royalty structure as that established for the successful Copper Moki programs, whereas Monumental will earn a 25 percent royalty on NZEC’s production share after full recovery of its capital investment, which will be repaid from 75 percent of NZEC’s net revenue interest”, Monumental, a shareholder in NZEC, said in a press release October 15. L&M Energy will shoulder the remaining investment as NZEC’s

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage
Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals
2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Hammerson, which also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

2025 ransomware predictions, trends, and how to prepare
Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025: ● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound increasingly realistic as they adopt local accents and dialects to enhance credibility and success rates. ● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny. ● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected. ● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days. ● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups that specialize in designated attack tactics and collaborate through sophisticated Ransomware-as-a-Service profit-sharing models. To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies. 
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Scaling innovation in manufacturing with AI
In partnership with Microsoft and NVIDIA. Manufacturing is getting a major system upgrade. As AI amplifies existing technologies—like digital twins, the cloud, edge computing, and the industrial internet of things (IIoT)—it is enabling factory operations teams to shift from reactive, isolated problem-solving to proactive, systemwide optimization. Digital twins—physically accurate virtual representations of a piece of equipment, a production line, a process, or even an entire factory—allow workers to test, optimize, and contextualize complex, real-world environments. Manufacturers are using digital twins to simulate factory environments with pinpoint detail. “AI-powered digital twins mark a major evolution in the future of manufacturing, enabling real-time visualization of the entire production line, not just individual machines,” says Indranil Sircar, global chief technology officer for the manufacturing and mobility industry at Microsoft. “This is allowing manufacturers to move beyond isolated monitoring toward much wider insights.” A digital twin of a bottling line, for example, can integrate one-dimensional shop-floor telemetry, two-dimensional enterprise data, and three-dimensional immersive modeling into a single operational view of the entire production line to improve efficiency and reduce costly downtime. Many high-speed industries face downtime rates as high as 40%, estimates Jon Sobel, co-founder and chief executive officer of Sight Machine, an industrial AI company that partners with Microsoft and NVIDIA to transform complex data into actionable insights. By tracking micro-stops and quality metrics via digital twins, companies can target improvements and adjustments with greater precision, saving millions in once-lost productivity without disrupting ongoing operations.
AI offers the next opportunity. Sircar estimates that up to 50% of manufacturers are currently deploying AI in production. This is up from 35% of manufacturers surveyed in a 2024 MIT Technology Review Insights report who said they had begun to put AI use cases into production. Larger manufacturers with more than $10 billion in revenue were significantly ahead, with 77% already deploying AI use cases, according to the report. “Manufacturing has a lot of data and is a perfect use case for AI,” says Sobel. “An industry that has been seen by some as lagging when it comes to digital technology and AI may be in the best position to lead. It’s very unexpected.” Download the report. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: de-censoring DeepSeek, and Gemini 3
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Quantum physicists have shrunk and “de-censored” DeepSeek R1
The news: A group of quantum physicists at Spanish firm Multiverse Computing claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. Why it matters: In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and “socialist values.” As a result, companies build in layers of censorship when training the AI systems. When asked questions that are deemed “politically sensitive,” the models often refuse to answer or provide talking points straight from state propaganda.
How they did it: Multiverse Computing specializes in quantum-inspired AI techniques, which it used to create DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original model. The technique also allowed the researchers to identify and remove Chinese censorship so that the model answered sensitive questions in much the same way as Western models. Read the full story. —Caiwei Chen
Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent
Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text or images), and will work like an agent. Gemini Agent is an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Read the full story. —Caiwei Chen

MIT Technology Review Narrated: Why climate researchers are taking the temperature of mountain snow
The Sierra’s frozen reservoir provides about a third of California’s water and most of what comes out of the faucets, shower heads, and sprinklers in the towns and cities of northwestern Nevada. The need for better snowpack temperature data has become increasingly critical for predicting when the water will flow down the mountains, as climate change fuels hotter weather, melts snow faster, and drives rapid swings between very wet and very dry periods.
A new generation of tools, techniques, and models promises to improve water forecasts, and help California and other states manage in the face of increasingly severe droughts and flooding. However, observers fear that any such advances could be undercut by the Trump administration’s cutbacks across federal agencies. This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Yesterday’s Cloudflare outage was not triggered by a hack
An error in its bot management system was to blame. (The Verge)
+ ChatGPT, X and Uber were among the services that dropped. (WP $)
+ It’s another example of the dangers of having a handful of infrastructure providers. (WSJ $)
+ Today’s web is incredibly fragile. (Bloomberg $)

2 Donald Trump has called for a federal AI regulatory standard
Instead of allowing each state to make its own laws. (Axios)
+ He claims the current approach risks slowing down AI progress. (Bloomberg $)

3 Meta has won the antitrust case that threatened to spin off Instagram
It’s one of the most high-profile cases in recent years. (FT $)
+ A judge ruled that Meta doesn’t hold a social media monopoly. (BBC)

4 The Three Mile Island nuclear plant is making a comeback
It’s the lucky recipient of a $1 billion federal loan to kickstart the facility. (WP $)
+ Why Microsoft made a deal to help restart Three Mile Island. (MIT Technology Review)

5 Roblox will block children from speaking to adult strangers
The gaming platform is facing fresh lawsuits alleging it is failing to protect young users from online predators. (The Guardian)
+ But we don’t know much about how accurate its age verification is. (CNN)
+ All users will have to submit a selfie or an ID to use chat features. (Engadget)

6 Boston Dynamics’ robot dog is becoming a widespread policing tool
It’s deployed by dozens of US and Canadian bomb squads and SWAT teams. (Bloomberg $)

7 A tribally-owned network of EV chargers is nearing completion
It’s part of Standing Rock reservation’s big push for clean energy. (NYT $)

8 Resist the temptation to use AI to cheat at conversations
It makes it much more difficult to forge a connection. (The Atlantic $)

9 Amazon wants San Francisco residents to ride its robotaxis for free
It’s squaring up against Alphabet’s Waymo in the city for the first time. (CNBC)
+ But its cars look very different to traditional vehicles. (LA Times $)
+ Zoox is operating around 50 robotaxis across SF and Las Vegas. (The Verge)

10 TikTok’s new setting allows you to filter out AI-generated clips
Farewell, sweet slop. (TechCrunch)
+ How do AI models generate videos? (MIT Technology Review)

Quote of the day
“The rapids of social media rush along so fast that the Court has never even stepped into the same case twice.” —Judge James Boasberg, who rejected the Federal Trade Commission’s claim that Meta had created an illegal social media monopoly, acknowledges the law’s failure to keep up with technology, Politico reports. One more thing
Namibia wants to build the world’s first hydrogen economy
Factories have used fossil fuels to process iron ore for three centuries, and the climate has paid a heavy price: According to the International Energy Agency, the steel industry today accounts for 8% of carbon dioxide emissions. But it turns out there is a less carbon-intensive alternative: using hydrogen. Unlike coal or natural gas, which release carbon dioxide as a by-product, this process releases water. And if the hydrogen itself is “green,” the climate impact of the entire process will be minimal. HyIron, which has a site in the Namib desert, is one of a handful of companies around the world that are betting green hydrogen can help the $1.8 trillion steel industry clean up its act. The question now is whether Namibia’s government, its trading partners, and hydrogen innovators can work together to build the industry in a way that satisfies the world’s appetite for cleaner fuels—and also helps improve lives at home. Read the full story. —Jonathan W. Rosen

We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ This art installation in Paris revolves around porcelain bowls clanging against each other in a pool of water—it’s oddly hypnotic.
+ Feeling burnt out? Get down to your local sauna for a quick reset.
+ New York’s subway system is something else.
+ Your dog has ancient origins. No, really!

Quantum physicists have shrunk and “de-censored” DeepSeek R1
EXECUTIVE SUMMARY
A group of quantum physicists claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. The scientists at Multiverse Computing, a Spanish firm specializing in quantum-inspired AI techniques, created DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original model. Crucially, they also claim to have eliminated official Chinese censorship from the model. In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and “socialist values.” As a result, companies build in layers of censorship when training the AI systems. When asked questions that are deemed “politically sensitive,” the models often refuse to answer or provide talking points straight from state propaganda. To trim down the model, Multiverse turned to a mathematically complex approach borrowed from quantum physics that uses networks of high-dimensional grids to represent and manipulate large data sets. Using these so-called tensor networks shrinks the size of the model significantly and allows a complex AI system to be expressed more efficiently.
The method gives researchers a “map” of all the correlations in the model, allowing them to identify and remove specific bits of information with precision. After compressing and editing a model, Multiverse researchers fine-tune it so its output remains as close as possible to that of the original. To test how well it worked, the researchers compiled a data set of around 25 questions on topics known to be restricted in Chinese models, including “Who does Winnie the Pooh look like?”—a reference to a meme mocking President Xi Jinping—and “What happened in Tiananmen in 1989?” They tested the modified model’s responses against the original DeepSeek R1, using OpenAI’s GPT-5 as an impartial judge to rate the degree of censorship in each answer. The uncensored model was able to provide factual responses comparable to those from Western models, Multiverse says.
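The judging protocol described above can be sketched as a simple evaluation loop. This is an illustrative reconstruction, not Multiverse’s actual code: `ask_model` and `ask_judge` are hypothetical stand-ins for real model API calls, and the question list is truncated.

```python
# Hedged sketch of an LLM-as-judge censorship evaluation: ask a model a set
# of restricted questions, then have a judge model score each answer.
# `ask_model` and `ask_judge` are hypothetical callables, not a real API.

QUESTIONS = [
    "What happened in Tiananmen in 1989?",
    "Who does Winnie the Pooh look like?",
    # the actual study used roughly 25 such prompts
]

def censorship_scores(ask_model, ask_judge, questions):
    """Judge-assigned censorship score per question (0 = open, 1 = censored)."""
    scores = []
    for question in questions:
        answer = ask_model(question)
        judge_prompt = (
            "Rate this answer from 0 (fully factual) to 1 (refusal or "
            f"propaganda):\nQ: {question}\nA: {answer}"
        )
        scores.append(float(ask_judge(judge_prompt)))
    return scores

# Canned stand-ins show the shape of the loop: a model that always refuses
# should earn the maximum censorship score on every question.
refuser = lambda q: "I cannot answer that question."
toy_judge = lambda p: "1.0" if "cannot answer" in p else "0.0"
print(censorship_scores(refuser, toy_judge, QUESTIONS))  # [1.0, 1.0]
```

In the reported setup, the two models under comparison were the original DeepSeek R1 and R1 Slim, with GPT-5 in the judge role; everything here is a toy stand-in for the structure of that comparison.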
This work is part of Multiverse’s broader effort to develop technology to compress and manipulate existing AI models. Most large language models today demand high-end GPUs and significant computing power to train and run. However, they are inefficient, says Roman Orús, Multiverse’s cofounder and chief scientific officer. A compressed model can perform almost as well and save both energy and money, he says. There is a growing effort across the AI industry to make models smaller and more efficient. Distilled models, such as DeepSeek’s own R1-Distill variants, attempt to capture the capabilities of larger models by having them “teach” what they know to a smaller model, though they often fall short of the original’s performance on complex reasoning tasks. Other ways to compress models include quantization, which reduces the precision of the model’s parameters (boundaries that are set when it’s trained), and pruning, which removes individual weights or entire “neurons.” “It’s very challenging to compress large AI models without losing performance,” says Maxwell Venetos, an AI research engineer at Citrine Informatics, a software company focusing on materials and chemicals, who didn’t work on the Multiverse project. “Most techniques have to compromise between size and capability. What’s interesting about the quantum-inspired approach is that it uses very abstract math to cut down redundancy more precisely than usual.” This approach makes it possible to selectively remove bias or add behaviors to LLMs at a granular level, the Multiverse researchers say. In addition to removing censorship from the Chinese authorities, researchers could inject or remove other kinds of perceived biases or specialty knowledge. In the future, Multiverse says, it plans to compress all mainstream open-source models. 
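For contrast with the quantum-inspired method, the two conventional techniques named above can be sketched in a few lines. This is a minimal pure-Python illustration under simplified assumptions (a flat list of weights, symmetric linear quantization), not production code:

```python
def prune_by_magnitude(weights, sparsity):
    """Pruning: zero out the smallest-magnitude fraction of the weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_int8(weights):
    """Quantization: map float weights onto the int8 range, keeping a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

w = [0.9, -0.01, 0.4, 0.002, -0.7]
sparse = prune_by_magnitude(w, sparsity=0.4)  # the two smallest weights become 0
q, scale = quantize_int8(w)                   # small integers plus one float scale
approx = [qi * scale for qi in q]             # dequantized approximation of w
```

Real systems apply these ideas per layer or per channel (for example via PyTorch’s pruning and quantization utilities); the point here is only that pruning discards parameters outright, while quantization keeps every parameter at lower precision.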
Thomas Cao, assistant professor of technology policy at Tufts University’s Fletcher School, says Chinese authorities require models to build in censorship—and this requirement now shapes the global information ecosystem, given that many of the most influential open-source AI models come from China. Academics have also begun to document and analyze the phenomenon. Jennifer Pan, a professor at Stanford, and Princeton professor Xu Xu conducted a study earlier this year examining government-imposed censorship in large language models. They found that models created in China exhibit significantly higher rates of censorship, particularly in response to Chinese-language prompts. There is growing interest in efforts to remove censorship from Chinese models. Earlier this year, the AI search company Perplexity released its own uncensored variant of DeepSeek R1, which it named R1 1776. Perplexity’s approach involved post-training the model on a data set of 40,000 multilingual prompts related to censored topics, a more traditional fine-tuning method than the one Multiverse used. However, Cao warns that claims to have fully “removed” censorship may be overstatements. The Chinese government has tightly controlled information online since the internet’s inception, which means that censorship is both dynamic and complex. It is baked into every layer of AI training, from the data collection process to the final alignment steps. “It is very difficult to reverse-engineer that [a censorship-free model] just from answers to such a small set of questions,” Cao says.

Realizing value with AI inference at scale and in production
In partnership with HPE. Training an AI model to predict equipment failures is an engineering achievement. But it’s not until prediction meets action—the moment that model successfully flags a malfunctioning machine—that true business transformation occurs. One technical milestone lives in a proof-of-concept deck; the other meaningfully contributes to the bottom line. Craig Partridge, senior director worldwide of Digital Next Advisory at HPE, believes “the true value of AI lies in inference”. Inference is where AI earns its keep. It’s the operational layer that puts all that training to use in real-world workflows. “The phrase we use for this is ‘trusted AI inferencing at scale and in production,’” Partridge says. “That’s where we think the biggest return on AI investments will come from.” Getting to that point is difficult. Christian Reichenbach, worldwide digital advisor at HPE, points to findings from the company’s recent survey of 1,775 IT leaders: While nearly a quarter (22%) of organizations have now operationalized AI—up from 15% the previous year—the majority remain stuck in experimentation. Reaching the next stage requires a three-part approach: establishing trust as an operating principle, ensuring data-centric execution, and cultivating IT leadership capable of scaling AI successfully. Trust as a prerequisite for scalable, high-stakes AI Trusted inference means users can actually rely on the answers they’re getting from AI systems. This is important for applications like generating marketing copy and deploying customer service chatbots, but it’s absolutely critical for higher-stakes scenarios—say, a robot assisting during surgeries or an autonomous vehicle navigating crowded streets.
Whatever the use case, establishing trust will require doubling down on data quality; first and foremost, inferencing outcomes must be built on reliable foundations. This reality informs one of Partridge’s go-to mantras: “Bad data in equals bad inferencing out.” Reichenbach cites a real-world example of what happens when data quality falls short—the rise of unreliable AI-generated content, including hallucinations, that clogs workflows and forces employees to spend significant time fact-checking. “When things go wrong, trust goes down, productivity gains are not reached, and the outcome we’re looking for is not achieved,” he says.
On the other hand, when trust is properly engineered into inference systems, efficiency and productivity gains follow. Take a network operations team tasked with troubleshooting configurations. With a trusted inferencing engine, that unit gains a reliable copilot that can deliver faster, more accurate, custom-tailored recommendations—“a 24/7 member of the team they didn’t have before,” says Partridge.

The shift to data-centric thinking and the rise of the AI factory

In the first AI wave, companies rushed to hire data scientists, and many viewed sophisticated, trillion-parameter models as the primary goal. But today, as organizations move to turn early pilots into real, measurable outcomes, the focus has shifted toward data engineering and architecture. “Over the past five years, what’s become more meaningful is breaking down data silos, accessing data streams, and quickly unlocking value,” says Reichenbach. It’s an evolution happening alongside the rise of the AI factory—the always-on production line where data moves through pipelines and feedback loops to generate continuous intelligence. This shift reflects an evolution from model-centric to data-centric thinking, and with it comes a new set of strategic considerations. “It comes down to two things: How much of the intelligence—the model itself—is truly yours? And how much of the input—the data—is uniquely yours, from your customers, operations, or market?” says Reichenbach. These two central questions inform everything from platform direction and operating models to engineering roles and trust and security considerations. To help clients map their answers—and translate them into actionable strategies—Partridge breaks down HPE’s four-quadrant AI factory implication matrix (source: HPE, 2025):

Run: Accessing an external, pretrained model via an interface or API; the organization owns neither the model nor the data. Implementation requires strong security and governance, as well as a center of excellence that makes and communicates decisions about AI usage.

RAG (retrieval-augmented generation): Using external, pretrained models combined with a company’s proprietary data to create unique insights. Implementation focuses on connecting data streams to inferencing capabilities that provide rapid, integrated access to full-stack AI platforms.

Riches: Training custom models on data that resides in the enterprise for unique differentiation opportunities and insights. Implementation requires scalable, energy-efficient environments, and often high-performance systems.

Regulate: Leveraging custom models trained on external data. This requires the same scalable setup as Riches, with an added focus on legal and regulatory compliance, since sensitive, non-owned data must be handled with extreme caution.

Importantly, these quadrants are not mutually exclusive. Partridge notes that most organizations—including HPE itself—operate across many of them. “We build our own models to help understand how networks operate,” he says. “We then deploy that intelligence into our products, so that our end customer gets the chance to deliver in what we call the ‘Run’ quadrant. So for them, it’s not their data; it’s not their model. They’re just adding that capability inside their organization.”

IT’s moment to scale—and lead

The second part of Partridge’s catchphrase about inferencing—“at scale”—speaks to a primary tension in enterprise AI: what works for a handful of use cases often breaks when applied across an entire organization.
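The quadrant matrix above reduces to two yes/no questions: is the model yours, and is the data yours? A minimal sketch of that mapping (the function name and boolean encoding are illustrative assumptions; only the quadrant names come from HPE’s matrix):

```python
def ai_factory_quadrant(own_model: bool, own_data: bool) -> str:
    """Map the two ownership questions onto the four-quadrant matrix.

    Illustrative sketch: quadrant names are from the article; the
    function and its signature are assumptions, not an HPE API.
    """
    if own_model:
        # Custom model: unlock value from your own data (Riches), or
        # handle non-owned training data with compliance care (Regulate).
        return "Riches" if own_data else "Regulate"
    # External, pretrained model: pair it with proprietary data (RAG),
    # or simply consume it behind governance guardrails (Run).
    return "RAG" if own_data else "Run"
```

As the article notes, most organizations land in several quadrants at once, so a mapping like this applies per use case, not per company.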
“There’s value in experimentation and kicking ideas around,” he says. “But if you want to really see the benefits of AI, it needs to be something that everybody can engage in and that solves for many different use cases.” In Partridge’s view, the challenge of turning boutique pilots into organization-wide systems is uniquely suited to the IT function’s core competencies—and it’s a leadership opportunity the function can’t afford to sit out. “IT takes things that are small-scale and implements the discipline required to run them at scale,” he says. “So, IT organizations really need to lean into this debate.” For IT teams content to linger on the sidelines, history offers a cautionary tale from the last major infrastructure shift: enterprise migration to the cloud. Many IT departments sat out decision-making during the early cloud adoption wave a decade ago, while business units independently deployed cloud services. This led to fragmented systems, redundant spending, and security gaps that took years to untangle. The same dynamic threatens to repeat with AI, as different teams experiment with tools and models outside IT’s purview. This phenomenon—sometimes called shadow AI—describes environments where pilots proliferate without oversight or governance. Partridge believes that most organizations are already operating in the “Run” quadrant in some capacity, as employees will use AI tools whether or not they’re officially authorized to. Rather than shut down experimentation, IT’s mandate now is to bring structure to it: architecting a data platform strategy that brings enterprise data together with guardrails, a governance framework, and the accessibility needed to feed AI. It is equally critical to keep standardizing infrastructure (such as private cloud AI platforms), protecting data integrity, and safeguarding brand trust, all while enabling the speed and flexibility that AI applications demand.
These are the requirements for reaching the final milestone: AI that’s truly in production. For teams on the path to that goal, Reichenbach distills what success requires. “It comes down to knowing where you play: When to Run external models smarter, when to apply RAG to make them more informed, where to invest to unlock Riches from your own data and models, and when to Regulate what you don’t control,” he says. “The winners will be those who bring clarity to all quadrants and align technology ambition with governance and value creation.” For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent
Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text, or images), and will work like an agent. The previous model, Gemini 2.5, supports multimodal input. Users can feed it images, handwriting, or voice. But it usually requires explicit instructions about the format the user wants back, and it defaults to plain text regardless. Gemini 3, by contrast, introduces what Google calls “generative interfaces,” which allow the model to make its own choices about what kind of output fits the prompt best, assembling visual layouts and dynamic views on its own instead of returning a block of text. Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as “How many days are you traveling?” or “What kinds of activities do you enjoy?” It also presents clickable options based on what you might want next.
When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective. “Visual layout generates an immersive, magazine-style view complete with photos and modules,” says Josh Woodward, VP of Google Labs, Gemini, and AI Studio. “These elements don’t just look good but invite your input to further tailor the results.”
With Gemini 3, Google is also introducing Gemini Agent, an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Similar to other agents, it breaks tasks into discrete steps, displays its progress in real time, and pauses for approval from the user before continuing. Google describes the feature as a step toward “a true generalist agent.” It will be available on the web for Google AI Ultra subscribers in the US starting November 18. The overall approach can seem a lot like “vibe coding,” where users describe an end goal in plain language and let the model assemble the interface or code needed to get there. The update also ties Gemini more deeply into Google’s existing products. In Search, a limited group of Google AI Pro and Ultra subscribers can now switch to Gemini 3 Pro, the reasoning variation of the new model, to receive deeper, more thorough AI-generated summaries that rely on the model’s reasoning rather than the existing AI Mode. For shopping, Gemini will now pull from Google’s Shopping Graph—which the company says contains more than 50 billion product listings—to generate its own recommendation guides. Users just need to ask a shopping-related question or search a shopping-related phrase, and the model assembles an interactive, Wirecutter-style product recommendation piece, complete with prices and product details, without redirecting to an external site. For developers, Google is also pushing single-prompt software generation further. The company introduced Google Antigravity, a development platform that acts as an all-in-one space where code, tools, and workflows can be created and managed from a single prompt. Derek Nee, CEO of Flowith, an agentic AI application, told MIT Technology Review that Gemini 3 Pro addresses several gaps in earlier models. 
Improvements include stronger visual understanding, better code generation, and better performance on long tasks—features he sees as essential for developers of AI apps and agents. “Given its speed and cost advantages, we’re integrating the new model into our product,” he says. “We’re optimistic about its potential, but we need deeper testing to understand how far it can go.”

Networking for AI: Building the foundation for real-time intelligence
In partnership with HPE

The Ryder Cup is an almost-century-old tournament pitting Europe against the United States in an elite showcase of golf skill and strategy. At the 2025 event, nearly a quarter of a million spectators gathered to watch three days of fierce competition on the fairways. From a technology and logistics perspective, pulling off an event of this scale is no easy feat. The Ryder Cup’s infrastructure must accommodate the tens of thousands of network users who flood the venue (this year, at Bethpage Black in Farmingdale, New York) every day. To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution centered on a platform where tournament staff could access data visualization supporting operational decision-making. This dashboard, which leveraged a high-performance network and private-cloud environment, aggregated and distilled insights from diverse real-time data feeds. It was a glimpse into what AI-ready networking looks like at scale—a real-world stress test with implications for everything from event management to enterprise operations. While models and data readiness get the lion’s share of boardroom attention and media hype, networking is a critical third leg of successful AI implementation, explains Jon Green, CTO of HPE Networking. “Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,” he says.
As businesses move toward distributed, real-time AI applications, tomorrow’s networks will need to parse even more massive volumes of information at ever faster speeds. What played out on the greens at Bethpage Black represents a lesson being learned across industries: inference-ready networks are a make-or-break factor for turning AI’s promise into real-world performance.

Making a network AI inference-ready

More than half of organizations are still struggling to operationalize their data pipelines. In a recent HPE cross-industry survey of 1,775 IT leaders, 45% said they could run real-time data pushes and pulls for innovation. That is a noticeable change from last year’s numbers (just 7% reported having such capabilities in 2024), but there is still work to be done to connect data collection with real-time decision-making.
The network may hold the key to further narrowing that gap. Part of the solution will likely come down to infrastructure design. While traditional enterprise networks are engineered to handle the predictable flow of business applications—email, browsers, file sharing, etc.—they’re not designed to field the dynamic, high-volume data movement required by AI workloads. Inferencing in particular depends on shuttling vast datasets between multiple GPUs with supercomputer-like precision. “There’s an ability to play fast and loose with a standard, off-the-shelf enterprise network,” says Green. “Few will notice if an email platform is half a second slower than it might’ve been. But with AI transaction processing, the entire job is gated by the last calculation taking place. So it becomes really noticeable if you’ve got any loss or congestion.” Networks built for AI, therefore, must operate with a different set of performance characteristics, including ultra-low latency, lossless throughput, specialized equipment, and adaptability at scale. A further complication is AI’s distributed nature, which puts a premium on the seamless flow of data. The Ryder Cup was a vivid demonstration of this new class of networking in action. During the event, a Connected Intelligence Center was put in place to ingest data from ticket scans, weather reports, GPS-tracked golf carts, concession and merchandise sales, spectator and consumer queues, and network performance. Additionally, 67 AI-enabled cameras were positioned throughout the course. Inputs were analyzed through an operational intelligence dashboard that provided staff with an instantaneous view of activity across the grounds. “The tournament is really complex from a networking perspective, because you have many big open areas that aren’t uniformly packed with people,” explains Green. “People tend to follow the action.
So in certain areas, it’s really dense with lots of people and devices, while other areas are completely empty.” To handle that variability, engineers built out a two-tiered architecture. Across the sprawling venue, more than 650 WiFi 6E access points, 170 network switches, and 25 user experience sensors worked together to maintain continuous connectivity and feed a private cloud AI cluster for live analytics. The front-end layer connected cameras, sensors, and access points to capture live video and movement data, while a back-end layer—located within a temporary on-site data center—linked GPUs and servers in a high-speed, low-latency configuration that effectively served as the system’s brain. Together, the setup enabled both rapid on-the-ground responses and data collection that could inform future operational planning. “AI models also were available to the team which could process video of the shots taken and help determine, from the footage, which ones were the most interesting,” says Green.

Physical AI and the return of on-prem intelligence

If time is of the essence for event management, it’s even more critical in contexts where safety is on the line—for instance, a self-driving car making a split-second decision to accelerate or brake. In planning for the rise of physical AI, where applications move off screens and onto factory floors and city streets, a growing number of enterprises are rethinking their architectures. Instead of sending the data to centralized clouds for inference, some are deploying edge-based AI clusters that process information closer to where it is generated. Data-intensive training may still occur in the cloud, but inferencing happens on-site.
This hybrid approach is fueling a wave of operational repatriation, as workloads once relegated to the cloud return to on-premises infrastructure for speed, security, sovereignty, and cost reasons. “We’ve had an out-migration of IT into the cloud in recent years, but physical AI is one of the use cases that we believe will bring a lot of that back on-prem,” predicts Green, giving the example of an AI-infused factory floor, where a round-trip of sensor data to the cloud would be too slow to safely control automated machinery. “By the time processing happens in the cloud, the machine has already moved,” he explains. There’s data to back up Green’s projection: research from Enterprise Research Group shows that 84% of respondents are reevaluating application deployment strategies due to the growth of AI. Market forecasts also reflect this shift. According to IDC, the AI infrastructure market is expected to reach $758 billion by 2029.

AI for networking and the future of self-driving infrastructure

The relationship between networking and AI is circular: modern networks make AI at scale possible, but AI is also helping make networks smarter and more capable. “Networks are some of the most data-rich systems in any organization,” says Green. “That makes them a perfect use case for AI. We can analyze millions of configuration states across thousands of customer environments and learn what actually improves performance or stability.” At HPE, for example, which has one of the largest network telemetry repositories in the world, AI models analyze anonymized data collected from billions of connected devices to identify trends and refine behavior over time. The platform processes more than a trillion telemetry points each day, which means it can continuously learn from real-world conditions. The concept broadly known as AIOps (or AI-driven IT operations) is changing how enterprise networks are managed across industries.
Today, AI surfaces insights as recommendations that administrators can choose to apply with a single click. Tomorrow, those same systems might automatically test and deploy low-risk changes themselves. That long-term vision, Green notes, is referred to as a “self-driving network”—one that handles the repetitive, error-prone tasks that have historically plagued IT teams. “AI isn’t coming for the network engineer’s job, but it will eliminate the tedious stuff that slows them down,” he says. “You’ll be able to say, ‘Please go configure 130 switches to solve this issue,’ and the system will handle it. When a port gets stuck or someone plugs a connector in the wrong direction, AI can detect it—and in many cases, fix it automatically.” Digital initiatives now depend on how effectively information moves. Whether coordinating a live event or streamlining a supply chain, the performance of the network increasingly defines the performance of the business. Building that foundation today will separate those who pilot from those who scale AI.
For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Some load forecasts using ‘unrealistically high load factors’: Grid Strategies VP
Dive Brief: Significant load growth is likely to arrive as forecast, but uncertainties associated with data centers are complicating load growth estimation, as are “unrealistically high load factors for the new large loads” in some load forecasts, said John Wilson, a vice president at Grid Strategies. Wilson is one of the lead authors of a November report which found the five-year forecast of U.S. utility peak load growth has increased from 24 GW to 166 GW over the past three years — by more than a factor of six. The report concluded that the “data center portion of utility load forecasts is likely overstated by roughly 25 GW,” based on reports from market analysts. Dive Insight: Despite projected load growth, many utility third-quarter earnings reports have shown relatively flat deliveries of electricity. Wilson said he thinks a definitive answer as to whether or not load growth is materializing will come next year. “If [large loads] start to get put off or canceled, and the load doesn’t come in, then we could see a lot of revisions to forecasts that are really large,” he said. The utility forecast for added data center load by 2030 is 90 GW, “nearly 10% of forecast peak load,” the report said, but “data center market analysts indicate that data center growth is unlikely to require much more than 65 GW through 2030.” Wilson said he thinks the overestimation could be due “simply to the challenge that utilities have in understanding whether a potential customer is pursuing just the site in their service area, or whether they’re pursuing multiple sites and they’re not planning on building out all of them.” This is information that utilities haven’t typically gathered, he said, although he’s seeing a trend toward utilities making those questions part of their application process. Wilson said another factor
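The report’s headline figures can be sanity-checked with simple arithmetic (numbers as quoted above; this check is illustrative and not part of the report):

```python
# Five-year U.S. utility peak load growth forecast (GW), per the article.
forecast_three_years_ago_gw = 24
forecast_now_gw = 166
growth_factor = forecast_now_gw / forecast_three_years_ago_gw
print(f"Forecast grew by a factor of {growth_factor:.1f}")  # ~6.9, i.e. "more than six"

# Added data center load by 2030: utility forecasts vs. market analysts.
utility_data_center_forecast_gw = 90
analyst_estimate_gw = 65
overstatement_gw = utility_data_center_forecast_gw - analyst_estimate_gw
print(f"Implied overstatement: ~{overstatement_gw} GW")  # the report's "roughly 25 GW"
```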

AWS boosts its long-distance cloud connections with custom DWDM transponder
By controlling the entire hardware stack, AWS can implement comprehensive security measures that would be challenging with third-party solutions, Rehder stated. “This initial long-haul deployment represents just the first implementation of the in-house technology across our extensive long-haul network. We have already extended deployment to Europe, with plans to use the AWS DWDM transponder for all new long-haul connections throughout our global infrastructure,” Rehder wrote. Cloud vendors are some of the largest optical users in the world, though not all develop their own DWDM or other optical systems, according to a variety of papers on the subject. Google develops its own DWDM, for example, but others like Microsoft Azure develop only parts and buy optical gear from third parties. Others such as IBM, Oracle and Alibaba have optical backbones but also utilize third-party equipment. “We are anticipating that the time has come to interconnect all those new AI data centers being built,” wrote Jimmy Yu, vice president at Dell’Oro Group, in a recent optical report. “We are forecasting data center interconnect to grow at twice the rate of the overall market, driven by increased spending from cloud providers. The direct purchases of equipment for DCI will encompass ZR/ZR+ optics for IPoDWDM, optical line systems for transport, and DWDM systems for high-performance, long-distance terrestrial and subsea transmission.”

Scaling innovation in manufacturing with AI
In partnership with Microsoft and NVIDIA

Manufacturing is getting a major system upgrade. As AI amplifies existing technologies—like digital twins, the cloud, edge computing, and the industrial internet of things (IIoT)—it is enabling factory operations teams to shift from reactive, isolated problem-solving to proactive, systemwide optimization. Digital twins—physically accurate virtual representations of a piece of equipment, a production line, a process, or even an entire factory—allow workers to test, optimize, and contextualize complex, real-world environments. Manufacturers are using digital twins to simulate factory environments with pinpoint detail. “AI-powered digital twins mark a major evolution in the future of manufacturing, enabling real-time visualization of the entire production line, not just individual machines,” says Indranil Sircar, global chief technology officer for the manufacturing and mobility industry at Microsoft. “This is allowing manufacturers to move beyond isolated monitoring toward much wider insights.” A digital twin of a bottling line, for example, can integrate one-dimensional shop-floor telemetry, two-dimensional enterprise data, and three-dimensional immersive modeling into a single operational view of the entire production line to improve efficiency and reduce costly downtime. Many high-speed industries face downtime rates as high as 40%, estimates Jon Sobel, co-founder and chief executive officer of Sight Machine, an industrial AI company that partners with Microsoft and NVIDIA to transform complex data into actionable insights. By tracking micro-stops and quality metrics via digital twins, companies can target improvements and adjustments with greater precision, saving millions in once-lost productivity without disrupting ongoing operations.
AI offers the next opportunity. Sircar estimates that up to 50% of manufacturers are currently deploying AI in production, up from the 35% of manufacturers surveyed for a 2024 MIT Technology Review Insights report who said they had begun putting AI use cases into production. Larger manufacturers with more than $10 billion in revenue were significantly ahead, with 77% already deploying AI use cases, according to the report. “Manufacturing has a lot of data and is a perfect use case for AI,” says Sobel. “An industry that has been seen by some as lagging when it comes to digital technology and AI may be in the best position to lead. It’s very unexpected.” Download the report. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Winter peak demand is rising faster than resource additions: NERC
Dive Brief: Peak demand on the bulk power system will be 20 GW higher this winter than last, but total resources to meet the peak have only increased 9.4 GW, according to a report released Tuesday by the North American Electric Reliability Corp. Despite the mismatch, all regions of the bulk power system should have sufficient resources for expected peak demand this winter, NERC said in its 2025-2026 Winter Reliability Assessment. However, several regions could face challenges in the event of extreme weather. There have been 11 GW of batteries and 8 GW of demand response resources added to the bulk power system since last winter, NERC said. Solar, thermal and hydro have also seen small additions, but contributions from wind resources are 14 GW lower following capacity accounting changes in some markets. Dive Insight: NERC officials described a mixed bag heading into the winter season. “The bulk power system is entering another winter with pockets of elevated risk, and the drivers are becoming more structural than seasonal,” said John Moura, NERC’s director of reliability assessments and performance analysis. “We’re seeing steady demand growth, faster than previous years, landing on a system that’s still racing to build new resources, navigating supply chain constraints and integrating large amounts of variable, inverter-based generation.” Aggregate peak demand across NERC’s footprint will be 20 GW, or 2.5%, higher than last winter. “Essentially, you have a doubling between the last several successive [winter reliability assessments],” said Mark Olson, NERC’s manager of reliability assessment. Nearly all of NERC’s assessment areas “are reporting year-on-year demand growth with some forecasting increases near 10%,” the reliability watchdog said. The U.S. West, Southeast and Mid-Atlantic — areas with significant data center development — have
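The assessment’s figures imply a widening gap between demand growth and resource additions. A quick check of the numbers quoted above (the implied aggregate peak is an inference from the 20 GW / 2.5% figures, not a value NERC states here):

```python
demand_growth_gw = 20.0    # winter peak demand growth vs. last winter
resource_growth_gw = 9.4   # total resource additions to meet the peak
gap_gw = demand_growth_gw - resource_growth_gw
print(f"Demand growth outpaces resource additions by {gap_gw:.1f} GW")  # 10.6 GW

# NERC quotes the 20 GW increase as 2.5% of aggregate peak demand,
# which implies a footprint-wide winter peak on the order of:
implied_peak_gw = demand_growth_gw / 0.025
print(f"Implied aggregate peak: ~{implied_peak_gw:.0f} GW")  # ~800 GW
```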

Energy Secretary Strengthens Midwest Grid Reliability Heading into Winter Months
WASHINGTON—U.S. Secretary of Energy Chris Wright issued an emergency order to address critical grid reliability issues facing the Midwestern region of the United States heading into the cold winter months. The emergency order directs the Midcontinent Independent System Operator (MISO), in coordination with Consumers Energy, to ensure that the J.H. Campbell coal-fired power plant in West Olive, Michigan remains available for operation and to take every step to minimize costs for the American people. The Campbell Plant was scheduled to shut down on May 31, 2025 — 15 years before the end of its scheduled design life. “Because of the last administration’s dangerous energy subtraction policies targeting reliable and affordable energy sources, the United States continues to face an energy emergency,” said Energy Secretary Wright. “The Trump administration will keep taking action to reverse these energy subtraction policies, lowering energy costs and minimizing the risks of blackouts. Americans deserve access to affordable, reliable and secure energy regardless of whether the wind is blowing or the sun is shining, especially in dangerously cold weather.” Since the Department of Energy’s (DOE) original order issued on May 23, the Campbell plant has proven critical to MISO’s operations, operating regularly during periods of high energy demand and low levels of intermittent energy production. A subsequent order was issued on August 20, 2025. As outlined in DOE’s Resource Adequacy Report, power outages could increase by 100 times in 2030 if the U.S. continues to take reliable power offline. The emergency conditions that led to the issuance of the original orders persist. MISO’s service area will continue to face emergency conditions both in the near and long term. Two recent winter studies (2024 – 2025 NERC Winter Reliability Assessment and the 2023 – 2024 NERC Winter Reliability Assessment) have assessed the MISO assessment area as an elevated risk, with the “potential

US Risks Winter Blackouts on Data Center Demand
Rising electricity demand from data centers is raising the risk of blackouts across a wide swath of the US during extreme conditions this winter, according to the regulatory body overseeing grid stability. Power consumption has grown by 20 gigawatts from the previous winter, the North American Electric Reliability Corp. said Tuesday in its winter assessment. A gigawatt is the typical size of a nuclear power reactor. Supply hasn’t kept up. As a result, a repeat of severe winter storms in North America that unleash a polar vortex, of which there have been several in recent years, could trigger energy shortfalls across the US from the Northwest to Texas to the Carolinas. All regions have adequate resources in normal conditions. “Data centers are a main contributor to load growth in those areas where demand has risen substantially since last winter,” Mark Olson, manager of the reliability assessment, said in an emailed statement. America’s power grid has been facing rising blackout risks for years as aging infrastructure is increasingly stressed by severe storms and wildfires. Now the data center boom, driven by the spread of artificial intelligence, is adding to the strain by supercharging US electricity growth after two decades of stagnation. Winter is especially risky because solar generation is available for fewer hours and battery operations may be affected. Gas supplies, meantime, could drop off because of freeze-offs or pipeline constraints. The areas designated by NERC as having elevated risks of shortfall shifted from the previous winter to include the US southeast and parts of the West, including Washington and Oregon. The Texas grid continues to be highlighted after cascading failures in February 2021 left millions of people without power for days and resulted in more than 200 deaths. New England also continues to face elevated risks on potential natural gas pipeline
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters, and Energy industry news. Spend 3-5 minutes and catch up on a week of news.