Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI
Bitcoin
Datacenter
Energy
Featured Articles

Secretary Wright Issues Emergency Order to Secure Southeast Power Grid Amid Heat Wave
WASHINGTON—The Department of Energy (DOE) today issued an emergency order authorized by Section 202(c) of the Federal Power Act to address potential grid shortfall issues in the Southeast U.S. The order, issued amid surging power demand, will help mitigate the risk of blackouts brought on by high temperatures across the Southeast region.

“As electricity demand reaches its peak, Americans should not be forced to wonder if their power grid can support their homes and businesses. Under President Trump’s leadership, the Department of Energy will use all tools available to maintain a reliable, affordable, and secure energy system for the American people,” said U.S. Secretary of Energy Chris Wright. “This order ensures Duke Energy Carolinas can supply its customers with consistent and reliable power throughout peak summer demand.”

The order authorizes Duke Energy Carolinas to operate specific electric generating units within its service area at maximum output in response to ongoing extreme weather conditions and to preserve the reliability of the bulk electric power system. Orders such as this, issued by the Office of Cybersecurity, Energy Security, and Emergency Response (CESER), are in accordance with President Trump’s executive order Declaring a National Energy Emergency and will ensure the availability of generation needed to meet high electricity demand and minimize the risk of blackouts. The order is in effect from June 24 to June 25, 2025.

Background: FPA Section 202(c) allows DOE to support energy companies during emergencies, by waiving federal, state, or local environmental laws and regulations, when those companies would otherwise be unable to supply their customers with reliable, consistent power. The waivers carry limitations to ensure that public safety and the public interest are prioritized.

Pentagon-backed battery innovation facility opens at UT Dallas
Dive Brief: The University of Texas at Dallas earlier this month announced the opening of its Batteries and Energy to Advance Commercialization and National Security – or BEACONS – facility, which aims to help commercialize new battery technologies, China-proof the lithium-ion battery supply chain and bolster the national battery workforce. The facility is funded by a $30 million award from the U.S. Department of Defense and is collaborating with industry partners including Associated Universities Inc. and LEAP Manufacturing.

“We want to have that supply chain resilience and independence from the Chinese supply chain,” said BEACONS Director Kyeongjae Cho. “So that even if things really go bad and China decides to cut off [access to] all of these critical mineral supplies, the [domestic battery supply] will not be impacted by that, especially those going to defense applications.”

Dive Insight: DOD generates substantial battery demand, Cho said, due to its need to operate energy-intensive technology in the field. The Pentagon’s battery supply chain is set to shrink after the 2024 National Defense Authorization Act barred DOD from procuring batteries from some Chinese-owned entities starting in October 2027. The banned suppliers are China’s Contemporary Amperex Technology, BYD, Envision Energy, EVE Energy Company, Gotion High-tech and Hithium Energy Storage. China currently dominates the “active materials production portion” of the lithium battery supply chain, according to a 2024 article from the Center for Strategic and International Studies.

“Previously, a lot of defense applications were purchasing batteries from Chinese manufacturers,” Cho said. “So that’s creating this dependence on the Chinese supply, and under the unlikely but unfavorable scenario, our defense would be stuck in their supply chain. That’s something we want to avoid.”

The program is particularly focused on advancing solid-state battery technology, which is more commonly used for drones and defense applications.

HPE announces GreenLake Intelligence, goes all-in with agentic AI
Like a teammate who never sleeps

Agentic AI is coming to Aruba Central as well, with an autonomous supervisory module talking to multiple specialized models to, for example, determine the root cause of an issue and provide recommendations. David Hughes, SVP and chief product officer, HPE Aruba Networking, said, “It’s like having a teammate who can work while you’re asleep, work on problems, and when you arrive in the morning, have those proposed answers there, complete with chain of thought logic explaining how they got to their conclusions.”

Several new services for FinOps and sustainability in GreenLake Cloud are also being integrated into GreenLake Intelligence, including a new workload and capacity optimizer, extended consumption analytics to help organizations control costs, and predictive sustainability forecasting and a managed service mode in the HPE Sustainability Insight Center.

In addition, when it is released in the fourth quarter of 2025, an update to the OpsRamp operations copilot, launched in 2024, will enable agentic automation, including conversational product help and an agentic command center that enables AI/ML-based alerts, incident management, and root cause analysis across the infrastructure. OpsRamp is now a validated observability solution for the Nvidia Enterprise AI Factory. It will also be part of the new HPE CloudOps software suite, available in the fourth quarter, which will include HPE Morpheus Enterprise and HPE Zerto. HPE said the new suite will provide automation, orchestration, governance, data mobility, data protection, and cyber resilience for multivendor, multi-cloud, multi-workload infrastructures.

Matt Kimball, principal analyst for datacenter, compute, and storage at Moor Insights & Strategy, sees HPE’s latest announcements aligning nicely with enterprise IT modernization efforts, using AI to optimize performance. “GreenLake Intelligence is really where all of this comes together. I am a huge fan of Morpheus in delivering an agnostic orchestration plane, regardless of operating stack.”
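To make the supervisor idea above concrete, here is a minimal sketch of the pattern in Python: one coordinating routine consults specialized analyzers and returns a recommendation with its reasoning attached. The function names, findings, and routing logic are invented for illustration; this is not HPE’s code.

    # Generic supervisor-agent pattern: a coordinator queries specialized
    # analyzers and assembles a recommendation with its chain of thought.
    # All names and findings here are illustrative stand-ins.
    def network_specialist(issue):
        return "Switch uplink flapping on port 12."

    def config_specialist(issue):
        return "MTU mismatch introduced in last night's change window."

    def supervisor(issue):
        findings = [fn(issue) for fn in (network_specialist, config_specialist)]
        return {
            "root_cause": findings[-1],
            "recommendation": "Roll back the MTU change and monitor port 12.",
            "chain_of_thought": " -> ".join(findings),
        }

    print(supervisor("Users report intermittent Wi-Fi drops overnight."))

In a real system each specialist would be a model call rather than a hard-coded string, but the shape is the same: the supervisor aggregates findings and hands back both a conclusion and the reasoning trail Hughes describes.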

US, global cities tout emissions reductions
Dive Brief: Nearly 100 major city members of C40 Cities worldwide reduced per-capita emissions by 7.5% between 2015 and 2024, according to a report that group and the Global Covenant of Mayors for Climate and Energy released Monday. Representatives from C40 Cities, Climate Mayors, the U.S. Climate Alliance and America Is All In are attending global climate events this month to fill a void “when the U.S. federal government is stepping back from leadership on climate action,” the groups said in a joint press release. Climate Mayors Executive Director Kate Wright said that “despite the federal government abandoning its responsibilities on climate,” the U.S. could achieve 54% to 62% emissions reductions by 2035 “with strong climate leadership at the state and local levels.”

Dive Insight: Phoenix Mayor Kate Gallego, the chair of Climate Mayors, a bipartisan network of nearly 350 U.S. mayors, said in a statement that cities like Phoenix are on the front lines of climate change and mayors “have a unique responsibility — and opportunity — to drive meaningful solutions.” “By partnering with other local leaders around the world, we can exchange ideas, scale innovations, and build a united front for meaningful action,” she said.

Gallego and Wright are among the delegation of U.S. mayors and climate leaders attending the U.N. Framework Convention on Climate Change’s June Climate Meetings in Bonn, Germany; London Climate Action Week; and a Paris Agreement 10-year anniversary event, all in the second half of June. Since the treaty was signed, global clean energy investment has increased tenfold, to $2 trillion, U.N. Climate Change Executive Secretary Simon Stiell said at the anniversary event in Bonn on Saturday. On Jan. 20, President Donald Trump signed an executive order withdrawing the U.S. from the U.N. Framework Convention on Climate Change’s Paris Agreement, an international treaty on climate change.

Nuclear regulators lighten microreactor restrictions
Dive Brief: Technicians can load nuclear fuel into certain factory-made microreactors without triggering the strict regulations that apply to conventional nuclear reactors in operation, the Nuclear Regulatory Commission said on June 17. The new policy allows manufacturers to fuel microreactors designed with safety features to prevent criticality, or a self-sustaining chain reaction within the fuel. A related policy the commissioners approved on June 17 allows operational testing of commercial microreactors under the more lenient regulations governing non-commercial research and test reactors. The commissioners also directed NRC staff to consider whether these and other proposed microreactor licensing and oversight strategies could apply to other reactor types, including large power reactors.

Dive Insight: Microreactors are smaller and generate far less power than conventional large reactors and small modular reactors — typically 20 MW or less of thermal energy, according to the U.S. Department of Energy. Microreactors’ transportability, interchangeability and longer fuel lifecycles make them ideal for use in microgrids and for emergency response missions, DOE says. Microreactors may also be useful for non-electric, direct-heat applications like hydrogen production, water desalination and district heating, DOE says. At least three cities in Finland are considering microreactor-powered district heating systems using a locally developed, 50-MW thermal reactor design.

Microreactors “offer a potential solution to achieve deep decarbonization of our larger energy system that powers our industries and our manufacturers,” NRC Commissioner Matthew Marzano said during an April 10 NRC stakeholder meeting on microreactor technology and regulation. There are no microreactors in commercial operation today, but small reactors have powered U.S. Navy ships and submarines for decades. The Department of Defense is pursuing multiple microreactor initiatives, including Project Pele, which could see a working reactor commissioned next year at Idaho National Laboratory. Current NRC regulations for commercial nuclear reactors developed over decades to manage the risks of large, conventional power reactors.

DOE grants Duke Energy authority to exceed power plant permit limits during extreme heat
The U.S. Department of Energy on Tuesday issued an emergency order allowing Duke Energy to exceed emissions limits in its power plant permits in the Carolinas during a heat wave. The emergency order expires at 10 p.m. on Wednesday, when the heat and humidity are expected to ease, according to DOE. The order, issued under the Federal Power Act’s Section 202(c), will help reduce the risk of blackouts brought on by high temperatures across the Southeast region, the department said.

Under the order, Duke will be allowed to exceed power plant emissions limits when it declares an Energy Emergency Alert Level 2, which it expects to do, DOE said. The North American Electric Reliability Corp. defines an EEA-2 as a condition in which a grid operator cannot provide its expected energy requirements but is still able to maintain minimum contingency reserve requirements, according to the PJM Interconnection. Once Duke declares that the EEA Level 2 event has ended, its generating units would be required to immediately return to operation within their permitted limits, the department said.

In its request to DOE for the emergency order, Duke said about 1,500 MW of its power plants in the Carolinas are offline, while other generating units may be limited by conditions and limitations in their environmental permits, according to the department. DOE issued a similar order for Duke Energy Florida in October in response to Hurricane Milton. DOE has also issued 90-day emergency orders to keep generating units that were set to retire on May 30 operating this summer in Michigan and Pennsylvania.


Oil Prices Dropped ‘Significantly’ Following Ceasefire News
Oil prices dropped “significantly” following news of an Iran-Israel ceasefire, Erik Meyersson, Chief EM Strategist at Skandinaviska Enskilda Banken AB (SEB), said in a report sent to Rigzone by the SEB team on Tuesday. Meyersson warned in the report that future talks will be difficult if negotiation positions remain the same, adding that “all parties have an interest in ending the fighting, but need to overcome the crux of nuclear enrichment in Iran, as well as its ballistic missile program, in order for a durable deal [to be reached]”.

“Snapback sanctions and its consequences still remain an issue, and a ceasefire without a diplomatic breakthrough opens for Iranian rearmament and rebuilding of its nuclear facilities,” he added. “This puts the current ceasefire on precarious grounds, and absent a diplomatic breakthrough, the Iran-Israel War may not yet be over,” Meyersson warned.

Meyersson said in the report that the U.S. has from the onset sought minimal participation in the war and noted that it will be keen to limit involvement to avoid dissent over the war within its own political base. “In addition, an ongoing conflict would pose risks to the global supply of oil,” Meyersson highlighted in the report. “With an Iranian regime increasingly backed against a wall, concerns over whether it could target shipping or energy infrastructure in the region had been on the rise,” he pointed out. “Oil prices were trending higher which, if sustained, could have pushed global inflation rates up and central banks toward a higher policy rate trajectory,” he added.

In a market update sent to Rigzone by the Rystad Energy team late Monday, Rystad noted that oil prices tumbled yesterday after Iran’s retaliatory strikes on the U.S., “signaling a possible desire from Iran to de-escalate by inflicting minimal damage to U.S. infrastructure in the region”. “Rystad Energy maintains

Trump Says Israel, Iran Reached Ceasefire
US President Donald Trump said a ceasefire is now in place between Iran and Israel, moments after Israeli emergency services said at least three people were killed by Iranian strikes. “THE CEASEFIRE IS NOW IN EFFECT,” Trump said at around 9:10 a.m. Dubai time on Truth Social. “PLEASE DO NOT VIOLATE IT!” Prime Minister Benjamin Netanyahu confirmed a short while later that Israel had agreed to a truce and said his country had achieved its war goals in Iran.

The comments came after Tehran fired several waves of missiles at Israel on Tuesday morning. Israel also carried out further attacks on Iran. The truce followed an extraordinary night in which Tehran retaliated against a US attack over the weekend by launching missiles at an American air base in Qatar. The Islamic Republic’s move was telegraphed – with Qatar and the US being forewarned – and there were no casualties. Trump said the strike at Qatar was “weak” and that Iran had “gotten it out of their system.” He even thanked Tehran for “giving us early notice.”

Oil prices plunged when it became clear the strikes on Qatar weren’t deadly, with traders taking it as a sign that Iran had no intention of escalating tensions with Washington, let alone engulfing other countries in the oil-rich region in a wider war. Brent fell more than 5 percent to around $67.90 a barrel in early trading on Tuesday, following a drop of more than 7 percent on Monday. It’s now back to the level it was at before Israel started attacking Iran on June 13.

Israel was still striking targets in Iran early on Tuesday, but the explosions in Tehran seemed to stop at about 4 a.m. local time, the BBC reported, citing local residents. While Israeli officials remained silent overnight, a senior White House official said Trump brokered the ceasefire in a direct

Iberdrola to Power Renfe through Onshore Wind
Iberdrola España S.A.U. has agreed to sell onshore wind energy to Renfe Group through a Virtual Power Purchase Agreement (VPPA). Over 10 years, Iberdrola will deliver 360 gigawatt hours (GWh) per year to Renfe.

“With this agreement with Renfe, we can highlight electrification without emissions in transport, which is responsible for more than a third of all energy consumed in our country. Partnerships such as this one are essential to support the development of a renewable electricity mix. The PPAs, through their different modalities, are a key tool for customers who want to secure renewable energy at a fixed, long-term price,” David Martínez, director for clients in Spain at Iberdrola, said in a media release.

Iberdrola said it has over a decade of PPA experience, managing agreements across Spain, Portugal, Germany, Italy, the UK, the US, Brazil, Mexico, and Australia for wind and solar projects. Iberdrola claims it is Europe’s top power utility by market value and second globally. For the second year, Iberdrola remains the leader in the European PPA market, with 1,251 MW contracted in 2024, a 38 percent increase from 2023, the company said.

“This agreement is another step along Renfe’s energy management roadmap. This agreement promotes renewable energy production projects, while at the same time stabilizing the price of Renfe’s energy, which we consider very positive for the company’s management,” Marta Torralvo, Renfe’s Chief Financial Officer, said. “The high volatility of energy prices was causing significant alterations and uncertainty in the evolution of results, and in this way, these costs are now predictable. Agreements such as the one signed with Iberdrola España help us to advance along this path. I should mention that Renfe trains are the passenger and freight transport system in Spain that consumes the least energy per unit transported. In fact, their carbon

EDF, ESB Secure Rights for Massive Floating Wind Farm in Celtic Sea
EDF Renewables Ltd. and joint venture (JV) partner ESB Energy Ltd. have secured the rights to develop the Gwynt Glas Floating Offshore Wind Farm in the Celtic Sea through The Crown Estate’s Leasing Round 5. The JV said in a media release that the project has the potential to generate up to 1.5 gigawatts (GW) while bringing significant benefits to communities across South Wales and South West England. Following over three years of stakeholder engagement, the Gwynt Glas project was chosen through a competitive seabed tender process. As part of this process, Gwynt Glas presented proposals for developing the wind farm, alongside plans to maximize socio-economic and social value opportunities, the JV said.

Matthieu Hue, CEO of EDF Renewables UK, said, “We look forward to further developing the Gwynt Glas offshore wind farm, helping the UK maintain a market-leading position in floating wind and recognizing the important role that floating wind can play in the UK’s ambition towards reaching net zero.”

“The Celtic Sea is of strategic importance to ESB given its location adjacent to Ireland, and the opportunities to develop a floating offshore project in what we believe to be an ideal area bode well for our ambitions to develop a portfolio of floating offshore wind projects in Ireland and the UK to contribute to the net zero plans for both countries as well as those of ESB,” Jim Dollard, Executive Director at ESB, added.

Gwynt Glas allows EDF Renewables to leverage its experience from building France’s first floating offshore wind farm. Development and consent activities will follow UK guidelines, including stakeholder consultation. The process is expected to take three to five years, with operation possibly starting in the early 2030s, the JV said.

Goldman Flags Scope for Higher Oil and Gas on Mideast Scenarios
Goldman Sachs Group Inc. flagged the possibility of higher oil and gas prices after the US struck Iran, even as the bank’s base-case outlook assumes no major disruptions to supplies from the region. If oil flows through the Strait of Hormuz were to drop by half for a month and remain 10% lower for another 11 months, Brent would spike briefly to as much as $110 a barrel, analysts including Daan Struyven said in a note. Should Iranian supply fall by 1.75 million barrels a day, Brent would peak at $90.

The global oil market is trying to figure out the likely trajectory for energy prices as the crisis in the Middle East escalates. Crude futures are presently near $79 a barrel, having surged in early Asian trading after the US hit three Iranian nuclear sites at the weekend. Brent then pared some of its gains, with a renewed focus on the fact that actual flows are so far unhindered. “The economic incentives, including for the US and China, to try to prevent a sustained and very large disruption of the Strait of Hormuz would be strong,” the analysts said. The bank still assumes there’ll be no significant disruptions to flows, although “the downside risks to energy supply and the upside risk to our energy price forecasts have risen,” they said.

Natural-gas markets are also seen at risk. European benchmark futures — known as the Title Transfer Facility, or TTF — may possibly rise closer to €74 per megawatt-hour, or about $25 per million British thermal units, a level that hurt demand during the 2022 European energy crisis, the analysts said. A hypothetical, large and sustained disruption of the strait would push natural gas toward €100 per megawatt-hour, they said. The waterway connects the Persian Gulf to the Indian Ocean and is a vital conduit for global oil and liquefied natural gas shipments.
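For readers checking the unit math behind those TTF figures, one megawatt-hour is about 3.412 million British thermal units (MMBtu). A minimal sketch of the conversion, assuming a round exchange rate of $1.15 per euro (the note does not give one):

    # Convert a TTF gas price quoted in EUR/MWh to USD/MMBtu.
    # 1 MWh = 3.412 MMBtu; the exchange rate below is an assumption
    # for illustration, not a figure from the Goldman note.
    eur_per_mwh = 74.0
    mmbtu_per_mwh = 3.412
    usd_per_eur = 1.15  # assumed rate

    usd_per_mmbtu = eur_per_mwh / mmbtu_per_mwh * usd_per_eur
    print(f"~${usd_per_mmbtu:.0f} per MMBtu")  # ~$25, matching the note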

Oil Tumbles 7% After Iran Spares Energy Sites
Oil plunged by more than 7% as Iran’s response to US military strikes spared energy infrastructure, allaying investor concerns that the conflict would severely disrupt supplies from the Middle East. West Texas Intermediate slumped below $70 a barrel after Iran fired missiles at a US air base in Qatar in retaliation for President Donald Trump’s weekend airstrikes on three of its nuclear facilities. Traders had initially feared that Iran’s retaliatory response would involve a closure of the Strait of Hormuz chokepoint, through which about a fifth of the world’s oil passes. The barrage was intercepted and did not result in any casualties, Qatar said.

Prices traded in a $10-a-barrel range on Monday, first rising by more than 6% only to drop even more, underscoring just how on edge traders are and how critical every development in the region is to global energy markets. “Crude is pulling back as the market digests signs that energy infrastructure is not Iran’s first choice for retaliation,” said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. “There are indications the US may have had advance warning of the strikes, suggesting this was more of a face-saving move than a true escalation.”

The Middle East accounts for about a third of global crude production, and there haven’t yet been any signs of disruption to physical oil flows, including for cargoes going through the Strait of Hormuz. Since Israel’s attacks began earlier this month, there have been signs that Iranian oil shipments out of the Gulf have risen rather than declined. While prices may be cooling for now, significant supply threats linger as tensions remain high across the Middle East. Saudi Arabia condemned Iran’s attack on Qatar and said it was ready to support Qatar with any measures it takes. Worries about demand are

AI means the end of internet search as we’ve known it
We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way. But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way. Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results. More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
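One way to picture what such a system has to do under the hood is to break a compound request into separately searchable questions, answer each, and compose the results. A toy sketch in Python, purely illustrative; Google has not described AI Overviews as working this way, and a production system would use a language model rather than a regex to extract intents:

    # Naive decomposition of a compound question into sub-queries.
    # A real system would use an LLM for intent extraction; this is
    # just a sentence split to make the idea concrete.
    import re

    def split_into_subqueries(question):
        parts = re.split(r"(?<=[?.])\s+", question)
        return [p for p in parts if p.endswith("?")]

    trip = ("I'm going to Japan for one week next month. I'll be staying in "
            "Tokyo but would like to take some day trips. Are there any "
            "festivals happening nearby? How will the surfing be in Kamakura? "
            "Are there any good bands playing?")

    for sub in split_into_subqueries(trip):
        print("search:", sub)  # each sub-query is retrieved, then composed

Each extracted question would be run as its own retrieval, with the model stitching the answers into one response.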
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.

Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.

I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.

It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.”

But this isn’t just about publishers (or my own self-interest). People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see.
Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate. Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.

Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.

And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was.

But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.

But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad.
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.

For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
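That citation idea was formalized as PageRank. A minimal power-iteration sketch in Python, using the 0.85 damping factor from the original PageRank paper and a toy link graph; this is the published textbook algorithm, not Google’s production ranking system:

    # Minimal PageRank: a page's score is built from the scores of the
    # pages that link to it. The link graph below is illustrative.
    links = {
        "a.com": ["b.com", "c.com"],
        "b.com": ["c.com"],
        "c.com": ["a.com"],
        "d.com": ["c.com"],
    }

    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += damping * share
            rank = new
        return rank

    for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")

Run it and c.com, the most-cited page, comes out on top, which is the whole insight in miniature: links act as votes.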
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search. “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.

It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.

But once you’ve used AI Overviews a bit, you realize they are different. Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web.
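The distinction is easy to see in code. A toy sketch, assuming a two-document corpus and a stubbed-out language model: the snippet path returns a verbatim, attributable passage, while the overview path feeds retrieved text to a model that composes something new each time.

    # Extractive snippet vs. generated answer, in miniature.
    # The corpus, scoring, and the generate() stub are illustrative.
    corpus = {
        "fda.gov/caffeine": "The FDA cites 400 milligrams a day as a safe "
                            "caffeine level for healthy adults.",
        "coffee-blog.example": "Some people drink ten cups a day with no "
                               "trouble at all.",
    }

    def featured_snippet(query):
        # Extractive: a verbatim passage plus a knowable, fixable source.
        def overlap(text):
            return len(set(query.lower().split()) & set(text.lower().split()))
        url = max(corpus, key=lambda u: overlap(corpus[u]))
        return corpus[url], url

    def ai_overview(query):
        # Generative: retrieved passages become context for a language
        # model, which writes a fresh answer. The model call is stubbed.
        context = " ".join(corpus.values())
        prompt = f"Answer '{query}' using only this context: {context}"
        return prompt  # a real system would return llm(prompt)

    snippet, source = featured_snippet("how much caffeine is safe per day")
    print(snippet, "--", source)
    print(ai_overview("how much caffeine is safe per day"))

The first path can always be traced back to a URL; the second produces text that exists nowhere in the index, which is exactly the property the article is describing.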
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)

“[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even showed off the ability to query live video.

“We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.

There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.

Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong.

Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood.

This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak.
“And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. “You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.”

There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”

But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way.
What reason will people have to click through to the original source, if all the information they seek is right there in the search result? Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend. “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.

Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”

“I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?

A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.

According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting.
Rather, it says, web search is mostly a means to get more current information than its models’ training data, which has cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience. Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.

“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search Liz Reid.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation.
The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed! The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers. It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge. The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed. “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things deliver answers are evolving rapidly too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices.
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information. In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses. But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today. These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.
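The pattern described throughout this piece (retrieve sources, have a model synthesize an answer, attach links) can be sketched in a few lines of Python. The sketch below is illustrative only, not Google's or OpenAI's actual pipeline: the toy index, the word-overlap retrieval, and the stubbed generation step are all assumptions standing in for a web-scale index and a real LLM.

```python
# Illustrative sketch of the loop generative search engines run:
# retrieve candidate sources, have a model synthesize a grounded answer,
# and attach the source links. Toy index and stubbed model only.

from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    text: str

INDEX = [
    Doc("https://example.com/warbler", "The Townsend's warbler is a small songbird of the western U.S."),
    Doc("https://example.com/derailleur", "To fix a rear derailleur, first check the hanger alignment."),
]

def retrieve(query: str, k: int = 2) -> list[Doc]:
    # Toy lexical scoring: count overlapping words. Real systems use a
    # web-scale index plus learned rankers.
    terms = set(query.lower().split())
    ranked = sorted(INDEX, key=lambda d: -len(terms & set(d.text.lower().split())))
    return ranked[:k]

def generate(query: str, sources: list[Doc]) -> str:
    # Placeholder for the LLM call: the model would be prompted with the
    # query plus the retrieved snippets and asked to write a cited summary.
    context = " ".join(d.text for d in sources)
    return f"Synthesized answer to {query!r} based on: {context}"

def answer(query: str) -> str:
    sources = retrieve(query)
    links = "\n".join(f"[{i + 1}] {d.url}" for i, d in enumerate(sources))
    return f"{generate(query, sources)}\nSources:\n{links}"

print(answer("how do I fix my bike derailleur"))
```

The interesting design question, as the publisher worries above suggest, is entirely in the last step: how prominently those source links appear, and whether anyone clicks them.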

Subsea7 Scores Various Contracts Globally
Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work-class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”.

North Sea Project

Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project’s offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces, with offshore works expected to begin in 2026, according to a separate news release.

Driving into the future
Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more. We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen. Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake. What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story.
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa. Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Oil Holds at Highest Levels Since October
Crude oil futures retreated slightly but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither has responded to Rigzone’s request. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

What to expect from NaaS in 2025
Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

UK battery storage industry ‘back on track’
UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW from the electricity its facilities provided in the second half of 2024 meant it would meet or even exceed its revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217 million profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which it expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Beyond static AI: MIT’s new framework lets models teach themselves
Researchers at MIT have developed a framework called Self-Adapting Language Models (SEAL) that enables large language models (LLMs) to continuously learn and adapt by updating their own internal parameters. SEAL teaches an LLM to generate its own training data and update instructions, allowing it to permanently absorb new knowledge and learn new tasks. This framework could be useful for enterprise applications, particularly for AI agents that operate in dynamic environments, where they must constantly process new information and adapt their behavior.

The challenge of adapting LLMs

While large language models have shown remarkable abilities, adapting them to specific tasks, integrating new information, or mastering novel reasoning skills remains a significant hurdle. Currently, when faced with a new task, LLMs typically learn from data “as-is” through methods like finetuning or in-context learning. However, the provided data is not always in an optimal format for the model to learn efficiently. Existing approaches don’t allow the model to develop its own strategies for best transforming and learning from new information. “Many enterprise use cases demand more than just factual recall—they require deeper, persistent adaptation,” Jyo Pari, PhD student at MIT and co-author of the paper, told VentureBeat. “For example, a coding assistant might need to internalize a company’s specific software framework, or a customer-facing model might need to learn a user’s unique behavior or preferences over time.” In such cases, temporary retrieval falls short, and the knowledge needs to be “baked into” the model’s weights so that it influences all future responses.

Creating self-adapting language models

“As a step towards scalable and efficient adaptation of language models, we propose equipping LLMs with the ability to generate their own training data
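To make that mechanism concrete, here is a minimal sketch of a SEAL-style self-adaptation loop, assuming a hypothetical model client with generate() and finetune() methods. It illustrates the idea of a model writing its own training data and then being updated on it; it is not the MIT team's actual code.

```python
# Minimal sketch of a SEAL-style loop: the model writes its own training
# examples ("self-edits") from new information, then a fine-tuning step
# folds them into the weights so the knowledge persists beyond the
# context window. StubModel and its methods are hypothetical stand-ins.

class StubModel:
    def generate(self, prompt: str) -> str:
        # Placeholder for an LLM call that restates new information as
        # standalone training examples (implications, QA pairs, rewrites).
        return "Example 1: ...\nExample 2: ..."

    def finetune(self, examples: list[str]) -> None:
        # Placeholder for a small supervised weight update (e.g. a LoRA
        # pass); this is what makes the adaptation permanent.
        print(f"fine-tuning on {len(examples)} self-generated examples")

def self_edit(model: StubModel, new_information: str) -> list[str]:
    # Ask the model to transform raw input into the format it learns
    # from best, rather than training on the data "as-is".
    prompt = ("Rewrite the following passage as standalone training "
              f"examples that capture its facts:\n{new_information}")
    return model.generate(prompt).splitlines()

def adapt(model: StubModel, new_information: str) -> StubModel:
    examples = self_edit(model, new_information)
    model.finetune(examples)  # knowledge is baked into the weights
    return model

adapt(StubModel(), "The company's internal framework uses service objects.")
```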

Salesforce launches Agentforce 3 with AI agent observability and MCP support
Salesforce rolled out sweeping enhancements to its AI agent platform Monday, addressing the biggest hurdles enterprises face when deploying digital workers at scale: knowing what those agents are actually doing and ensuring they can work securely across corporate systems. The company’s Agentforce 3 release introduces a comprehensive “Command Center” that gives executives real-time visibility into AI agent performance, plus native support for emerging interoperability standards that allow agents to connect with hundreds of external business tools without custom coding. The timing reflects surging enterprise demand for AI agents. According to Salesforce data, AI agent usage has jumped 233% in six months, with over 8,000 customers signing up to deploy the technology. Early adopters are seeing measurable returns: Engine reduced customer case handling time by 15%, while 1-800Accountant achieved 70% autonomous resolution of administrative chat requests during peak tax season. “We have hundreds of live implementations, if not thousands, and they’re running at scale,” said Jayesh Govindarajan, EVP of Salesforce AI, in an exclusive interview with VentureBeat. The company has moved decisively beyond experimental deployments, he noted: “AI agents are no longer experimental. They have really moved deeply into the fabric of the enterprise.” “Over the past several months we’ve listened deeply to our customers and continued our rapid pace of technology innovation,” said Adam Evans, executive vice president and general manager of Salesforce AI, in announcing the platform upgrade during a live event Monday. “The result is Agentforce 3, a major leap forward for our platform that brings greater intelligence, higher performance, and more trust and accountability to every Agentforce deployment.”

How global food giant PepsiCo is leading the enterprise AI agent revolution

Among the companies embracing

Musk’s attempts to politicize his Grok AI are bad for users and enterprises — here’s why
Let’s start by acknowledging some facts outside the tech industry for a moment: There is no “white genocide” in South Africa — the vast majority of recent murder victims have been Black, and even throughout the country’s long and bloody history, Black South Africans have been overwhelmingly victimized and oppressed by White European, predominantly Dutch and British, colonizers, in the now globally reviled system of segregation known as “Apartheid.” The vast majority of political violence in the U.S. throughout history and in recent times has been perpetrated by right-leaning extremists, including the assassinations of Minnesota State Representative Melissa Hortman, D-Minn., and her husband, Mark, and going back further to the Oklahoma City bombing and many years of Ku Klux Klan lynchings. These are just simple, verifiable facts anyone can look up on a variety of trustworthy and long-established sources online and in print. Yet both seem to be stumbling blocks for Elon Musk, the wealthiest man in the world and tech baron in charge of at least six companies (xAI, social network X, SpaceX and its Starlink satellite internet service, Neuralink, Tesla, and The Boring Company), especially with regard to the functioning of his Grok AI large language model (LLM) chatbot built into his social network, X. Here’s what’s been happening, why it matters for businesses and any generative AI users, and why it is ultimately a terrible omen for the health of our collective information ecosystem.

What’s the matter with Grok?

Grok was launched from Musk’s AI startup xAI back in 2023 as a rival to OpenAI’s ChatGPT. Late last year, it was added to the social network X as a kind of digital assistant

A Chinese firm has just launched a constantly changing set of AI benchmarks
When testing an AI model, it’s hard to tell if it is reasoning or just regurgitating answers from its training data. Xbench, a new benchmark developed by the Chinese venture capital firm HSG, or Hongshan Capital Group, might help to sidestep that issue. That’s thanks to the way it evaluates models not only on the ability to pass arbitrary tests, like most other benchmarks, but also on the ability to execute real-world tasks, which is more unusual. It will be updated on a regular basis to try to keep it evergreen. This week the company is making part of its question set open-source and letting anyone use it for free. The team has also released a leaderboard comparing how mainstream AI models stack up when tested on Xbench. (ChatGPT o3 ranked first across all categories, though ByteDance’s Doubao, Gemini 2.5 Pro, and Grok all still did pretty well, as did Claude Sonnet.) Development of the benchmark at Hongshan began in 2022, following ChatGPT’s breakout success, as an internal tool for assessing which models are worth investing in. Since then, led by partner Gong Yuan, the team has steadily expanded the system, bringing in outside researchers and professionals to help refine it. As the project grew more sophisticated, they decided to release it to the public. Xbench approaches the problem with two different systems. One is similar to traditional benchmarking: an academic test that gauges a model’s aptitude on various subjects. The other is more like a technical interview round for a job, assessing how much real-world economic value a model might deliver.
Xbench’s methods for assessing raw intelligence currently include two components: Xbench-ScienceQA and Xbench-DeepResearch. ScienceQA isn’t a radical departure from existing postgraduate-level STEM benchmarks like GPQA and SuperGPQA. It includes questions spanning fields from biochemistry to orbital mechanics, drafted by graduate students and double-checked by professors. Scoring rewards not only the right answer but also the reasoning chain that leads to it. DeepResearch, by contrast, focuses on a model’s ability to navigate the Chinese-language web. Ten subject-matter experts created 100 questions in music, history, finance, and literature—questions that can’t just be googled but require significant research to answer. Scoring favors breadth of sources, factual consistency, and a model’s willingness to admit when there isn’t enough data. A question in the publicized collection is “How many Chinese cities in the three northwestern provinces border a foreign country?” (It’s 12, and only 33% of models tested got it right, if you are wondering.)
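A rubric with those priorities (correct answers, breadth of sources, and honest abstention when the data runs out) is straightforward to sketch. The weights and fields below are invented for illustration and are not Xbench's actual grader.

```python
# Hedged sketch of a DeepResearch-style rubric: reward correctness,
# give partial credit for source breadth, and score abstention as the
# right answer when a question is unanswerable. Weights are made up.

from dataclasses import dataclass, field

@dataclass
class ModelAnswer:
    text: str
    sources: list[str] = field(default_factory=list)
    abstained: bool = False

def score(answer: ModelAnswer, gold: str, answerable: bool) -> float:
    if not answerable:
        # Admitting there isn't enough data is the correct move here.
        return 1.0 if answer.abstained else 0.0
    correct = 1.0 if gold.lower() in answer.text.lower() else 0.0
    breadth = min(len(set(answer.sources)), 5) / 5  # cap credit at 5 sources
    return 0.7 * correct + 0.3 * breadth

# Example: the border-cities question from the article.
ans = ModelAnswer(text="12 cities border a foreign country.",
                  sources=["gov.cn", "stats.gov.cn", "xinhuanet.com"])
print(score(ans, gold="12", answerable=True))  # 0.7 + 0.3 * 0.6 = 0.88
```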
On the company’s website, the researchers said they want to add more dimensions to the test—for example, aspects like how creative a model is in its problem solving, how collaborative it is when working with other models, and how reliable it is. The team has committed to updating the test questions once a quarter and to maintaining a half-public, half-private data set. To assess models’ real-world readiness, the team worked with experts to develop tasks modeled on actual workflows, initially in recruitment and marketing. For example, one task asks a model to source five qualified battery engineer candidates and justify each pick. Another asks it to match advertisers with appropriate short-video creators from a pool of over 800 influencers. The website also teases upcoming categories, including finance, legal, accounting, and design. The question sets for these categories have not yet been open-sourced. ChatGPT o3 again ranks first in both of the current professional categories. For recruiting, Perplexity Search and Claude 3.5 Sonnet take second and third place, respectively. For marketing, Claude, Grok, and Gemini all perform well. “It is really difficult for benchmarks to include things that are so hard to quantify,” says Zihan Zheng, the lead researcher on a new benchmark called LiveCodeBench Pro and a student at NYU. “But Xbench represents a promising start.”

Why we’re focusing VB Transform on the agentic revolution – and what’s at stake for enterprise AI leaders
Tomorrow in San Francisco, VentureBeat’s Transform 2025 kicks off. For years, this has been the leading independent gathering for enterprise technical decision-makers — the hands-on builders and architects on the front lines of applied AI. Our mission has always been to cut through the hype and focus on the most critical, execution-oriented challenges our audience faces, and this year, one conversation towers above all others: the agentic AI revolution. We’ve all been captivated by the potential. But a chasm has opened between the jaw-dropping demos from research labs and the messy reality of enterprise deployment. While agents are poised to become the new engine of the enterprise, a recent KPMG study found that only 11% of companies have actually integrated them into their workflows. This is the “Agentic Infrastructure Gap,” and closing it is the big challenge this year. It’s not about the agent itself, but about building the enterprise-grade chassis – the security, governance, data plumbing, and orchestration – required to manage a digital workforce. That’s why we’ve dedicated this year’s VB Transform agenda to being a real-world playbook for navigating this new frontier. It’s the event for leaders who need to move from concept to reality, and here’s how we’re tackling it.

Architecting the new enterprise chassis

The “Agentic OS” requires new orchestration, both at the application level and below it, lower in the stack. This is about orchestrating the right compute for the right task. At Transform, we’re mapping this new landscape with the architects building it. The Great Re-routing: Influential analyst Dylan Patel will join Groq CEO Jonathan Ross and Cerebras CTO Sean Lie to debate the future of the AI inference

Scaling integrated digital health
Sponsored by Roche

Around the world, countries are facing the challenges of aging populations, growing rates of chronic disease, and workforce shortages, leading to a growing burden on health care systems. From diagnosis to treatment, AI and other digital solutions can enhance the efficiency and effectiveness of health care, easing the burden on straining systems. According to the World Health Organization (WHO), spending an additional $0.24 per patient per year on digital health interventions could save more than two million lives from non-communicable diseases over the next decade. To work most effectively, digital solutions need to be scaled and embedded in an ecosystem that ensures a high degree of interoperability, data security, and governance. If not, the proliferation of point solutions—where specialized software or tools focus on just one specific area or function—could lead to silos and digital canyons, complicating rather than easing the workloads of health care professionals, and potentially impacting patient treatment. Importantly, technologies that enhance workforce productivity should keep humans in the loop, aiming to augment their capabilities, rather than replace them. Through a survey of 300 health care executives and a program of interviews with industry experts, startup leaders, and academic researchers, this report explores the best practices for success when implementing integrated digital solutions into health care, and how these can support decision-makers in a range of settings, including laboratories and hospitals. Key findings include:
Health care is primed for digital adoption. The global pandemic underscored the benefits of value-based care and accelerated the adoption of digital and AI-powered technologies in health care. Overwhelmingly, 96% of the survey respondents say they are “ready and resourced” to use digital health, while one in four say they are “very ready.” However, 91% of executives agree interoperability is a challenge, with a majority (59%) saying it will be “tough” to solve. Two in five leaders say balancing security with usability is the biggest challenge for digital health. With the adoption of cloud solutions, organizations can enjoy the benefits of modernized IT infrastructure: 36% of the survey respondents believe scalability is the main benefit, followed by improved security (28%).

Digital health care can help health care institutions transform patient outcomes—if built on the right foundations. Solutions like AI-powered diagnostics, telemedicine, and remote monitoring can offer measurable impact across the patient journey, from improving early disease detection to reducing hospital readmission rates. However, these technologies can only support fully connected health care when scaled up and embedded in ecosystems with robust data governance, interoperability, and security.

Health care data has immense potential—but fragmentation and poor interoperability hinder impact. Health care systems generate vast quantities of data, yet much of it remains siloed or unusable due to inconsistent formats and incompatible IT systems, limiting scalability.

Digital tools must augment, not overload, the workforce. With global health care workforce shortages worsening, digital solutions like clinical decision support tools, patient prediction, and remote monitoring can be seen as essential aids rather than threats to the workforce. Successful deployment depends on usability, clinician engagement, and training.

Regulatory evolution, open data policies, and economic sustainability are key to scaling digital health. Even the best digital tools struggle to scale without reimbursement frameworks, regulatory support, and viable business models. Open data ecosystems are needed to unleash the clinical and economic value of innovation. Regulatory and reimbursement innovation is also critical to transitioning from pilot projects to high-impact, system-wide adoption.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.


HPE announces GreenLake Intelligence, goes all-in with agentic AI
Like a teammate who never sleeps

Agentic AI is coming to Aruba Central as well, with an autonomous supervisory module talking to multiple specialized models to, for example, determine the root cause of an issue and provide recommendations. David Hughes, SVP and chief product officer, HPE Aruba Networking, said, “It’s like having a teammate who can work while you’re asleep, work on problems, and when you arrive in the morning, have those proposed answers there, complete with chain of thought logic explaining how they got to their conclusions.” Several new services for FinOps and sustainability in GreenLake Cloud are also being integrated into GreenLake Intelligence, including a new workload and capacity optimizer, extended consumption analytics to help organizations control costs, and predictive sustainability forecasting and a managed service mode in the HPE Sustainability Insight Center. In addition, updates to the OpsRamp operations copilot, launched in 2024, will enable agentic automation when they are released in the fourth quarter of 2025, including conversational product help and an agentic command center that enables AI/ML-based alerts, incident management, and root cause analysis across the infrastructure. OpsRamp is now a validated observability solution for the Nvidia Enterprise AI Factory. OpsRamp will also be part of the new HPE CloudOps software suite, available in the fourth quarter, which will include HPE Morpheus Enterprise and HPE Zerto. HPE said the new suite will provide automation, orchestration, governance, data mobility, data protection, and cyber resilience for multivendor, multicloud, multi-workload infrastructures. Matt Kimball, principal analyst for datacenter, compute, and storage at Moor Insights & Strategy, sees HPE’s latest announcements aligning nicely with enterprise IT modernization efforts, using AI to optimize performance. “GreenLake Intelligence is really where all of this comes together. I am a huge fan of Morpheus in delivering an agnostic orchestration plane, regardless of operating stack

US, global cities tout emissions reductions
Dive Brief: Nearly 100 major city members of C40 Cities worldwide reduced per-capita emissions by 7.5% between 2015 and 2024, according to a report that group and the Global Covenant of Mayors for Climate and Energy released Monday. Representatives from C40 Cities, Climate Mayors, the U.S. Climate Alliance and America Is All In are attending global climate events this month to fill a void “when the U.S. federal government is stepping back from leadership on climate action,” the groups said in a joint press release. Climate Mayors Executive Director Kate Wright said that “despite the federal government abandoning its responsibilities on climate,” the U.S. could achieve 54% to 62% emissions reductions by 2035 “with strong climate leadership at the state and local levels.” Dive Insight: Phoenix Mayor Kate Gallego, the chair of Climate Mayors, a bipartisan network of nearly 350 U.S. mayors, said in a statement that cities like Phoenix are on the front lines of climate change and mayors “have a unique responsibility — and opportunity — to drive meaningful solutions.” “By partnering with other local leaders around the world, we can exchange ideas, scale innovations, and build a united front for meaningful action,” she said. Gallego and Wright are among the delegation of U.S. mayors and climate leaders attending the United Nations Framework Convention on Climate Change’s June Climate Meetings in Bonn, Germany; London Climate Action Week; and a Paris Agreement 10-year anniversary event, all in the second half of June. Since the treaty was signed, global clean energy investment has increased tenfold, to $2 trillion, U.N. Climate Change Executive Secretary Simon Stiell said at the anniversary event in Bonn on Saturday. On Jan. 20, President Donald Trump signed an executive order withdrawing the U.S. from the U.N. Framework Convention on Climate Change’s Paris Agreement, an international treaty on

Nuclear regulators lighten microreactor restrictions
Dive Brief: Technicians can load nuclear fuel into certain factory-made microreactors without triggering the strict regulations that apply to conventional nuclear reactors in operation, the Nuclear Regulatory Commission said on June 17. The new policy allows manufacturers to fuel microreactors designed with safety features to prevent criticality, or a self-sustaining chain reaction within the fuel. A related policy the commissioners approved on June 17 allows operational testing of commercial microreactors under the more lenient regulations governing non-commercial research and test reactors. The commissioners also directed NRC staff to consider whether these and other proposed microreactor licensing and oversight strategies could be applicable to other reactor types, including large power reactors. Dive Insight: Microreactors are smaller and generate far less power than conventional large reactors and small modular reactors — typically 20 MW or less of thermal energy, according to the U.S. Department of Energy. Microreactors’ transportability, interchangeability and longer fuel lifecycles make them ideal for use in microgrids and for emergency response missions, DOE says. Microreactors may also be useful for non-electric, direct-heat applications like hydrogen production, water desalination and district heating, DOE says. At least three cities in Finland are considering microreactor-powered district heating systems using a locally developed, 50-MW thermal reactor design. Microreactors “offer a potential solution to achieve deep decarbonization of our larger energy system that powers our industries and our manufacturers,” NRC Commissioner Matthew Marzano said during an April 10 NRC stakeholder meeting on microreactor technology and regulation. There are no microreactors in commercial operation today, but small reactors have powered U.S. Navy ships and submarines for decades. The Department of Defense is pursuing multiple microreactor initiatives, including Project Pele, which could see a working reactor commissioned next year at Idaho National Laboratory. Current NRC regulations for commercial nuclear reactors developed over decades to manage the

DOE grants Duke Energy authority to exceed power plant permit limits during extreme heat
The U.S. Department of Energy on Tuesday issued an emergency order allowing Duke Energy to exceed emissions limits in its power plant permits in the Carolinas during a heat wave. The emergency order expires at 10 p.m. on Wednesday, when the heat and humidity are expected to ease, according to DOE. The order, issued under the Federal Power Act’s section 202(c), will help reduce the risk of blackouts brought on by high temperatures across the Southeast region, the department said. Under the order, Duke will be allowed to exceed power plant emissions limits when it declares an Energy Emergency Alert Level 2, which it expects to do, DOE said. The North American Electric Reliability Corp. defines an EEA-2 as a condition in which a grid operator cannot meet its expected energy requirements but can still maintain minimum contingency reserve requirements, according to the PJM Interconnection. Once Duke declares that the EEA Level 2 event has ended, its generating units would be required to immediately return to operation within their permitted limits, the department said. In its request to DOE for the emergency order, Duke said about 1,500 MW of its power plants in the Carolinas are offline, while other generating units may be limited by conditions and limitations in their environmental permits, according to the department. DOE issued a similar order for Duke Energy Florida in October in response to Hurricane Milton. DOE has also issued 90-day emergency orders to keep generating units that were set to retire on May 30 operating this summer in Michigan and Pennsylvania.
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on one week of news.