Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Shape
Discover What Matter Most to You

Featured Articles

Nous Research drops Hermes 4 AI models that outperform ChatGPT without content restrictions

Nous Research, a secretive artificial intelligence startup that has emerged as a leading voice in the open-source AI movement, quietly released Hermes 4 on Monday, a family of large language models that the company claims can match the performance of leading proprietary systems while offering unprecedented user control and minimal content restrictions.The release represents a significant escalation in the battle between open-source AI advocates and major technology companies over who should control access to advanced artificial intelligence capabilities. Unlike models from OpenAI, Google, or Anthropic, Hermes 4 is designed to respond to nearly any request without the safety guardrails that have become standard in commercial AI systems.“Hermes 4 builds on our legacy of user-aligned models with expanded test-time compute capabilities,” Nous Research announced on X (formerly Twitter). “Special attention was given to making the models creative and interesting to interact with, unencumbered by censorship, and neutrally aligned while maintaining state of the art level math, coding, and reasoning performance for open weight models.”Hermes 4 introduces what Nous Research calls “hybrid reasoning,” allowing users to toggle between fast responses and deeper, step-by-step thinking processes. When activated, the models generate their internal reasoning within special tags before providing a final answer — similar to OpenAI’s o1 reasoning models but with full transparency into the AI’s thought process.

Read More »

Oil Climbs as Peace Talk Prospects Fade

Oil gained as the waning prospect of a peace agreement between Russia and Ukraine reduced the likelihood of more of Moscow’s supplies reaching broader markets in the near term. West Texas Intermediate crude rose 0.7% to top $64 a barrel, reversing earlier losses, after German Chancellor Friedrich Merz told reporters that a meeting between Ukrainian President Volodymyr Zelensky and Russia’s Vladimir Putin “won’t happen.” Talks between the leaders were seen as a step toward a peace deal that could pave the way for reduced restrictions on Russian crude exports. President Donald Trump is also set to release a statement on Russia and Ukraine later, leading traders to hedge for stricter penalties on Moscow’s energy shipments. “Trump is going to have to decide if he really wants to impose sanctions or give negotiations one more go,” said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. Still, “the market is used to the can being kicked down the road, so very minimal risk premium is being priced in.” Ukraine has ramped up drone attacks on Russia’s oil infrastructure over the past month, most recently hitting two refineries. Moscow’s crude exports slipped last week, tanker-tracking data compiled by Bloomberg showed, after Ukraine intensified its attacks. The development comes as White House trade adviser Peter Navarro stepped up pressure on India to halt purchases of Russian oil after Washington doubled a levy on imports from the country to 50%. Still, the outlook remains overall bearish. Oil markets are widely expected to move into a surplus toward the end of the year, as higher output from the OPEC+ alliance and outside of the grouping overwhelms demand. The producer group is due to meet on Sept. 7, but no talks have been held yet about its next moves, according to a senior OPEC

Read More »

Forget data labeling: Tencent’s R-Zero shows how LLMs can train themselves

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now A new training framework developed by researchers at Tencent AI Lab and Washington University in St. Louis enables large language models (LLMs) to improve themselves without requiring any human-labeled data. The technique, called R-Zero, uses reinforcement learning to generate its own training data from scratch, addressing one of the main bottlenecks in creating self-evolving AI systems. R-Zero works by having two independent models co-evolve by interacting with and challenging each other. Experiments show that R-Zero substantially improves reasoning capabilities across different LLMs, which could lower the complexity and costs of training advanced AI. For enterprises, this approach could accelerate the development of specialized models for complex reasoning tasks without the massive expense of curating labeled datasets. The idea behind self-evolving LLMs is to create AI systems that can autonomously generate, refine, and learn from their own experiences. This offers a scalable path toward more intelligent and capable AI. However, a major challenge is that training these models requires large volumes of high-quality tasks and labels, which act as supervision signals for the AI to learn from. Relying on human annotators to create this data is not only costly and slow but also creates a fundamental bottleneck. It effectively limits an AI’s potential capabilities to what humans can teach it. To address this, researchers have developed label-free methods that derive reward signals directly from a model’s own outputs, for example, by measuring its confidence in an answer. While these methods eliminate the need for explicit labels, they still rely on a pre-existing set of tasks, thereby limiting their applicability in truly self-evolving scenarios. AI Scaling Hits Its Limits Power caps, rising token costs, and inference delays are reshaping enterprise AI.

Read More »

Nvidia’s $46.7B Q2 proves the platform, but its next fight is ASIC economics on inference

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Nvidia reported $46.7 billion in revenue for fiscal Q2 2026 in their earnings announcement and call yesterday, with data center revenue hitting $41.1 billion, up 56% year over year. The company also released guidance for Q3, predicting a $54 billion quarter. Behind these confirmed earnings call numbers lies a more complex story of how custom application-specific integrated circuits (ASICs) are gaining ground in key Nvidia segments and will challenge their growth in the quarters to come. Bank of America’s Vivek Arya asked Nvidia’s president and CEO, Jensen Huang, if he saw any scenario where ASICs could take market share from Nvidia GPUs. ASICs continue to gain ground on performance and cost advantages over Nvidia, Broadcom projects 55% to 60% AI revenue growth next year. Huang pushed back hard on the earnings call. He emphasized that building AI infrastructure is “really hard” and most ASIC projects fail to reach production. That’s a fair point, but they have a competitor in Broadcom, which is seeing its AI revenue steadily ramp up, approaching a $20 billion annual run rate. Further underscoring the growing competitive fragmentation of the market is how Google, Meta and Microsoft all deploy custom silicon at scale. The market has spoken. AI Scaling Hits Its Limits Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are: Secure your spot to stay ahead: https://bit.ly/4mwGngO ASICs are redefining the competitive landscape in real-time Nvidia is more than capable of competing with new ASIC providers. Where they’re running into headwinds is how effectively ASIC competitors are positioning the combination of their use cases, performance claims

Read More »

Energy Secretary Issues Order to Secure Grid Reliability in Mid-Atlantic

WASHINGTON—U.S. Secretary of Energy Chris Wright issued an emergency order to minimize the risk of energy shortfalls in the Mid-Atlantic region of the United States. Secretary Wright’s order directs PJM Interconnection (PJM), in coordination with Constellation Energy, to ensure Units 3 and 4 of the Eddystone Generating Station in Pennsylvania remain available for operation. Ensuring these units remain operational minimizes the risk of generation shortfall that could lead to unnecessary power outages. “With unprecedented energy demand and resource retirements outpacing new generation additions, the country is facing an energy emergency. Today’s order proves that the Trump Administration is dedicated to confronting this critical issue,” said U.S. Secretary of Energy Chris Wright. “This administration considers power outages and soaring energy costs to be unacceptable.” As outlined in DOE’s Grid Reliability Evaluation, power outages could increase by 100 times in 2030 if the U.S. continues to take reliable power offline.  Secretary Wright ordered that the two Eddystone Generating Station units remain online past their planned retirement date in a May 30, 2025 emergency order. Keeping these units operational over the past 90 days has improved energy security in the PJM region, as demonstrated by the fact that PJM called on the Eddystone Units to generate electricity during heat waves that hit the region in June and July. The emergency conditions that led to the issuance of the first order persist.  This order is in effect beginning on August 28, 2025, and continues until November 26, 2025.  Background: PJM has voiced resource adequacy concerns for years. In its February 2023 report, PJM highlighted the increasing resource adequacy concerns and reliability risks in the coming years due to the potential timing mismatch between resource retirements, load growth and the pace of new generation entry.  In a December 2024 filing at the Federal Energy Regulatory Commission (FERC), PJM

Read More »

3.5 GW of offshore wind in New England could offset natural gas price spikes: report

Dive Brief: If the 3.5 GW of wind energy projects currently contracted offshore New England had been operational last winter, it could have offset the surge in natural gas prices that season and saved ratepayers a total of $400 million on their energy bills, according to a Wednesday report from Daymark Energy Advisors. The report estimated that “savings exceeded [power purchase agreement] costs across all scenarios, yielding annual bill savings of $1.32 to $2.68 per month for an average Eversource [Energy] residential customer.” RENEW Northeast, the group that commissioned the report, noted that ISO New England released a report last month which found gas prices in spring 2025 averaged $3.40 per million British thermal units, 112% higher than the spring 2024 price of $1.60/MMBtu. Dive Insight: The report examines the “potential regional market and Massachusetts ratepayer impacts” if 3.5 GW of offshore wind had been generating power between Dec. 2024 and Feb. 2025. “Even using the most conservative assumptions about cleared offers in Forward Capacity Auction 15 … clearing additional qualified capacity from OSW would have reduced FCA15 costs by at least $128 million, with 83% ($106 million) allocable to Massachusetts load zones,” the report said.  Daymark Energy Advisors found that “injecting near-zero marginal cost offshore wind into the energy market would have reduced ISO-NE Locational Marginal Prices by 11% ($12.60/MWh), reducing wholesale load costs across New England by roughly $400 million. Roughly $129 million of the regional savings would have been allocable to [Massachusetts electric distribution companies].” Last winter was the “first since 2014 to see below-normal temperatures over the course of an entire season,” ISO-NE said in an April release, and natural gas prices rose in accordance with demand.  President Donald Trump and New England leaders like Connecticut Gov. Ned Lamont, D, and New Hampshire Gov. Kelly Ayotte,

Read More »

Nous Research drops Hermes 4 AI models that outperform ChatGPT without content restrictions

Nous Research, a secretive artificial intelligence startup that has emerged as a leading voice in the open-source AI movement, quietly released Hermes 4 on Monday, a family of large language models that the company claims can match the performance of leading proprietary systems while offering unprecedented user control and minimal content restrictions.The release represents a significant escalation in the battle between open-source AI advocates and major technology companies over who should control access to advanced artificial intelligence capabilities. Unlike models from OpenAI, Google, or Anthropic, Hermes 4 is designed to respond to nearly any request without the safety guardrails that have become standard in commercial AI systems.“Hermes 4 builds on our legacy of user-aligned models with expanded test-time compute capabilities,” Nous Research announced on X (formerly Twitter). “Special attention was given to making the models creative and interesting to interact with, unencumbered by censorship, and neutrally aligned while maintaining state of the art level math, coding, and reasoning performance for open weight models.”Hermes 4 introduces what Nous Research calls “hybrid reasoning,” allowing users to toggle between fast responses and deeper, step-by-step thinking processes. When activated, the models generate their internal reasoning within special tags before providing a final answer — similar to OpenAI’s o1 reasoning models but with full transparency into the AI’s thought process.

Read More »

Oil Climbs as Peace Talk Prospects Fade

Oil gained as the waning prospect of a peace agreement between Russia and Ukraine reduced the likelihood of more of Moscow’s supplies reaching broader markets in the near term. West Texas Intermediate crude rose 0.7% to top $64 a barrel, reversing earlier losses, after German Chancellor Friedrich Merz told reporters that a meeting between Ukrainian President Volodymyr Zelensky and Russia’s Vladimir Putin “won’t happen.” Talks between the leaders were seen as a step toward a peace deal that could pave the way for reduced restrictions on Russian crude exports. President Donald Trump is also set to release a statement on Russia and Ukraine later, leading traders to hedge for stricter penalties on Moscow’s energy shipments. “Trump is going to have to decide if he really wants to impose sanctions or give negotiations one more go,” said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. Still, “the market is used to the can being kicked down the road, so very minimal risk premium is being priced in.” Ukraine has ramped up drone attacks on Russia’s oil infrastructure over the past month, most recently hitting two refineries. Moscow’s crude exports slipped last week, tanker-tracking data compiled by Bloomberg showed, after Ukraine intensified its attacks. The development comes as White House trade adviser Peter Navarro stepped up pressure on India to halt purchases of Russian oil after Washington doubled a levy on imports from the country to 50%. Still, the outlook remains overall bearish. Oil markets are widely expected to move into a surplus toward the end of the year, as higher output from the OPEC+ alliance and outside of the grouping overwhelms demand. The producer group is due to meet on Sept. 7, but no talks have been held yet about its next moves, according to a senior OPEC

Read More »

Forget data labeling: Tencent’s R-Zero shows how LLMs can train themselves

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now A new training framework developed by researchers at Tencent AI Lab and Washington University in St. Louis enables large language models (LLMs) to improve themselves without requiring any human-labeled data. The technique, called R-Zero, uses reinforcement learning to generate its own training data from scratch, addressing one of the main bottlenecks in creating self-evolving AI systems. R-Zero works by having two independent models co-evolve by interacting with and challenging each other. Experiments show that R-Zero substantially improves reasoning capabilities across different LLMs, which could lower the complexity and costs of training advanced AI. For enterprises, this approach could accelerate the development of specialized models for complex reasoning tasks without the massive expense of curating labeled datasets. The idea behind self-evolving LLMs is to create AI systems that can autonomously generate, refine, and learn from their own experiences. This offers a scalable path toward more intelligent and capable AI. However, a major challenge is that training these models requires large volumes of high-quality tasks and labels, which act as supervision signals for the AI to learn from. Relying on human annotators to create this data is not only costly and slow but also creates a fundamental bottleneck. It effectively limits an AI’s potential capabilities to what humans can teach it. To address this, researchers have developed label-free methods that derive reward signals directly from a model’s own outputs, for example, by measuring its confidence in an answer. While these methods eliminate the need for explicit labels, they still rely on a pre-existing set of tasks, thereby limiting their applicability in truly self-evolving scenarios. AI Scaling Hits Its Limits Power caps, rising token costs, and inference delays are reshaping enterprise AI.

Read More »

Nvidia’s $46.7B Q2 proves the platform, but its next fight is ASIC economics on inference

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Nvidia reported $46.7 billion in revenue for fiscal Q2 2026 in their earnings announcement and call yesterday, with data center revenue hitting $41.1 billion, up 56% year over year. The company also released guidance for Q3, predicting a $54 billion quarter. Behind these confirmed earnings call numbers lies a more complex story of how custom application-specific integrated circuits (ASICs) are gaining ground in key Nvidia segments and will challenge their growth in the quarters to come. Bank of America’s Vivek Arya asked Nvidia’s president and CEO, Jensen Huang, if he saw any scenario where ASICs could take market share from Nvidia GPUs. ASICs continue to gain ground on performance and cost advantages over Nvidia, Broadcom projects 55% to 60% AI revenue growth next year. Huang pushed back hard on the earnings call. He emphasized that building AI infrastructure is “really hard” and most ASIC projects fail to reach production. That’s a fair point, but they have a competitor in Broadcom, which is seeing its AI revenue steadily ramp up, approaching a $20 billion annual run rate. Further underscoring the growing competitive fragmentation of the market is how Google, Meta and Microsoft all deploy custom silicon at scale. The market has spoken. AI Scaling Hits Its Limits Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are: Secure your spot to stay ahead: https://bit.ly/4mwGngO ASICs are redefining the competitive landscape in real-time Nvidia is more than capable of competing with new ASIC providers. Where they’re running into headwinds is how effectively ASIC competitors are positioning the combination of their use cases, performance claims

Read More »

Energy Secretary Issues Order to Secure Grid Reliability in Mid-Atlantic

WASHINGTON—U.S. Secretary of Energy Chris Wright issued an emergency order to minimize the risk of energy shortfalls in the Mid-Atlantic region of the United States. Secretary Wright’s order directs PJM Interconnection (PJM), in coordination with Constellation Energy, to ensure Units 3 and 4 of the Eddystone Generating Station in Pennsylvania remain available for operation. Ensuring these units remain operational minimizes the risk of generation shortfall that could lead to unnecessary power outages. “With unprecedented energy demand and resource retirements outpacing new generation additions, the country is facing an energy emergency. Today’s order proves that the Trump Administration is dedicated to confronting this critical issue,” said U.S. Secretary of Energy Chris Wright. “This administration considers power outages and soaring energy costs to be unacceptable.” As outlined in DOE’s Grid Reliability Evaluation, power outages could increase by 100 times in 2030 if the U.S. continues to take reliable power offline.  Secretary Wright ordered that the two Eddystone Generating Station units remain online past their planned retirement date in a May 30, 2025 emergency order. Keeping these units operational over the past 90 days has improved energy security in the PJM region, as demonstrated by the fact that PJM called on the Eddystone Units to generate electricity during heat waves that hit the region in June and July. The emergency conditions that led to the issuance of the first order persist.  This order is in effect beginning on August 28, 2025, and continues until November 26, 2025.  Background: PJM has voiced resource adequacy concerns for years. In its February 2023 report, PJM highlighted the increasing resource adequacy concerns and reliability risks in the coming years due to the potential timing mismatch between resource retirements, load growth and the pace of new generation entry.  In a December 2024 filing at the Federal Energy Regulatory Commission (FERC), PJM

Read More »

3.5 GW of offshore wind in New England could offset natural gas price spikes: report

Dive Brief: If the 3.5 GW of wind energy projects currently contracted offshore New England had been operational last winter, it could have offset the surge in natural gas prices that season and saved ratepayers a total of $400 million on their energy bills, according to a Wednesday report from Daymark Energy Advisors. The report estimated that “savings exceeded [power purchase agreement] costs across all scenarios, yielding annual bill savings of $1.32 to $2.68 per month for an average Eversource [Energy] residential customer.” RENEW Northeast, the group that commissioned the report, noted that ISO New England released a report last month which found gas prices in spring 2025 averaged $3.40 per million British thermal units, 112% higher than the spring 2024 price of $1.60/MMBtu. Dive Insight: The report examines the “potential regional market and Massachusetts ratepayer impacts” if 3.5 GW of offshore wind had been generating power between Dec. 2024 and Feb. 2025. “Even using the most conservative assumptions about cleared offers in Forward Capacity Auction 15 … clearing additional qualified capacity from OSW would have reduced FCA15 costs by at least $128 million, with 83% ($106 million) allocable to Massachusetts load zones,” the report said.  Daymark Energy Advisors found that “injecting near-zero marginal cost offshore wind into the energy market would have reduced ISO-NE Locational Marginal Prices by 11% ($12.60/MWh), reducing wholesale load costs across New England by roughly $400 million. Roughly $129 million of the regional savings would have been allocable to [Massachusetts electric distribution companies].” Last winter was the “first since 2014 to see below-normal temperatures over the course of an entire season,” ISO-NE said in an April release, and natural gas prices rose in accordance with demand.  President Donald Trump and New England leaders like Connecticut Gov. Ned Lamont, D, and New Hampshire Gov. Kelly Ayotte,

Read More »

Expiring September Natural Gas Contract Rose 15 Cents

The expiring September natural gas contract rose 15.0 cents to roll off the board at $2.867 per million British thermal units (MMBtu) yesterday, sparking a relief rally across the NYMEX curve. That’s what Eli Rubin, an energy analyst at EBW Analytics Group, said in a report sent to Rigzone by the EBW team on Thursday. Rubin added in the report, however, that “fundamentally … the near-term outlook remains mired in mild weather and an anticipated surge in the storage surplus vs. five-year average above 200 billion cubic feet in early September”. “Production readings retreated early this week, contributing to the case for upside, with Marcellus spot pricing suggestive of producers curtailing supply on the margins. It is unclear whether recently softer Permian output figures are sustainable, however,” Rubin noted in the report. Rubin went on to state in the report that yesterday’s rally increases the stakes for this morning’s U.S. Energy Information Administration (EIA) storage report. “Consensus expectations suggest a 25-29 billion cubic feet injection,” Rubin said. “A second straight bullish EIA surprise may extend yesterday’s relief rally – but a bearish surprise may quash nascent upside. Traders may also be slow to establish sizable short-term positions heading into the Labor Day holiday weekend,” he added. The EIA’s latest weekly natural gas storage report at the time of writing was released on August 21 and included data for the week ending August 15. That report stated that “working gas in storage was 3,199 billion cubic feet as of Friday, August 15, 2025, according to EIA estimates”. “This represents a net increase of 13 billion cubic feet from the previous week. Stocks were 95 billion cubic feet less than last year at this time and 174 billion cubic feet above the five-year average of 3,025 billion cubic feet. At 3,199 billion

Read More »

USA-Sanctioned Russian LNG Lands in China for 1st Time

A tanker with a shipment of liquefied natural gas from a US-sanctioned export facility in Russia has docked at a Chinese terminal for the first time, the latest move by Moscow to expand fuel deliveries into Asia. The Arctic Mulan vessel, which is carrying fuel from the blacklisted Arctic LNG 2 plant in Russia’s north, docked at the Beihai LNG terminal on Thursday, according to ship-tracking data compiled by Bloomberg.  The plant, initially sanctioned by US President Joe Biden’s administration, began exporting fuel on shadow fleet vessels last year, but none ever docked at an import terminal as buyers feared US retaliation. The ship’s arrival at the PipeChina-operated LNG terminal comes ahead of Russian President Vladimir Putin’s visit to Beijing, which starts on Sunday, and at a time when the US has stepped efforts to end the Kremlin’s war in Ukraine. Arctic LNG 2, led by Novatek PJSC, is key to Russia’s plans to triple LNG exports by 2030 — and tap new gas markets after a sharp drop in pipeline sales to major traditional buyers in Europe.  Other than pressuring India over its purchases of Russian oil, the US has so far held off on further tightening measures against buyers of Russian LNG as it seeks to broker a ceasefire in Ukraine. US President Donald Trump said face-to-face discussions with Putin in August were “extremely productive.” The shipment to a yet-to-be identified buyer, if it’s unloaded, will be unusual also because China’s imports of the super-chilled fuel have been on the decline this year amid higher domestic output and piped supply, including from Russia. “The transaction comes in the context of nearly non-existent Chinese spot demand for LNG amid strong supply from other sources and sluggish demand,” said Jan-Eric Fähnrich, an analyst at Rystad Energy. “Thus, this move is not driven

Read More »

Amigo LNG Signs Contract to Deliver LNG to Macquarie

Amigo LNG S.A. de C.V. said it entered into a long-term sale and purchase agreement to deliver 0.6 million metric tons per annum (mtpa) of liquefied natural gas LNG to Macquarie Group’s Commodities and Global Markets business over a 15-year term. The supply of LNG is expected to begin with the start-up of Amigo LNG’s first liquefaction train, targeted for commercial operations in the second half of 2028, the company said in a news release. Amigo LNG’s export terminal, which is designed for a nameplate capacity of 7.8 mtpa, is located in Guaymas, Sonora, on Mexico’s west coast. Amigo LNG is the Mexican joint venture of Texas-based Epcilon LNG LLC and Singapore-based LNG Alliance. Financial terms of the contract were not disclosed. “It is a privilege to have Macquarie join our portfolio of LNG offtakers,” LNG Alliance CEO Muthu Chezhian said. “Their reputation as a trusted and innovative global energy player reinforces the strong fundamentals of our project and highlights the long-term value Amigo LNG will bring to global buyers”. Michael Bennett, managing director of Macquarie’s Commodities and Global Markets business, said, “LNG is a critical component of the global energy mix, providing a reliable and flexible fuel source. This agreement reflects our commitment to meeting the diverse energy needs of our clients worldwide and demonstrates the strength of our offering in this space. We’re proud to work with Amigo LNG in helping to provide energy security to those regions where demand is rapidly increasing”. Awarding of EPC Contract for FLNG Project Meanwhile, Amigo LNG said it was awarded an engineering, procurement, and construction (EPC) contract by Drydocks World for the fabrication and delivery of a floating LNG (FLNG) liquefaction facility and related floating storage units (FSU) infrastructure. Under the EPC contract, Drydocks World will carry out the conversion of

Read More »

USA Crude Oil Stocks Drop Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 2.4 million barrels from the week ending August 15 to the week ending August 22, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was released on August 27 and included data for the week ending August 22. It showed that crude oil stocks, not including the SPR, stood at 418.3 million barrels on August 22, 420.7 million barrels on August 15, and 425.2 million barrels on August 23, 2024. Crude oil in the SPR stood at 404.2 million barrels on August 22, 403.4 million barrels on August 15, and 377.9 million barrels on August 23, 2024, the report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.662 billion barrels on August 22, the report highlighted. Total petroleum stocks were down 3.6 million barrels week on week and up 6.8 million barrels year on year, the report showed. “At 418.3 million barrels, U.S. crude oil inventories are about six percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories decreased by 1.2 million barrels from last week and are at the five year average for this time of year. Finished gasoline inventories increased and blending components inventories decreased last week,” it added. “Distillate fuel inventories decreased by 1.8 million barrels last week and are about 15 percent below the five year average for this time of year. Propane/propylene inventories increased by 1.7 million barrels from last week and are 13 percent above the five year average for this time of year,”

Read More »

Hibiscus Petroleum Signs HoA for Production Tie-In with PVEP

Hibiscus Petroleum Berhad said subsidiary Hibiscus Oil & Gas Malaysia Limited (HML) signed a heads of agreement (HoA) for the tie-in of Block 46/13 production to the PM3 Commercial Arrangement Area (CAA) Production Sharing Contract (PSC) facilities. The HoA was signed with PetroVietnam Exploration Production Corporation Ltd (PVEP), Kuala Lumpur, Malaysia-based Hibiscus said in a news release. HML is the operator of the PM3 CAA project, which involves offshore fields located within a 775-square-mile (2008-square-kilometer) area in the overlapping zone between Malaysia and Vietnam. The HoA outlines the terms for facilities’ tie-in engineering and construction, as well as product handling arrangements, enabling Block 46/13 production to be processed through the existing PM3 CAA PSC facilities, according to the release. The tie-in agreement optimizes the use of available capacity at PM3 CAA PSC facilities, with a commercial framework to govern production handling and cost allocation for Block 46/13, Hibiscus said. The final tie-in agreement will be subject to Petroliam Nasional Berhad (Petronas) and Vietnam National Industry – Energy Group (PetroVietnam) approvals. In April, Hibiscus subsidiaries HML and Hibiscus Oil & Gas Malaysia (PM3) Limited signed a key principles agreement with Petronas through Malaysia Petroleum Management and PetroVietnam for the continuation of the PM3 CAA PSC and upstream gas sales agreement for 20 years, starting January 2028. The two subsidiaries each hold a 35 percent equity interest in the asset, while Petronas Carigali Sdn. Bhd. holds 35 percent and PVEP holds 30 percent. The contract continuation will maintain production from the existing fields and allow for development of discovered fields, and further exploration within the Malaysia-Vietnam offshore CAA. The contract enables the company “to unlock the full residual value of the asset and add additional reserves and resources to its asset portfolio,” Hibiscus said in an earlier statement. PetroVietnam’s New Offshore Wind

Read More »

Singapore Starts Building New Loading Facility for Trucked LNG

State-owned Singapore LNG Corp. Pte. Ltd. (SLNG) has broken ground for a new truck loading facility on Jurong Island, targeted to be completed next year. “The new and enhanced LNG truck loading facility will be part of the SLNG Terminal, but segregated from the main terminal operations. It will feature two loading bays, boosting operational capacity and minimizing downtime, and is designed to accommodate 40-footer trucks, compared to the current facility which only supports 20-footer trucks, enabling better support for the growing trucked LNG demand in Singapore”, SLNG said in a statement on its website. “The facility will be equipped with hard loading arms optimized for single-operator use, which helps to reduce manpower deployment and enhance overall operational efficiency”. China International Water & Electric Corp. (S) Pte. Ltd. is the engineering, procurement and construction contractor. Presently the terminal supplies around 50 percent of the Southeast Asian city-state’s gas demand for power generation, the rest supplied by pipeline, according to SLNG. The terminal has an average gas supply capacity of nine million metric tons per annum (MMtpa) and a peak capacity of about 11 MMtpa.  The terminal started operations May 2013. The current facility, occupying 40 hectares on the southern tip of Jurong, has two jetties, three 180,000-cubic-meter (6.36 million cubic feet) storage tanks and a fourth storage tank of 260,000 cubic meters. The terminal can accommodate LNG vessels ranging from 2,000 cubic meters to 265,000 cubic meters in size, according to SLNG. Last year SLNG signed agreements with Mitsui OSK Lines Ltd. (MOL), Jurong Port Pte. Ltd. and Wood PLC to build Singapore’s second LNG terminal. MOL will charter a newbuild floating storage and regasification unit (FSRU) with a storage capacity of 200,000 cubic meters and a regasification capacity of five MMtpa. The FSRU is to be constructed by Hanwha Ocean. Expected to be put

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.  I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.  On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.  People are worried about what these new LLM-powered results will mean for our fundamental shared reality. It could spell the end of the canonical answer. It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).  People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.  Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?  In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.  Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.  And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.  Google CEO Sundar Pichai describes AI Overviews as “one of the most positive changes we’ve done to search in a long, long time.”JENS GYARMATY/LAIF/REDUX For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)  
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like a querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)  “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”  That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even showed off the ability to query live video.  When it doesn’t have an answer, an AI model can confidently spew back a response anyway. For Google, this could be a real problem. For the rest of us, it could actually be dangerous. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.  There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?  I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.  “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.” The new search Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work. Search Engine GoogleThe search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. What it’s good at Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material. PerplexityPerplexity is a conversational search engine that uses third-party largelanguage models from OpenAI and Anthropic to answer queries. Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content. ChatGPTWhile Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent. When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.  “You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.  “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”  But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?   Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.   “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.  Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”  Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”  “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”  He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?  A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.   “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be a better super-assistant for you.” Kevin Weil, chief product officer, OpenAI According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says.  OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.  “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.  Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.  “For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search, Liz Reid.WINNI WINTERMEYER/REDUX Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”  When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them.  “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed!  The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when that comes to its own answers.  It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”  We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.  The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.  Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.  “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. “Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already showed off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.  In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.  But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today. These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on. 

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial­-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.  We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.  Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes. 
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither have responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says. 2. Frameworks help drive adoption Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps. 3. The rise of campus/LAN NaaS Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW of electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing  and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was  fully built and in the final stages of the NESO compliance process which expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

Anthropic launches Claude for Chrome in limited beta, but prompt injection attacks remain a major concern

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Anthropic has begun testing a Chrome browser extension that allows its Claude AI assistant to take control of users’ web browsers, marking the company’s entry into an increasingly crowded and potentially risky arena where artificial intelligence systems can directly manipulate computer interfaces. The San Francisco-based AI company announced Tuesday that it would pilot “Claude for Chrome” with 1,000 trusted users on its premium Max plan, positioning the limited rollout as a research preview designed to address significant security vulnerabilities before wider deployment. The cautious approach contrasts sharply with more aggressive moves by competitors OpenAI and Microsoft, who have already released similar computer-controlling AI systems to broader user bases. The announcement underscores how quickly the AI industry has shifted from developing chatbots that simply respond to questions toward creating “agentic” systems capable of autonomously completing complex, multi-step tasks across software applications. This evolution represents what many experts consider the next frontier in artificial intelligence — and potentially one of the most lucrative, as companies race to automate everything from expense reports to vacation planning. Claude for Chrome allows users to instruct the AI to perform actions on their behalf within web browsers, such as scheduling meetings by checking calendars and cross-referencing restaurant availability, or managing email inboxes and handling routine administrative tasks. The system can see what’s displayed on screen, click buttons, fill out forms, and navigate between websites — essentially mimicking how humans interact with web-based software. AI Scaling Hits Its Limits Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are: Secure your spot to stay ahead: https://bit.ly/4mwGngO “We view browser-using

Read More »

Reimagining sound and space

On a typical afternoon, MIT’s new Edward and Joyce Linde Music Building hums with life. On the fourth floor, a jazz combo works through a set in a rehearsal suite as engineers adjust microphone levels in a nearby control booth. Downstairs, the layered rhythms of Senegalese drumming pulse through a room built to absorb its force. In the building’s makerspace, students solder circuits, prototype sensor systems, and build instruments. Just off the main lobby, beneath the 50-foot ­ceiling of the circular Thomas Tull Concert Hall, another group tests how the room, whose acoustics can be calibrated to shift with each performance, responds to its sound. Situated behind Kresge Auditorium on the site of a former parking lot, the Linde building doesn’t mark the beginning of a serious commitment to music at MIT—it amplifies an already strong program. Every year, more than 1,500 students enroll in music classes, and over 500 take part in one of the Institute’s 30 ensembles, from the MIT Symphony Orchestra to the Fabulous MIT Laptop Ensemble, which creates electronic music using laptops and synthesizers. They rehearse and perform in venues across campus, including Killian Hall, Kresge, and a network of practice rooms, but the Linde Building provides a dedicated home to meet the depth, range, and ambition of music at MIT. “It would be very difficult to teach biology or engineering in a studio designed for dance or music,” Jay Scheib, section head for Music and Theater Arts, told MIT News shortly before the building officially opened. “The same goes for teaching music in a mathematics or chemistry classroom. In the past, we’ve done it, but it did limit us.” He said the new space would allow MIT musicians to hear their music as it was intended to be heard and “provide an opportunity to convene people to inhabit the same space, breathe the same air, and exchange ideas and perspectives.” The building, made possible by a gift from the late philanthropists Edward ’62 and Joyce Linde, has already transformed daily music life on campus. Musicians, engineers, and designers now cross paths more often as they make use of its rehearsal rooms, performance spaces, studios, and makerspace, and their ideas have begun converging in distinctly MIT ways. Antonis Christou, a second-year master’s student in the Opera of the Future group at the MIT Media Lab and an Emerson/Harris Scholar, says he’s there “all the time” for classes, rehearsals, and composing.
“It’s really nice to have a dedicated space for music on campus. MIT does have very strong music and arts programs, so I think it reflects the strength of those programs,” says Valerie Chen ’22, MEng ’23, a cellist and PhD candidate in electrical engineering who works on interactive robotics. “But more than that, I think it makes a statement that technology and the arts, and music in particular, are very interconnected.” A building tuned for acoustics and performance Acoustic innovation shaped every aspect of the building’s 35,000 square feet of space. From the outset, the design team faced a fundamental challenge: how to create a facility where radically different types of music could coexist without interference. Keeril Makan, the Michael (1949) and Sonja Koerner Music Composition Professor and associate dean of MIT’s School of Humanities, Arts, and Social Sciences (SHASS), helped lead that effort.
“It was important to me that we could have classical music happening in one space, world music in another space, jazz somewhere else, and also very fine measurements of sound all happening at the same time. And it really does that,” says Makan. “But it took a lot of work to get there.” Keeril Makan, professor of composition and associate dean of SHASS, helped spearhead the effort to create a building in which radically different kinds of musicmaking could happen simultaneously.WINSLOW TOWNSON That work resulted in a building made up of three artfully interconnected blocks, creating three acoustically isolated zones: the Thomas Tull Concert Hall, the Erdely Music and Culture Space, and the Lim Music Maker Pavilion. Thick double shells of concrete enclose each zone, and their physical separation minimizes vibration transfer between them. One space for world music rests on a floating slab above the building’s underground parking garage and is constructed using a box-in-box method, with its inner room structurally isolated from the rest of the building. Other rooms use related techniques, with walls, floors, and ceilings separated by layers of sound-dampening materials and structural isolation systems to reduce sound transmission. The building was designed by the Japanese architecture firm SANAA, in close collaboration with Nagata Acoustics, the team behind Berlin’s Pierre Boulez Saal. Inspired in part by that German hall, the 390-seat Thomas Tull Concert Hall is meant to serve musicians’ varying acoustic needs. Inside, ceiling baffles and perimeter curtains make it possible to adapt the room on demand, shifting the acoustics from resonant and open for chamber music and classical performances to drier and more controlled for jazz or electronic music. Makan and the acoustics team pushed for a 50-foot ceiling, a requirement from Nagata for acoustic flexibility and performance quality. The result is a concert hall that breaks from traditional form. Instead of occupying a raised stage facing rows of seats, performers in Tull Hall are positioned at the bottom of the space, with the audience seated around and above them. This layout alters the relationship between listeners and performers; audience members can choose to sit next to the string section or behind the pianist, experiencing sounds and sights typically reserved for musicians. The circular configuration encourages movement, intimacy, and a more immersive musical experience.  “It’s a big opportunity for creativity,” says Ian Hattwick, a lecturer in music technology. “You can distribute musicians around the hall in interesting ways. I really encourage people in electronic music concerts to come up and get close. You can come up and peer over somebody’s shoulder while they’re playing. It’s definitely different. But I think it’s beautiful.” That sense of openness shaped one of the first performances in the new hall. As part of the building’s opening-weekend event in February, called “Sonic Jubilance,” the Fabulous MIT Laptop Ensemble (FaMLE), directed by Hattwick, took the stage, testing the venue’s variable acoustics and capacity for spatial experimentation as it employed laptops, gestural controllers, and other electronic devices to improvise and perform electronic music. “I was really struck by how good it sounded for what I do and for what FaMLE does,” says Hattwick. “There’s a surround system of speakers. It was really fun and really satisfying, so I’m super excited to spend some more time working on spatial audio applications.” That evening, a concert featured performances by a diverse array of additional ensembles and world premieres by four MIT composers. It was the first moment many performers heard what the hall could do—and the first time they’d shared a space designed for all of them. JONATHAN SACHS JONATHAN SACHS The community joined MIT music faculty, staff, and students for special workshops and short performances at the building’s public opening in February. Since then, the hall has hosted a wide range of performances, from student recitals to concerts featuring guest artists. In the span of two weeks in March, the Boston Chamber Music Society celebrated the music of Fauré and the Boston Symphony Chamber Players performed works by Aaron Copland, Brahms, and MIT’s own Makan. Other concerts have featured student compositions, historical instruments, and multichannel electronic works. 

Just a few steps from the entrance to Tull Concert Hall, across the brick- and glass-lined lobby, the Beatrice and Stephen Erdely Music and Culture Space supports a different kind of sound. It’s designed to host rehearsals of percussion groups like Rambax MIT, the Institute’s Senegalese drumming ensemble, which uses hand-carved sabar drums, each played with a stick and open palm to produce tightly woven polyrhythms. At other times, students gather there around bronze-keyed instruments as they play with the Gamelan Galak Tika ensemble, practicing the interlocking patterns of Balinese kotekan.  Such music was originally meant to be performed in the open. The Music and Culture Space provides the physical and sonic headroom these traditions require, using materials chosen not only to isolate sound but also to let it breathe. Inside, the room thrums with rhythm, while just outside its walls, the rest of the building stays silent. “We can imagine [world music] growing with this new home,” says Makan. Previously, these ensembles had rehearsed in a converted space inside the old MIT Museum building on Massachusetts Avenue, separated from the rest of the music program.  “They deserved their own space for so long,” says Hattwick, “and it’s really fantastic that they managed to get it and that it is integrated in the music building the way that it is.”  The soaring ceiling of the Beatrice and Stephen Erdely Music and Culture Space provides the physical and sonic headroom for percussion ensembles.ADAM DETOUR The building’s commitment to sound isolation extends beyond its rehearsal and performance spaces, and for faculty working in sound design and music technology, it has changed their daily rhythms. Mark Rau, an assistant professor of music technology with a joint appointment in electrical engineering and computer science (EECS), regularly uses speakers at high volume in his office—something that he says wouldn’t have been possible in MIT’s previous facilities. “All the rooms in the building have good sound isolation, even the offices—not just the performance rooms, which is pretty great,” says Rau, whose second-floor office in the Jae S. and Kyuho Lim Music Maker Pavilion features gray acoustic panels lining the walls and ceiling. “To be able to test the algorithms that I’m working on and things for homework assignments, and not bother my neighbors, is important.”  The attention to acoustic detail continues upstairs. On the fourth floor, Rau ran the first two sessions in the building’s new recording facilities, which were purpose-­built to support both ensemble work and critical listening. He says they offer professional-­quality recording. The recording suite includes a large main room that can accommodate up to a dozen players, a smaller isolation booth for separating instruments or voices, and a control room designed for precise monitoring. Each space is acoustically treated and linked to the building’s dedicated audio network, so sound can be routed from any room in the building to any other in real time.  
In the music technology research lab, undergraduate researchers (from left) Mouhammad Seck ’27, Anthony Wang ’28, and Alex Jin ’27 model the sounds of historic instruments— many of which are unplayable—from the collection of the MFA Boston.ADAM DETOUR “You could record an entire symphony orchestra, and almost everybody could be in a different room,” says Hattwick. Or you could have the orchestra playing together in the concert hall and record it in one of the studios. The whole building uses a digital audio protocol called Dante, which allows low-latency, high-fidelity ­transmission over Ethernet. MIT multimedia specialist Cuco Daglio, who helped oversee technical planning, advocated for that level of fidelity. “It’s a beautifully designed acoustic space,” says Hattwick. 
The building’s exterior reflects a similar attention to performance. The arch above its entryway facing the Johnson Athletic Center and the Zesiger Sports and Fitness Center forms a conical shell that shapes and reflects sound, creating a natural stage. On warm days, music drifts out into the open air as groups rehearse beneath the overhang or students gather to play informally in small groups.  New program, new space This fall, MIT is launching a new one-year master’s program in music technology, bringing together faculty from engineering and the arts. The Linde Music Building serves as the program’s home base, providing studios, tools, and collaborative spaces that students will use to design new instruments, software, and performance systems. Eran Egozy ’93, MEng ’95, professor of the practice in music technology and cofounder of Harmonix Music Systems, which developed Guitar Hero and Rock Band, directs the program. He developed the curriculum with Anna Huang, SM ’08, an associate professor with a joint appointment in music and EECS who did research on human-AI music collaboration technologies at Google, and he, Huang, and Rau are among its faculty. Eran Egozy ’93, MEng ’95, professor of the practice in music technology and one of the masterminds behind Guitar Hero and Rock Band, directs the Institute’s new master’s program in music technology.KATE LEMMON “It’s really about inventing new things,” says Egozy. “Asking questions like: What would the future musician want? What kinds of tools will a composer want?” Rachel Loh ’25, who double-majored in computer science and engineering and music, will be part of the inaugural cohort. A vocalist with Syncopasian, MIT’s East Asian a cappella group, she draws on performance experience in her research. Her current project explores how AI systems improvise alongside human musicians, using visualizations to provide insight into machine decision-making. “In high school, I knew I wanted to work at the intersection of music and computer science,” she says. “Now, this new music tech program is the perfect thing for me.” A performance in the Thomas Tull Concert Hall.KATE LEMMON A flexible workshop on the Music Maker Pavilion’s second floor will serve as a core space for the new program, outfitted with essentials like soldering stations, a laser cutter, and testing gear but left unfinished by design. Hattwick and Rau, who oversee the space, are allowing its exact form to emerge over time. 
“We’ve been spending this year outfitting it and starting to think about how we make all of these resources available to our students, and what the best way is to utilize this opportunity in this space,” Hattwick says. “[The makerspace] directly supports research and our specific coursework.”  Already, students have begun to push the makerspace into new territory. Some are designing analog circuits and signal-­boosting devices known as preamplifiers for musical instrument sensors. Others are experimenting with embedded systems that blur the boundary between physical and digital sound. In one class, students are building custom digital instruments from scratch—tools that don’t yet exist, shaped to suit musical ideas still in formation. The building’s infrastructure, including features like Dante, gives these projects unusual flexibility. In March, the building served as a backdrop for large-scale projections of animated visuals created by students in MIT’s Interactive Design and Projection for Live Performance class.AV PRODUCTIONS Ayyub Abdulrezak ’24, MEng ’25, one of Egozy’s students, worked in the makerspace to develop compact sensor boxes that combine a microphone, a Raspberry Pi board, and custom signal-processing software. Each device logs when and how long a campus piano is played, sending the data to a central server. The resulting heat maps could help inform tuning schedules, improve access, or guide planning for music spaces across MIT. The makerspace also supports repair, maintenance, and modification. Hattwick describes it as a place to “build and fix and maintain and explore new kinds of instruments,” where students can learn what it means to refine a musical system—not just in theory but in screws, solder, and code. Rau, who also builds guitars, is incorporating more hands-on fabrication into his courses, merging electronics with instrument making and repair to yield a unified design practice.
Alex Mazurenko ’28 is an undergraduate researcher working on slip casting, impedance testing, and musical instrument accessory designs. Here, he uses CAD software to design a custom saxophone mouthpiece.ADAM DETOUR After 3D-printing his model, Mazurenko reviews the design with his advisor, senior postdoctoral associate Benjamin Sabatini.ADAM DETOUR He then refines the prototype using tools in the makerspace, a workshop where students can fabricate analog circuits, musical sensors, and even custom instruments.ADAM DETOUR Mazurenko brings the prototype to the Laboratory for Manufacturing and Productivity, where he images it in an x-ray CT scanner built by Lumafield, a startup founded by MIT alumni. He will use the scan to create a digital model for further testing and iteration.ADAM DETOUR While the space is still growing into its full potential, its ethos is clear: experimentation at the intersection of sound, system, and student agency. These kinds of projects rely not only on equipment but on space where musicians can experiment, fail, and refine. As the new master’s program takes shape, that environment will be central to how students learn and create. Building sound and community For the first time, MIT musicians, technologists, composers, and researchers share a space designed to bring their disciplines into conversation. The building’s form encourages these exchanges. Its three wings connect through a glass-lined lobby filled with daylight and movement. Students pause there to talk, overhear a rehearsal in progress, or catch sight of a friend heading to a practice room.  Curves abound in the brick- and glass-lined lobby of the Edward and Joyce Linde Music Building. ADAM DETOUR “Music is such a community thing,” says Christou. “I’ve learned about concerts, or that someone is coming to visit, or I’ve seen friends just studying or practicing. It’s really nice to have a hub with musical activity.” Egozy sees these exchanges as central to the building’s mission. “It’s the idea cross-pollination that happens when you just happen to run into someone you know, literally by the water cooler, and you’re just chatting about this or that,” he says. “That’s my favorite part.” Many of these encounters occur in the makerspace, where students working on entirely different projects end up asking each other questions, swapping tools, or launching ideas together.  “Lots of students from all different walks of life have been building instruments, prototyping different devices,” says Makan, who adds that he wants the new building to be “a place for people to gather and hang out.” Many ensembles that once rehearsed in classrooms scattered across campus now work in adjoining rooms. “You feel like something is always happening,” Christou says. “It’s not just your practice or your rehearsal. It’s this sense of a shared rhythm.” New frontiers for MIT’s music culture Already, the Linde Music Building is affecting how music is conceived, taught, and experienced at MIT. Faculty members are rethinking syllabi to take advantage of the building’s multi-room routing capability and to delve more into spatial acoustics, interactive sound design, and even instrument making. Students are beginning to compose with acoustics in mind, treating the building itself as part of their instrument. For example, Rau is engaging students in projects that explore room dynamics and acoustics as integral to music. In one class, students listen for differences in how music sounds in various parts of Tull Hall and observe changes when the curtains are used. Then they conduct acoustic measurements of the hall’s reverberation and build a digital copy of the hall, creating a sonic blueprint of the space that lets them produce artificial reverberation. Egozy, meanwhile, is developing tools that let performers engage audiences in new ways.  This June, one of those ideas was scaled up. As part of the International Computer Music Conference, MIT premiered a piece that invited audience members to shape the sound in real time using their phones. Musicians performed in Tull Hall, surrounded by a circular array of 24 speakers, with the audio shifting throughout the space in response to the audience input.  Undulating walls and an overhanging ring of glass panels help engineers customize the acoustics for each performance in the Thomas Tull Concert Hall.ADAM DETOUR Performances like these are fueling growing interest in the building’s creative potential at MIT and beyond. Visiting composers have proposed site-specific works. Local ensembles are booking time to record in Tull Hall. Faculty are exploring how the building might support residencies that pair MIT researchers with performers working at the leading edges of both sound and computation. The circular Tull Hall allows countless configurations for both performers and audiences. Here singers perform from the upper level of the hall while instrumentalists play from center stage at the base of the room.CAROLINE ALDEN “This hall is really special. There’s nothing like it anywhere in the Boston area,” Egozy says. “We will have a lot of really amazing events that will draw people into MIT. We’re excited about what it’s going to do for the MIT students, but it’s also going to do a lot just for the whole Boston area.” Each day, students and faculty explore its possibilities—linking rehearsal with recording, sound design with performance, tradition with experiment. MIT is “a place to enable exploration of new vistas, and really letting everyone pursue their path to what their vision is,” Hattwick says. “The music building is just going to be like a huge boost to doing even more cool things in the future.”

Read More »

MIT is worth fighting for

As I write in late July, we’re contending with a major tax increase on the annual returns from MIT’s endowment as well as other investments and assets. This new tax burden will strain the resources we use to support research, innovation, and student scholarships and financial aid—the heart and soul of the Institute.  And the financial impact on us will be significant: This tax increase alone will cost in the range of 10% of MIT’s annual central budget.  Unfortunately, we face the prospect of further threats to our mission and financial model this fall when Congress considers drastic cuts to the research budgets of federal agencies. And all this comes on the heels of multiple US science agencies capping their reimbursement of research infrastructure and administration expenses well below actual costs. These reimbursements are critical to operating our world-class research enterprise, and that’s why we have challenged the government’s actions in court.  I don’t expect we all agree on the ideal contours of the Institute’s future. But I have to believe that we all agree it should have a future. For more information—and ways to help—you can consult these online resources: – Visit Understanding MIT for a comprehensive view of the Institute’s value to the nation and the world.   – Go to Stand up for MIT and find ways to take action. – And visit MIT’s Response to government activity page to keep up to date on what’s happening in Washington and how it’s affecting the nation’s great research enterprise.  MIT was built with the support of generations of alumni and friends—and it’s up to us to keep its foundations strong for those to come.  So I hope you will join me in standing up for MIT.

Read More »

Junior Peña, neutrino hunter

Growing up in South Central Los Angeles, Junior Peña learned to keep his eyes down and his schedule full. In his neighborhood, a glance could invite trouble, and many kids—including his older brother—were pulled into gang culture. He knew early on that he wanted something else. With his parents working long hours, he went to after-school programs, played video games, and practiced martial arts. But his friends had no idea that he also spent hours online poring over textbooks and watching lectures, teaching himself advanced mathematics and philosophy. “Being good at school wasn’t how people saw me,” he says.  One night in high school, he came across a YouTube video about the Higgs boson—the so-called “God particle,” thought to give mass to nearly everything in the universe. “I remember my mind being flooded with questions about life, the universe, and our existence,” he recalls. He’d already looked into philosophers’ answers to those questions but was drawn to the more concrete explanations of physics. After his independent study helped Peña pass AP calculus as a junior, his fascination with physics led him to the University of Southern California, the 2019 session of MIT’s Summer Research Program, and then MIT for grad school. Today, he’s working to shed light on neutrinos, the ghostly uncharged particles that slip effortlessly through matter. Particles that would require a wall of lead five light-years thick to stop. As a grad student in the lab of Joseph Formaggio, an experimental physicist known for pioneering new techniques in neutrino detection, Peña works alongside leading physicists designing technology to precisely measure what are arguably the universe’s most elusive particles. Emanating from such sources as the sun and supernovas (and generated artificially by particle accelerators and nuclear reactors), neutrinos reveal their presence through an absence. Their existence was initially posited in the 1930s by the physicist Wolfgang Pauli, who noticed that energy seemed to go missing when atoms underwent a process known as radioactive beta decay. According to the law of conservation of energy, the total energy of the particles emitted during radioactive decay must equal the energy of the decaying atom. To account for the missing energy, Pauli proposed the existence of an undetectable particle that was carrying it away. 
Einstein’s E = mc2 tells us that if energy is missing, then mass must be too. Yet according to the standard model of physics—which offers our most trusted theory for how particles behave—neutrinos should have no mass at all. Unlike other particles, they don’t interact with the Higgs field, a kind of cosmic molasses that slows particles down and gives them mass. Because they pass through it untouched, they should remain massless.  But by the early 2000s, researchers had discovered that neutrinos, which had first been detected in the 1950s, can shift between three types, a feat possible only if they have mass. So now the tantalizing question is: What is their mass? 
Determining neutrinos’ exact mass could explain why matter triumphed over antimatter, refine models of cosmic evolution, and clarify the particles’ role in dark matter and dark energy. And the Formaggio Lab is part of Project 8, an international collaboration of 71 scientists in 17 institutions working to make that measurement. To do this, the lab uses tritium, an unstable isotope of hydrogen that decays into helium, releasing both an electron and a particle called an antineutrino (“every particle has an antiparticle counterpart,” Formaggio explains). By precisely measuring the energy spectrum of those electrons, scientists can determine how much energy is missing, allowing them to infer the neutrinos’ mass. At the heart of this experiment is a novel detection method called cyclotron radiation emission spectroscopy (CRES), first proposed in 2008 by Formaggio and his then postdoc Benjamin Monreal, which “listens” to the faint radio signals emitted as electrons spiral through a magnetic field. Peña was instrumental in designing a crucial part of the tool that will make this possible: a copper cavity that he likens to a guitar, with the electrons released during beta decay acting like plucked strings. The cavity will amplify their signals, helping researchers to measure them exactly. Peña spent more than a year developing and refining a flashlight-size prototype of the device in collaboration with machinists and fellow physicists. Peña designed a prototype copper microwave resonator to amplify the signals of electrons emitted as tritium decays, allowing researchers to measure them exactly and infer the neutrino’s mass.JESSICA CHOMIK-MORALES, SM ’25 “He had to learn the [design and simulation] software, figure out how to interpret the signals, and test iteration after iteration,” says Formaggio, Peña’s advisor. “It’s been incredible watching him take this from a rough idea to a working design.” The design of Peña’s cavity had to balance competing demands. It needed a way to extract the electrons’ signals that was compatible with the researchers’ methods for calibrating the system, one of which involves using an electron gun to inject electrons of a known, precise energy into the cavity. And it also needed to preserve the properties of the electromagnetic fields within the cavity. In May, Peña sent his final prototype to the University of Washington, where it was installed in July. Researchers hope to begin calibration this fall. Then Peña’s cavity and the full experimental setup will be scaled up so in a few years they can begin collecting CRES data using tritium. “We’ve been working toward this for at least three years,” says Jeremy Gaison, a Project 8 physicist at the Pacific Northwest National Lab. “When we finally turn on the experiment, it’s going to be incredible to see if all of our simulations and studies actually hold up in real data.” Peña’s contribution to the effort “is the core of this experiment,” says Wouter Van De Pontseele, another Project 8 collaborator and former Formaggio Lab postdoc. “Junior took an idea and turned it into reality.”  Project 8 is still in its early stages. The next phase will scale up with larger, more complex versions of the technology Peña played a key role in developing, culminating in a vast facility designed to hunt for the neutrino’s mass. If that is successful, the findings could have profound implications for our understanding of the universe’s structure, the evolution of galaxies, and even the fundamental nature of matter itself. Eager to keep probing such open questions in fundamental physics, Peña is still exploring options for his postdoc work. One possibility is focusing on the emerging field of levitated nanosensors, which could advance gravitation experiments, efforts to detect dark matter, and searches for the sterile neutrino, a posited fourth variety that interacts even more rarely than the others. “Experimental particle physics is long-term work,” says Van De Pontseele. “Some of us will stay on this project for decades, but Junior can walk away knowing he made a lasting impact.” Peña also hopes to have a lasting impact as a professor, opening doors for students who, like him, never saw themselves reflected in the halls of academia. “A summer program brought me here,” he says. “I owe it to the next kid to show they belong.”

Read More »

Fix damaged art in hours with AI

Art restoration takes steady hands and a discerning eye. For centuries, conservators have identified areas needing repair and then mixed the exact shades needed to fill in one area at a time. Restoring a single painting can take anywhere from a few weeks to over a decade. Now an MIT graduate student in mechanical engineering has used artificial intelligence to speed up the process by orders of magnitude. Digital restoration tools are not new; computer vision, image recognition, and color matching have all helped generate repaired versions of damaged paintings in recent years. But until now, there has been no way to apply the results directly onto an original canvas. Instead, they are usually displayed virtually or printed as stand-alone works. In his study, Alex Kachkine, SM ’23, presents a new method he’s developed that involves printing the restoration on a very thin polymer film that can be carefully aligned with a painting and adhered to it or easily removed. As a demonstration, he used the method to repair a highly damaged 15th-century oil painting he owned. First he used traditional techniques to clean the painting and remove any past restoration efforts. Then he scanned the painting, including the many regions where paint had faded or cracked, and used existing algorithms to create a virtual version of what it may have looked like originally. Next, Kachkine used software he developed to create a map of regions on the original painting that require infilling, along with the exact colors needed. The method automatically identified 5,612 regions in need of repair and filled them in using 57,314 different shades. This map was then translated into a physical, two-layer mask printed onto polymer-based films. The first layer was printed in color, while the second layer was printed in the exact same pattern but in white.
“In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains. He used high-fidelity commercial inkjets to print the mask’s two layers, which he carefully aligned with the help of computational tools he developed. Then he overlaid them by hand onto the original painting and adhered them with a thin spray of conventional varnish. The films are made from materials that can be easily dissolved in case conservators need to reveal the original, damaged work. The entire process took 3.5 hours, which he estimates is about 66 times faster than traditional restoration methods. If this method is adopted widely, Kachkine emphasizes, conservators should be involved at every step, to ensure that the final work is in keeping with an artist’s style and intent. The digital file of the mask can also be saved to document exactly what was restored. “Because there’s a digital record of what mask was used, in 100 years, the next time someone is working with this, they’ll have an extremely clear understanding of what was done to the painting,” Kachkine says. “And that’s never really been possible in conservation before.” The result, he hopes, will be a new lease on life for many works that have not had a chance to be repaired by hand. “There is a lot of damaged art in storage that might never be seen,” he says. “Hopefully with this new method, there’s a chance we’ll see more art.” 

Read More »

Emergency help for low blood sugar

Most people with type 1 diabetes inject insulin to prevent their blood sugar levels from getting too high. However, if their blood sugar gets too low, it can lead to confusion, seizures, and even death. To combat this hypoglycemia, some patients carry syringes of glucagon, a hormone that stimulates release of glucose. Now MIT engineers have developed an alternative that could work even when people don’t realize they are becoming hypoglycemic. It could also help during sleep, or for children who are unable to inject themselves. “Our goal was to build a device that is always ready to protect patients,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and the senior author of a study on the work. The implantable device, about the size of a quarter, contains a polymer reservoir holding powdered glucagon and sealed with a material that can be programmed to change shape when heated. It also includes an antenna that allows the user to remotely turn on a small electrical current, which heats that material until it bends and releases the drug. Because the device can receive wireless signals, it could also be triggered automatically by a glucose monitor. The researchers have successfully tested the implant in mice and say it could also be used to deliver epinephrine to treat heart attacks or prevent anaphylactic shock. 

Read More »

Nous Research drops Hermes 4 AI models that outperform ChatGPT without content restrictions

Nous Research, a secretive artificial intelligence startup that has emerged as a leading voice in the open-source AI movement, quietly released Hermes 4 on Monday, a family of large language models that the company claims can match the performance of leading proprietary systems while offering unprecedented user control and minimal content restrictions.The release represents a significant escalation in the battle between open-source AI advocates and major technology companies over who should control access to advanced artificial intelligence capabilities. Unlike models from OpenAI, Google, or Anthropic, Hermes 4 is designed to respond to nearly any request without the safety guardrails that have become standard in commercial AI systems.“Hermes 4 builds on our legacy of user-aligned models with expanded test-time compute capabilities,” Nous Research announced on X (formerly Twitter). “Special attention was given to making the models creative and interesting to interact with, unencumbered by censorship, and neutrally aligned while maintaining state of the art level math, coding, and reasoning performance for open weight models.”Hermes 4 introduces what Nous Research calls “hybrid reasoning,” allowing users to toggle between fast responses and deeper, step-by-step thinking processes. When activated, the models generate their internal reasoning within special tags before providing a final answer — similar to OpenAI’s o1 reasoning models but with full transparency into the AI’s thought process.

Read More »

Oil Climbs as Peace Talk Prospects Fade

Oil gained as the waning prospect of a peace agreement between Russia and Ukraine reduced the likelihood of more of Moscow’s supplies reaching broader markets in the near term. West Texas Intermediate crude rose 0.7% to top $64 a barrel, reversing earlier losses, after German Chancellor Friedrich Merz told reporters that a meeting between Ukrainian President Volodymyr Zelensky and Russia’s Vladimir Putin “won’t happen.” Talks between the leaders were seen as a step toward a peace deal that could pave the way for reduced restrictions on Russian crude exports. President Donald Trump is also set to release a statement on Russia and Ukraine later, leading traders to hedge for stricter penalties on Moscow’s energy shipments. “Trump is going to have to decide if he really wants to impose sanctions or give negotiations one more go,” said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. Still, “the market is used to the can being kicked down the road, so very minimal risk premium is being priced in.” Ukraine has ramped up drone attacks on Russia’s oil infrastructure over the past month, most recently hitting two refineries. Moscow’s crude exports slipped last week, tanker-tracking data compiled by Bloomberg showed, after Ukraine intensified its attacks. The development comes as White House trade adviser Peter Navarro stepped up pressure on India to halt purchases of Russian oil after Washington doubled a levy on imports from the country to 50%. Still, the outlook remains overall bearish. Oil markets are widely expected to move into a surplus toward the end of the year, as higher output from the OPEC+ alliance and outside of the grouping overwhelms demand. The producer group is due to meet on Sept. 7, but no talks have been held yet about its next moves, according to a senior OPEC

Read More »

Forget data labeling: Tencent’s R-Zero shows how LLMs can train themselves

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now A new training framework developed by researchers at Tencent AI Lab and Washington University in St. Louis enables large language models (LLMs) to improve themselves without requiring any human-labeled data. The technique, called R-Zero, uses reinforcement learning to generate its own training data from scratch, addressing one of the main bottlenecks in creating self-evolving AI systems. R-Zero works by having two independent models co-evolve by interacting with and challenging each other. Experiments show that R-Zero substantially improves reasoning capabilities across different LLMs, which could lower the complexity and costs of training advanced AI. For enterprises, this approach could accelerate the development of specialized models for complex reasoning tasks without the massive expense of curating labeled datasets. The idea behind self-evolving LLMs is to create AI systems that can autonomously generate, refine, and learn from their own experiences. This offers a scalable path toward more intelligent and capable AI. However, a major challenge is that training these models requires large volumes of high-quality tasks and labels, which act as supervision signals for the AI to learn from. Relying on human annotators to create this data is not only costly and slow but also creates a fundamental bottleneck. It effectively limits an AI’s potential capabilities to what humans can teach it. To address this, researchers have developed label-free methods that derive reward signals directly from a model’s own outputs, for example, by measuring its confidence in an answer. While these methods eliminate the need for explicit labels, they still rely on a pre-existing set of tasks, thereby limiting their applicability in truly self-evolving scenarios. AI Scaling Hits Its Limits Power caps, rising token costs, and inference delays are reshaping enterprise AI.

Read More »

Nvidia’s $46.7B Q2 proves the platform, but its next fight is ASIC economics on inference

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Nvidia reported $46.7 billion in revenue for fiscal Q2 2026 in their earnings announcement and call yesterday, with data center revenue hitting $41.1 billion, up 56% year over year. The company also released guidance for Q3, predicting a $54 billion quarter. Behind these confirmed earnings call numbers lies a more complex story of how custom application-specific integrated circuits (ASICs) are gaining ground in key Nvidia segments and will challenge their growth in the quarters to come. Bank of America’s Vivek Arya asked Nvidia’s president and CEO, Jensen Huang, if he saw any scenario where ASICs could take market share from Nvidia GPUs. ASICs continue to gain ground on performance and cost advantages over Nvidia, Broadcom projects 55% to 60% AI revenue growth next year. Huang pushed back hard on the earnings call. He emphasized that building AI infrastructure is “really hard” and most ASIC projects fail to reach production. That’s a fair point, but they have a competitor in Broadcom, which is seeing its AI revenue steadily ramp up, approaching a $20 billion annual run rate. Further underscoring the growing competitive fragmentation of the market is how Google, Meta and Microsoft all deploy custom silicon at scale. The market has spoken. AI Scaling Hits Its Limits Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are: Secure your spot to stay ahead: https://bit.ly/4mwGngO ASICs are redefining the competitive landscape in real-time Nvidia is more than capable of competing with new ASIC providers. Where they’re running into headwinds is how effectively ASIC competitors are positioning the combination of their use cases, performance claims

Read More »

Energy Secretary Issues Order to Secure Grid Reliability in Mid-Atlantic

WASHINGTON—U.S. Secretary of Energy Chris Wright issued an emergency order to minimize the risk of energy shortfalls in the Mid-Atlantic region of the United States. Secretary Wright’s order directs PJM Interconnection (PJM), in coordination with Constellation Energy, to ensure Units 3 and 4 of the Eddystone Generating Station in Pennsylvania remain available for operation. Ensuring these units remain operational minimizes the risk of generation shortfall that could lead to unnecessary power outages. “With unprecedented energy demand and resource retirements outpacing new generation additions, the country is facing an energy emergency. Today’s order proves that the Trump Administration is dedicated to confronting this critical issue,” said U.S. Secretary of Energy Chris Wright. “This administration considers power outages and soaring energy costs to be unacceptable.” As outlined in DOE’s Grid Reliability Evaluation, power outages could increase by 100 times in 2030 if the U.S. continues to take reliable power offline.  Secretary Wright ordered that the two Eddystone Generating Station units remain online past their planned retirement date in a May 30, 2025 emergency order. Keeping these units operational over the past 90 days has improved energy security in the PJM region, as demonstrated by the fact that PJM called on the Eddystone Units to generate electricity during heat waves that hit the region in June and July. The emergency conditions that led to the issuance of the first order persist.  This order is in effect beginning on August 28, 2025, and continues until November 26, 2025.  Background: PJM has voiced resource adequacy concerns for years. In its February 2023 report, PJM highlighted the increasing resource adequacy concerns and reliability risks in the coming years due to the potential timing mismatch between resource retirements, load growth and the pace of new generation entry.  In a December 2024 filing at the Federal Energy Regulatory Commission (FERC), PJM

Read More »

3.5 GW of offshore wind in New England could offset natural gas price spikes: report

Dive Brief: If the 3.5 GW of wind energy projects currently contracted offshore New England had been operational last winter, it could have offset the surge in natural gas prices that season and saved ratepayers a total of $400 million on their energy bills, according to a Wednesday report from Daymark Energy Advisors. The report estimated that “savings exceeded [power purchase agreement] costs across all scenarios, yielding annual bill savings of $1.32 to $2.68 per month for an average Eversource [Energy] residential customer.” RENEW Northeast, the group that commissioned the report, noted that ISO New England released a report last month which found gas prices in spring 2025 averaged $3.40 per million British thermal units, 112% higher than the spring 2024 price of $1.60/MMBtu. Dive Insight: The report examines the “potential regional market and Massachusetts ratepayer impacts” if 3.5 GW of offshore wind had been generating power between Dec. 2024 and Feb. 2025. “Even using the most conservative assumptions about cleared offers in Forward Capacity Auction 15 … clearing additional qualified capacity from OSW would have reduced FCA15 costs by at least $128 million, with 83% ($106 million) allocable to Massachusetts load zones,” the report said.  Daymark Energy Advisors found that “injecting near-zero marginal cost offshore wind into the energy market would have reduced ISO-NE Locational Marginal Prices by 11% ($12.60/MWh), reducing wholesale load costs across New England by roughly $400 million. Roughly $129 million of the regional savings would have been allocable to [Massachusetts electric distribution companies].” Last winter was the “first since 2014 to see below-normal temperatures over the course of an entire season,” ISO-NE said in an April release, and natural gas prices rose in accordance with demand.  President Donald Trump and New England leaders like Connecticut Gov. Ned Lamont, D, and New Hampshire Gov. Kelly Ayotte,

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy indusrty news. Spend 3-5 minutes and catch-up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE