Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Featured Articles

Azule Energy discovers oil offshore Angola

Azule Energy and partners discovered oil in Block 15/06 in the Lower Congo basin offshore Angola. Preliminary estimates indicate oil in place of around 500 million bbl, and the presence of existing nearby production infrastructure—about 18 km from the Olombendo FPSO—improves development prospects, the operator said in a release Feb. 13. The Algaita-01 exploration well, spudded on Jan. 10, 2026, was drilled by the Saipem 12000 drillship in a water depth of 667 m. The well encountered oil-bearing sandstones in multiple Upper Miocene intervals. Drilling operations were completed Jan. 26, followed by advanced formation evaluation logs to assess reservoir quality and fluid characteristics. Preliminary interpretation of wireline logging and fluid sampling indicates the presence of multiple reservoir intervals with excellent petrophysical properties and fluid mobilities, the company said. Azule Energy is an incorporated joint venture equally owned by bp plc and Eni SpA. The company currently produces around 200,000 boe/d in Angola. Block 15/06 is operated by Azule Energy (36.84%), in partnership with SSI (26.32%) and Sonangol E&P (36.84%).

Read More »

ExxonMobil transporting, storing captured CO2 from second operation in Louisiana

ExxonMobil Corp. is now transporting and storing captured CO2 from the New Generation Gas Gathering (NG3) project in Gillis, La. Natural gas produced from East Texas and Louisiana is gathered through the NG3 gathering system for treatment at the NG3 Gillis plant, where up to 1.2 million metric tons/year (tpy) of CO2 is expected to be removed from the natural gas stream before the product is redelivered to Gulf Coast markets, including LNG plants, ExxonMobil said. This startup marks the second active commercial carbon capture and storage (CCS) operation for ExxonMobil in Louisiana.
In July 2025, the company began transporting and storing CO2 from Illinois-based CF Industries Holdings Inc.’s Donaldsonville Complex, enabling the production of low-carbon ammonia. CF Industries’ Donaldsonville Complex is located on 1,400 acres along the west bank of the Mississippi River in southeastern Louisiana. The CO2 contracted for the company’s two active projects accounts for up to 3.2 million tpy, about one-third of ExxonMobil’s committed CCS volumes. The company is currently storing

Read More »

Ovintiv to divest Anadarko assets for $3 billion

In a release Feb. 17, Brendan McCracken, Ovintiv president and chief executive officer, said the company has “built one of the deepest premium inventory positions in our industry in the two most valuable plays in North America, the Permian and the Montney,” and that the Anadarko assets sale “positions [Ovintiv] to deliver superior returns for our shareholders for many years to come.” In 2025, Ovintiv noted plans to sell the assets to help offset the cost of its acquisition of NuVista Energy Ltd. That $2.7-billion cash and stock deal, which closed earlier this month, added about 930 net 10,000-ft equivalent well locations and about 140,000 net acres (70% undeveloped) in the core of the oil-rich Alberta Montney. Proceeds from the Anadarko assets sale are earmarked for accelerated debt reduction, the company said. Ovintiv’s sale of its Anadarko assets is expected to close early in this year’s second quarter, subject to customary conditions, with an effective date of Jan. 1, 2026.

Read More »

Azule Energy starts Ndungu full field production offshore Angola

Azule Energy has started full field production from Ndungu, part of the Agogo Integrated West Hub Project (IWH) in the western area of Block 15/06, offshore Angola. Ndungu full field lies about 10 km from the N’goma FPSO in a water depth of around 1,100 m and comprises seven production wells and four injection wells, with an expected production peak of 60,000 b/d of oil. The National Agency for Petroleum, Gas and Biofuels (ANPG) and Azule Energy noted the full field start-up with first oil from three production wells. The phased integration of IWH, with Ndungu full field producing first via N’goma FPSO and later via Agogo FPSO, is expected to reach a peak output of about 175,000 b/d across the two fields. The fields have combined estimated reserves of about 450 million bbl. The Agogo IWH project is operated by Azule Energy with a 36.84% stake alongside partners Sonangol E&P (36.84%) and Sinopec International (26.32%).

Read More »

North Atlantic’s Gravenchon refinery scheduled for major turnaround

Canada-based North Atlantic Refining Ltd.’s France-based subsidiary North Atlantic France SAS is undertaking planned maintenance in March at its North Atlantic Energies-operated 230,000-b/d Notre-Dame-de-Gravenchon refinery in Port-Jérôme-sur-Seine, Normandy. Scheduled to begin on Mar. 3 with the phased shutdown of unidentified units at the refinery, the upcoming turnaround will involve thorough inspections of equipment designed for continuous operation, as well as unspecified works to improve the energy efficiency, environmental performance, and overall competitiveness of the site, North Atlantic Energies said on Feb. 16. The turnaround is part of the operator’s routine maintenance program aimed at meeting regulatory requirements to ensure the safety, compliance, and long-term performance of the refinery, and North Atlantic Energies said it will not interrupt product supplies to customers during the shutdown period. While the company confirmed the phased shutdown of units slated for work would last for several days, it did not reveal a definitive timeline for the entire turnaround, nor further details regarding the specific works to be carried out. The turnaround will be the first executed under North Atlantic Group’s ownership; the group completed its purchase of the refinery, formerly majority-owned by ExxonMobil Corp., and associated petrochemical assets at the site in November 2025.

Read More »

CFEnergía to supply natural gas to low-carbon methanol plant in Mexico

CFEnergía, a subsidiary of Mexico’s Federal Electricity Commission (CFE), has agreed to supply natural gas to Transition Industries LLC for its Pacifico Mexinol project near Topolobampo, Sinaloa, Mexico. Under the signed agreement, which enables the start of Pacifico Mexinol’s construction phase, CFEnergía will supply about 160 MMcfd of natural gas for an unspecified timeframe noted as “long term,” Transition Industries said in a release Feb. 16. The natural gas—to be sourced from the US and supplied at market prices via existing infrastructure—will be used as “critical input for Mexinol’s production of ultra-low carbon methanol,” the company said. The $3.3-billion Pacifico Mexinol project, when it begins operations in late 2029 to early 2030, is expected to be the world’s largest ultra-low carbon chemicals plant, with production of about 1.8 million tonnes of blue methanol and 350,000 tonnes of green methanol annually. Supply is aimed at markets in Asia, including Japan, while also boosting the development of the domestic market and the Mexican chemical industry. Mitsubishi Gas Chemical has committed to purchasing about 1 million tonnes/year of methanol from the project, about 50% of the project’s planned production. Transition Industries is jointly developing Pacifico Mexinol with the International Finance Corporation (IFC), a member of the World Bank Group. Last year, the company signed a contingent engineering, procurement, and construction (EPC) contract with the consortium of Samsung E&A Co., Ltd., Grupo Samsung E&A Mexico SA de CV, and Techint Engineering and Construction for the project. MAIRE group’s technology division NextChem, through its subsidiary KT TECH SpA, also signed a basic engineering, critical and proprietary equipment supply agreement with Samsung E&A in connection with its proprietary NX AdWinMethanol®Zero technology supply to the project.

Read More »

Russia’s crude exports signal narrowing buyer pool

A growing number of vessels sailing to unknown destinations and a sharp rise in Russian oil held on water—up as much as 49 million bbl since November 2025—suggest a shrinking pool of willing buyers. Russian crude exports declined by 350,000 b/d month-on-month, reversing most of December’s 360,000 b/d increase. The bulk of the drop came from the Black Sea, while product exports rose by 260,000 b/d, largely driven by heavy product flows (+200,000 b/d). Higher prices boosted revenues across both crude and products. Product revenues climbed by $330 million, more than offsetting a $210 million decline in crude export revenues. Separately, Russia reported a 24% year-on-year decline in 2025 oil and gas tax revenues to about $110 billion. Under the European Union (EU)’s revised mechanism, the price cap on Russian crude was lowered to $44.10/bbl as of Feb. 2. Urals Primorsk averaged $40.06/bbl in January. Of total crude exports, 65% were sold by Russian proxy companies, 13% by sanctioned firms, and 21% by other companies. Among the proxy companies, Redwood Global FZE LLC—Rosneft’s substitute—remained the largest crude exporter, supplying 1 million b/d to China and India last month. On the import side, EU enforcement measures are beginning to reshape trade flows. Since Jan. 21, EU buyers have been required to more rigorously verify the origin of imported products. In 2025, the EU-27 and UK sourced 12% of their middle distillate imports from refineries in India and Türkiye processing Russian crude. India’s Jamnagar refinery halted Russian crude imports in mid-December to comply, as Europe accounted for 40% of its middle distillate exports last year. As a result, EU and UK reliance on seaborne Russian-origin molecules fell to 1.6% in January, with most cargoes shipped before Jan. 21 and largely originating from Türkiye. Meanwhile, EU middle distillate imports from the US rose by

Read More »

Commonwealth LNG signs 20-year LNG supply deal with Aramco Trading

Commonwealth LNG, a Caturus company, signed a long-term LNG supply agreement with Aramco Trading, a subsidiary of Saudi Aramco. Under a sale and purchase agreement, Aramco Trading will purchase 1 million tonnes/year (tpy) of LNG from the 9.5-million tpy Commonwealth LNG plant currently under development on the Gulf Coast in Cameron Parish, La. Caturus is working to secure the project’s remaining capacity as it aims for a final investment decision (FID) on the plant. The company holds long-term offtake contracts with Glencore, JERA, Petronas, Mercuria, and EQT. In December 2025, the company authorized full purchase orders to certain industry partners supporting development of the project. The purchase orders are being executed via Commonwealth’s engineering, procurement and construction partner Technip Energies. The purchase orders address long-lead-time equipment needed to advance construction under Commonwealth’s modular approach. They include orders with Baker Hughes for six mixed-refrigerant compressors driven by LM9000 gas turbines; Honeywell, to supply six main cryogenic heat exchangers; and Solar Turbines, providing four Titan 350 gas turbine-generators. At the time, Caturus said the FID on the project was expected in first-quarter 2026.

Read More »

Exxon Mobil Guyana prepares Errea Wittu FPSO at Uaru field

Exxon Mobil Guyana Ltd. subcontractor Jumbo Offshore, on behalf of Modec, has completed mooring pre-installation for the Errea Wittu floating production, storage, and offloading (FPSO) unit at Uaru field, Stabroek block, offshore Guyana. Jumbo Offshore performed installation engineering, procurement, mobilization, and marshaling activities to support the deepwater pre-lay mooring project. The offshore campaign was executed using the Fairplayer J-class installation vessel. Errea Wittu is expected to produce 250,000 b/d of oil and will have a gas treatment capacity of 540 MMcfd. The unit will have a water injection capacity of 350,000 b/d, a produced water capacity of 300,000 b/d, and a storage capacity of 2 million bbl of crude oil. Uaru field lies 200 km offshore Guyana in a water depth of 1,750 m. The fifth project on Guyana’s offshore Stabroek block, Uaru is estimated to hold more than 800 million bbl of oil. First oil is expected this year. As part of its fourth-quarter 2025 earnings call Jan. 30, 2026, the company noted record full-year production from Guyana of more than 700,000 b/d with its first four developments.

Read More »

US rig count unchanged, Canada rig count dips

The active rig count in the US was unchanged from last week with 551 rigs running for the week ended Feb. 13, according to Baker Hughes data. The number of working oil-directed rigs in the US decreased by 3 units to 409 for the week. The count is down 72 units year-over-year. Gas-directed rigs increased by 3 units to 133, up 32 units year-over-year. Nine rigs considered unclassified remained active during the week, unchanged from last week. The number of working US land-based rigs declined by 1 to 531. Horizontal rigs decreased 2 units to 481. Directional drilling rigs increased by 2 to 57 for the week. The vertical rig count was unchanged this week at 13 rigs working. The number of rigs working offshore increased by 1 to end the week with 17 working rigs. Louisiana saw its rig count increase by 2 units to end the week with 41 rigs. New Mexico, Pennsylvania, and Wyoming each saw rig counts increase by a single unit this week to respective counts of 102, 20, and 17. Texas dropped 3 rigs to leave 229 running for the week. Rig counts in Oklahoma and North Dakota fell by one unit each, leaving 45 rigs running in Oklahoma and 26 in North Dakota. Canada’s rig count fell by 6 rigs to 222. The count is down 23 units from this time a year ago. The number of gas-directed rigs decreased by 4 units to 69. The oil-directed rig count fell by 2 units to leave 153 units working.

Read More »

No action in US-Iran conflict reduces market risk premium

Oil, fundamental analysis: With little progress in the US-Iran talks and no military action by either side, traders reduced the market risk premium this week with a ‘wait-and-see’ attitude. An unexpectedly large gain in crude inventory and an increase in gasoline stocks provided bearish momentum for prices to move lower. US prices still remained above the key $60/bbl level. WTI hit a high of $65.85/bbl on Wednesday with a weekly low of $61.15 on Friday. Brent crude’s high was $70.70/bbl on Wednesday, while its low was $66.90 Friday. Both grades settled lower week-on-week. The WTI/Brent spread widened to ($4.90) on the earlier-week rally; look for this to tighten next week. US-Iran talks are scheduled to continue, which lends an optimistic tone; however, a second US aircraft carrier is reported to be heading into the Middle East, a move that could add risk premium back into oil markets. US-flagged ships were told to avoid Iranian waters when traversing the Strait of Hormuz. Israeli PM Netanyahu visited the White House this week to present his country’s demands for limitations on Iran’s uranium enrichment and its backing of rebel groups like Hamas and Hezbollah. With near-term concerns regarding supply disruption abating, the market has returned to a focus on oversupply. The International Energy Agency (IEA) in Paris has lowered its forecast for global crude demand for this year while stating that global inventories last year grew at their strongest pace since 2020. On the other hand, OPEC+ group output for January fell by 440,000 b/d. As part of a wider trade deal, India has agreed to halt its purchases of Russian crude. In return, the US will slash tariffs on imports from India from the punitive 50% back down to the 18% level. US exploration and

Read More »

Energy Secretary Prevents Closure of Coal Plant That Provided Essential Power During Winter Storm

WASHINGTON—U.S. Secretary of Energy Chris Wright renewed an emergency order to address critical grid reliability issues facing the Midwestern region of the United States. The emergency order directs the Midcontinent Independent System Operator (MISO), in coordination with Consumers Energy, to ensure that the J.H. Campbell coal-fired power plant (Campbell Plant) in West Olive, Michigan shall take all steps necessary to remain available to operate and to employ economic dispatch to minimize costs for the American people. The Campbell Plant was originally scheduled to shut down on May 31, 2025 — 15 years before the end of its scheduled design life. “The energy sources that perform when you need them most are inherently the most valuable—that’s why beautiful, clean coal was the MVP of recent winter storms,” Secretary Wright said. “Hundreds of American lives have likely been saved because of President Trump’s actions saving America’s coal plants, including this Michigan coal plant which ran daily during Winter Storm Fern. This emergency order will mitigate the risk of blackouts and maintain affordable, reliable, and secure electricity access across the region.” The Campbell Plant was integral in stabilizing the grid during the recent winter storms. The plant operated at over 650 megawatts every day before and during Winter Storm Fern, January 21-February 1, proving that allowing it to cease operations would needlessly contribute to grid fragility. Thanks to President Trump’s leadership, coal plants across the country are reversing plans to shut down. In 2025, more than 17 gigawatts of coal-powered electricity generation were saved ahead of Winter Storm Fern. Since the Department of Energy’s (DOE) original order issued on May 23, the Campbell Plant has proven critical to MISO’s operations, operating regularly during periods of high energy demand and low levels of intermittent energy production. Subsequent orders were issued on August 20, 2025 and November 18, 2025. 
As outlined in DOE’s Resource

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.  I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.  On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. 
It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).  People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.  Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?  In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.  Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.  And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. 
Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.  For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)  
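The citation logic described above can be sketched in a few lines of Python. This is a toy illustration of the general idea behind PageRank, not Google’s production algorithm; the damping factor, iteration count, and tiny example graph are arbitrary choices for demonstration.

```python
# Toy illustration of link-based ranking (the idea behind PageRank).
# A teaching sketch, not Google's actual implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        # every page keeps a small baseline share of rank
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # a page passes its rank, split equally, to pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny web: everyone links to "hub", so "hub" ends up ranked highest.
web = {
    "a": ["hub"],
    "b": ["hub", "a"],
    "c": ["hub"],
    "hub": ["a"],
}
ranks = pagerank(web)
```

The damping factor models a reader who mostly follows links but occasionally jumps to a random page, which keeps rank from pooling forever in closed loops.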
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)  “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”  That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video.  “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.  There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. 
It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?  I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.  “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results.
These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries.

What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries.

What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search.

What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.  “You’re always dealing in percentages. 
What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.  “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”  But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?   Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.   “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.  Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. 
“The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”  Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”  “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”  He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?  A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. 
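The flow described above, where the model first judges whether a query needs live information and then folds search results into its answer with links, can be caricatured in a few lines. Everything below is hypothetical: the keyword trigger, the stub search index, and the function names are invented for illustration and do not reflect OpenAI’s actual implementation, which it has not published.

```python
# Hypothetical sketch of a search-augmented answer loop.
# The trigger heuristic and the stub search backend are invented
# for illustration only.

FRESHNESS_CUES = ("latest", "today", "score", "price", "news", "current")

def needs_web_search(query):
    """Crude stand-in for the model's decision to fetch live data."""
    return any(cue in query.lower() for cue in FRESHNESS_CUES)

def web_search(query):
    """Stub search backend returning (snippet, url) pairs."""
    fake_index = {
        "49ers score": [("49ers beat the Seahawks 27-20.",
                         "https://example.com/nfl")],
    }
    return fake_index.get(query, [])

def answer(query):
    """Answer from the model alone, or search first and cite sources."""
    if not needs_web_search(query):
        return "Answered from model knowledge alone.", []
    results = web_search(query)
    sources = [url for _, url in results]
    summary = " ".join(s for s, _ in results) or "No live results found."
    return summary, sources

text, links = answer("49ers score")
```

In a real system the routing decision is made by the model itself rather than a keyword list, which is part of why, as the article notes, its source choices can be hard to explain.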
OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.  According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says.  OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.  “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.  
Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.  Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. 
But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”  When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them.  “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed!  The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.  It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”  We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.  The search results we see from generative AI are best understood as a waypoint rather than a destination. 
What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.  Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.  “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. 
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.  In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.  But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today. 
These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on. 

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.  We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.  Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes. 
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither had responded to Rigzone’s request. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market.
Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW of electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217 million profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

Tuning into the future of collaboration 

In partnership with Shure

When work went remote, the sound of business changed. What began as a scramble to make home offices functional has evolved into a revolution in how people hear and are heard. From education to enterprises, companies across industries have reimagined what clear, reliable communication can mean in a hybrid world. For major audio and communications enterprises like Shure and Zoom, that transformation has been powered by artificial intelligence, new acoustic technologies, and a shared mission: making connection effortless. Necessity during the pandemic accelerated years of innovation in months. “Audio and video just working is a baseline for collaboration,” says chief ecosystem officer at Zoom, Brendan Ittelson. “That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem.” Audio is a foundation for trust, understanding, and collaboration. Poor sound quality can distort meaning and fatigue listeners, while crisp audio and intelligent processing can make digital interactions feel nearly as natural as in-person exchanges.
“If you think about the fundamental need here,” adds chief technology officer at Shure, Sam Sabet, “it’s the ability to amplify the audio and the information that’s really needed, and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate.” For both Ittelson and Sabet, AI now sits at the center of this progress. For Shure, machine learning powers real-time noise suppression, adaptive beamforming, and spatial audio that tunes itself to a room’s acoustics. For Zoom, AI underpins every layer of its platform, from dynamic noise reduction to automated meeting summaries and intelligent assistants that anticipate user needs. These tools are transforming communication from reactive to proactive, enabling systems that understand intent, context, and emotion.
“Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing,” says Sabet. “Having software and algorithms that adapt seamlessly and self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental.”  The future, they suggest, is one where technology fades into the background. As audio devices and AI companions learn to self-optimize, users won’t think about microphones or meeting links. Instead, they’ll simply connect. Both companies are now exploring agentic AI systems and advanced wireless solutions that promise to make collaboration seamless across spaces, whether in classrooms, conference rooms, or virtual environments yet to come.  “It’s about helping people focus on strategy and creativity instead of administrative busy work,” says Ittelson.  This episode of Business Lab is produced in partnership with Shure.  Full Transcript  Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.   This episode is produced in partnership with Shure.   Now as the pandemic ushered in the cultural shift that led to our increasingly virtual world, it also sparked a flurry of innovation in the audio and video industries to keep employees and customers connected and businesses running. Today we’re going to talk about the AI technologies behind those innovations, the impact on audio innovation, and the continuing emerging opportunities for further advances in audio capabilities.  

Two words for you: elevated audio. My guests today are Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom. Welcome Sam, welcome Brendan.  Sam Sabet: Thank you, Megan. It’s a pleasure to be here and I’m looking forward to this conversation with both you and Brendan. It should be a very exciting conversation.  Brendan Ittelson: Thank you so much for having me today. I’m looking forward to the conversation and all the topics we have to dive into on this area.  Megan: Fantastic. Lovely to have you both here. And Sam, just to set some context, I wonder if we could start with the pandemic and the innovation that really was born out of necessity. I mean, when it became clear that we were all going to be virtual for the foreseeable future, I wonder what was the first technological mission for Shure?  Sam: Yeah, very good question. The pandemic really accelerated a lot of innovation around virtual communications and fundamentally how we perform our everyday jobs remotely. One of our first technological missions when the pandemic happened and everybody ended up going home and performing their functions remotely was to make sure that people could continue to communicate effectively, whether that’s for business meetings, virtual events, or educational purposes. We focused on collaboration and enhancing collaboration tools. And ideally what we were aiming to do, or we focused on, was to basically improve the ease of use and configuration of audio tool sets. Because unlike the office environment where it might be a lot more controlled, people are working from non-traditional areas like home offices or other makeshift solutions, we needed to make sure that people could still get pristine audio and that studio-level audio even in uncontrolled environments that are not really made for that. We expedited development in our software solutions.
We created tool sets that allowed for ease of deployment and remote configuration and management so we could enable people to continue doing the things they needed to do without having to worry about the underlying technology. 
Megan: And Brendan, during that time, it seemed everyone became a Zoom user of some sort. I mean, what was the first mission at Zoom when virtual connection became this necessity for everyone?  Brendan: Well, our mission fundamentally didn’t change. It’s always been about delivering frictionless communications. What shifted was the urgency and the magnitude of what we were doing. Our focus shifted on how we do this reliably, securely, and to scale to ensure these millions of new users could connect instantly without friction. We really shifted our thinking of being just a business continuity tool to becoming a lifeline for so many individuals and industries. The stories that we heard across education, healthcare, and just general human connection, the number of those moments that matter to people that we were able to help facilitate just became so important. We really focused on how can we be there and make it frictionless so folks can focus on that human connection. And that accelerated our thinking in terms of innovation and reinforced the thought that we need to focus on the simplicity, accessibility, and trust in communication technology so that people could focus on that connection and not the technology that makes it possible. 
Megan: That’s so true. It did really just become an absolute lifeline for people, didn’t it? And before we dive into the technologies beyond these emerging capabilities, I wonder if we could first talk about just the importance of clear audio. I mean, Sam, as much as we all worry over how we look on Zoom, is how we sound perhaps as or even more impactful?  Sam: Yeah, you’re absolutely correct. I mean, clear audio is absolutely critical for effective communications. Video quality is very important absolutely, but poor audio can really hinder understanding and engagement. As a matter of fact, there’s studies and research from areas such as Yale University that say that poor audio can make understanding somewhat more challenged and even affect retention of information. Especially in an educational type environment where there’s a lot of background noise and very differing types of spaces like auditoriums and lecture halls, it really becomes a high priority that you have great audio quality. And during the pandemic, as you said, and as Brendan rightly said, it became one of our highest priorities to focus on technologies like beamforming mics and ways to focus on the speaker’s voice and minimize that unwanted background noise so that we could ensure that the communication was efficient, was well understood, and that it removed the distraction so people could be able to actually communicate and retain the information that was being shared.  Megan: It is incredible just how impactful audio can be, can’t it? Brendan, I mean as you said, remote and hybrid collaboration is part of Zoom’s DNA. What observations can you share about how users have grown along with the technological advancements and maybe how their expectations have grown as well?  Brendan: Definitely. I mean, users now expect seamless and intelligent experiences. Audio and video just working is a baseline for collaboration. 
That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem. When we look at it, we’re really looking at these trends in terms of how people want to be better when they’re at home. For example, AI-powered tools like Smart Summaries, translation and noise suppression to help people stay productive and connected no matter where they’re working. But then this also comes into play at the office. We’re starting to see folks that dive into our technology like Intelligent Director and Smart Name Tags that create that meeting equity even when they’re in a conference room.  So, the remote experience and the room experience all are similar and create that same ability to be seen, heard, and contribute. And we’re now diving further into this that it’s beyond just meetings. Zoom is really transforming into an AI-first work platform that’s focused on human connection. And so that goes beyond the meetings into things like Chat, Zoom Docs, Zoom Events and Webinars, the Zoom Contact Center and more. And all of this being brought together using our AI Companion at its core to help connect all of those different points of connection for individuals.  Megan: I mean, so Brendan, we know it wasn’t only workplaces that were affected by the pandemic, it was also the education sector that had to undergo a huge change. I wondered if you could talk a little bit about how Zoom has operated in that higher education sphere as well. 
Brendan: Definitely. Education has always been a focus for Zoom and an area that we’ve believed in. Because education and learning is something as a company we value and so we have invested in that sector. And personally being the son of academics, it is always an area that I find fascinating. We continue to invest in terms of how do we make the classroom a stronger space? And especially now that the classroom has changed, where it can be in person, it can be virtual, it can be a mix. And using Zoom and its tools, we’re able to help bridge all those different scenarios to make learning accessible to students no matter their means.  That’s what truly excites us, is being able to have that technology that allows people to pursue their desires, their interests, and really up-level their pursuits and inspire more. We’re constantly investing in how to allow those messages to get out and to integrate in the flow of communication and collaboration that higher education uses, whether that’s being integrated into the classroom, into learning management systems, to make that a seamless flow so that students and their educators can just collaborate seamlessly. And also that we can support all the infrastructure and administration that helps make that possible.  Megan: Absolutely. Such an important thing. And Sam, Shure as well, could you talk to us a bit about how you worked in that kind of education space as well from an audio point of view?  Sam: Absolutely. Actually, this is a topic that’s near and dear to my heart because I’m actually an adjunct professor in my free time. 
Megan: Oh, wow. Very impressive.  Sam: And the challenges of trying to do this sort of a hybrid lecture, if you will. And Shure has been particularly well suited for this environment and we’ve been focused on it and investing in technologies there for decades. If you think about how a lecture hall is structured, it’s a little different than just having a meeting around the conference table. And Shure has focused on creating products that allow this combination of a presenter scenario along with a meeting space plus the far end where users or students are remote, they can hear intelligibly what’s happening in the lecture hall, but they can also participate.  Between our products like the Ceiling Mic Arrays and our wireless microphones that are purpose built for presenters and educators like our MXW neXt product line, we’ve created technologies that allow those two previously separate worlds to integrate together. And then add that onto integrating with Zoom and other products that allow for that collaboration has been very instrumental. And again, being a user and providing those lectures, I can see a night and day difference and just how much more effective my lectures are today from where they were five to six years ago. And that’s all just made possible by all the technologies that are purpose built for these scenarios and integrating more with these powerful tools that just make the job so much more seamless.  Megan: Absolutely fascinating that you got to put the technology to use yourself as well to check that it was all working well. And you mentioned AI there, of course. I mean, Sam, what AI technologies have had the most significant impact on recent audio advancements too?  Sam: Yeah. Absolutely. If you think about the fundamental need here, it’s the ability to amplify the audio and the information that’s really needed and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate. 
With our innovations at Shure, we’ve leveraged the cutting-edge technologies to both enhance communication effectiveness and to align seamlessly with evolving features in unified communications like the ones that Brendan just mentioned in the Zoom platforms. We partner with industry leaders like Zoom to ensure that we’re providing the ability to be able to focus on that needed audio and eliminate all the background distractions. AI has transformed that audio technology with things like machine learning algorithms that enable us to do more real-time audio processing and significantly enhancing things like noise reduction and speech isolation. Just to give you a simple example, our IntelliMix Room audio processing software that we’ve released as well as part of a complete room solution uses AI to optimize sound in different environments. And really that’s one of the fundamental changes in this period, whether that’s pandemic or post-pandemic, is that the key is really flexibility and being able to adapt to changing work environments. Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing. And so having software and algorithms that adapt seamlessly and are able to self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental. And then last but not least, AI has transformed the way audio and video integrate. For example, we utilize voice recognition systems that integrate with intelligent cameras so that we enable voice tracking technology so that cameras can not only identify who’s speaking, but you have the ability to hear and see people clearly. And that in general just enhances the overall communication experience.  Megan: Wow. It’s just so much innovation in quite a short space of time really.
I mean, Brendan, you mentioned AI a little bit there beforehand, but I wonder what other AI technologies have had the biggest impact as Zoom builds out its own emerging capabilities?  Brendan: Definitely. And I couldn’t agree more with Sam that, I mean, AI has made such a big shift and it’s really across the spectrum. And when I think about it, there’s almost three tiers when you look at the stack. You start off at the raw audio where AI is doing those things like noise suppression, echo cancellation, voice enhancements. All of that just makes this amazing audio signal that can then go into the next layer, which is the speech AI and natural language processing. Which starts to open up those items such as the real-time transcription, translation, searchable content to make the communication not just what’s heard, but making it more accessible to more individuals and inclusive by providing that content in a format that is best for them.  And then you take those two layers and put the generative and agentic AI on top of that, that can start surfacing insights, summarize the conversation, and even take actions on someone’s behalf. It really starts to change the way that people work and how they have access and allows them to connect. I think it is a huge shift and I’m very excited by how those three levels start to interact to really enable people to do more and to connect thanks to AI.  Megan: Yeah. Absolutely. So much rich information that can come out from a single call now because of those sorts of tools. And following on from that, Brendan, I mean, you mentioned before the Zoom AI Companion. I wondered if you could talk a bit about what were your top priorities when building that product to ensure it was truly useful for your customers?  Brendan: Definitely. When we developed AI Companion, we had two priority focus areas from day one, trust and security, and then accuracy and relevance. 
On the trust side, it was a non-negotiable that customer data wouldn’t be used to train our models. People need to know that their conversations and content are private and secure.  Megan: Of course.  Brendan: And then with accuracy, we needed to ensure AI outputs weren’t generic but grounded in the actual context of a meeting, a chat or a product. But the real story here when I think about AI Companion is the customer value that it delivers. AI Companion helps people save time with meeting recaps, task generation, and proactive prep for the next session. It reduces that friction in hybrid work, whether you’re in a meeting room, a Zoom room, or collaborating across different collaboration tools like Microsoft or Google. And it enables more equitable participation by surfacing the right context for everyone no matter where and how they’re working.   All this leads to a result where it’s practical, trustworthy, and embedded where work happens. And it’s just not another tool to manage, it’s there in someone’s flow of work to help them along the way.  Megan: Yeah. That trust piece is just so important, isn’t it, today? And Sam, as much as AI has impacted audio innovation, audio has also had an impact on AI capabilities. I wondered if you could talk a little bit about audio as a data input and the advancements technologies like large language models, LLMs, are enabling.  Sam: Absolutely. Audio is really a rich data source that’s added a new dimension to AI capabilities. If you think about speech recognition or natural language processing, they’ve had significant advances due to audio data that’s provided for them. And to Brendan’s point about trust and accuracy, I like to think of the products that Shure enables customers with as essentially the eyes and ears in the room for leading AI companions just like the Zoom AI Companion. You really need that pristine audio input to be able to trust the accuracy of what the AI generates. 
These AI Companions have been very instrumental in the way we do business every day. I mean, between transcription, speaker attributions, the ability to add action items within a meeting and be able to track what’s happening in our interactions, all of that really has to rely on that accurate and pristine input from audio into the AI. I feel that further improves the trust that our end users have in the results of AI and be able to leverage it more. If you think about it, if you look at how AI audio inputs enhance that interactive AI system, it enables more natural and intuitive interactions with AI. And it really allows for that seamless integration and the ability for users to use it without having to worry about, is the room set up correctly? Is the audio level proper? And when we talk even about agentic AI, we’re working on future developments where systems can self-heal or detect that there are issues in the environment so that they can autocorrect and adapt in all these different environments and further enable the AI to be able to do a much more effective job, if you will.  Megan: Sam, you touched on future developments there. I wonder if we could close our conversation today with a bit of a future forward look, if we could. Brendan, can you share innovations that Zoom is working on now and what are you most excited to see come to fruition?  Brendan: Well, your timing for this question is absolutely perfect because we’ve just wrapped up Zoomtopia 2025.  Megan: Oh, wow.  Brendan: And this is where we discussed a lot of the new AI innovations that we have coming to Zoom. Starting off, there’s AI Companion 3.0. And we’ve launched this next generation of agentic AI capabilities in Zoom Workplace. And with 3.0 when it releases, it isn’t just about transcribing, it’s turned into really a platform that helps you with follow-up tasks, prep for your next conversation, and even proactively suggest how to free up your time.
For example, AI Companion can help you schedule meetings intelligently across time zones, suggest which meetings you can skip, and still stay informed and even prepare you with context and insights before you walk into the conversation. It’s about helping people focus on strategy and creativity instead of administrative busy work. And for hybrid work specifically, we introduced Zoomie Group Assistant, which will be a big leap for hybrid collaboration.  Acting as an assistant for a group chat and meetings, you can simply ask, “@Zoomie, what’s the latest update on the project?” Or “@Zoomie, what are the team’s action items?” And then get instant answers. Or because we’re talking about audio here, you can go into a conference room and say, “Hey, Zoomie,” and get help with things like checking into a room, adjusting lights, temperature, or even sharing your screen. And while all these are built-in features, we’re also expanding the platform to allow custom AI agents through our AI Studio, so organizations can bring their own agents or integrate with third-party ones.   Zoom has always believed in an open platform and philosophy and that is continuing. Folks using AI Companion 3.0 will be able to use agents across platforms to work with the workflows that they have across all the different SaaS vendors that they might have in their environment, whether that’s Google, Microsoft, ServiceNow, Cisco, and so many other tools.  Megan: Fantastic. It certainly sounds like a tool I could use in my work, so I look forward to hearing more about that. And Sam, we’ve touched on there are so many exciting things happening in audio too. What are you working on at Shure? And what are you most excited to see come to fruition?  Sam: At Shure, our engineering teams are really working on a range of exciting projects, but particularly we’re working on developing new collaboration solutions that are integral for IT end users. And these integrate obviously with the leading UC platforms.   
We’re integrating audio and video technologies that are scalable, reliable solutions. And we want to be able to seamlessly connect these to cloud services so that we can leverage both AI technologies and the tool sets available to optimize every type of workspace essentially. Not just meeting rooms, but lecture halls, work-from-home scenarios, et cetera. The other area that we really focus on in terms of our reliability and quality really comes from our DNA in the pro audio world. And that’s really all around wireless audio technologies. We’re developing our next-generation wireless systems and these are going to offer even greater reliability and range. And they really become ideal for everything from a large-scale event to personal home use and the gamut across that whole spectrum. And I think all of that in partnership with our partners like Zoom will help just facilitate the modern workspace.  Megan: Absolutely. So much exciting innovation clearly going on behind the scenes. Thank you both so much. That was Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom, whom I spoke with from Brighton in England. That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review and this episode was produced by Giro Studios. Thanks for listening.

Read More »

The Download: unraveling a death threat mystery, and AI voice recreation for musicians

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Hackers made death threats against this security researcher. Big mistake.

In April 2024, a mysterious someone using the online handles “Waifu” and “Judische” began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon.

These anonymous personas targeted Nixon because she had become a formidable threat: As chief research officer at the cyber investigations firm Unit 221B, named after Sherlock Holmes’s apartment, she had built a career tracking cybercriminals and helping get them arrested.

Though she’d done this work for more than a decade, Nixon couldn’t understand why the person behind the accounts was suddenly threatening her. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn’t been on her radar for a while when the threats began, because she was tracking other targets.

Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats—and take them down for crimes they admitted to committing. Read the full story.

—Kim Zetter
This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land. 
ALS stole this musician’s voice. AI let him sing again.

There are tears in the audience as Patrick Darling’s song begins to play. It’s a heartfelt song written for his great-grandfather, whom he never got the chance to meet. But this performance is emotional for another reason: It’s Darling’s first time on stage with his bandmates since he lost the ability to sing two years ago.

The 32-year-old musician was diagnosed with amyotrophic lateral sclerosis (ALS) when he was 29 years old. Like other types of motor neuron disease, it affects nerves that supply the body’s muscles. People with ALS eventually lose the ability to control their muscles, including those that allow them to move, speak, and breathe.

Darling’s last stage performance was over two years ago. By that point, he had already lost the ability to stand and play his instruments and was struggling to sing or speak. But recently, he was able to re-create his lost voice using an AI tool trained on snippets of old audio recordings. Another AI tool has enabled him to use this “voice clone” to compose new songs. Darling is able to make music again. Read the full story.

—Jessica Hamzelou

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The creator of OpenClaw is joining OpenAI
Sam Altman was sufficiently impressed by Peter Steinberger’s ideas to get agents to interact with each other. (The Verge)
+ The move demonstrates how seriously OpenAI is taking agents. (FT $)
+ Moltbook was peak AI theater. (MIT Technology Review)

2 How North Korea is illegally funding its nuclear program
A defector explains precisely how he duped remote IT workers into funneling money into its missiles. (WSJ $)
+ Nukes are a hot topic across Europe right now. (The Atlantic $)

3 Radio host David Greene is convinced Google stole his voice
He’s suing the company over similarities between his own distinctive vocalizations and the AI voice used in its NotebookLM app. (WP $)
+ People are using Google study software to make AI podcasts. (MIT Technology Review)

4 US automakers are worried by the prospect of a Chinese invasion
They fear Trump may greenlight Chinese carmakers to build plants in the US. (FT $)
+ China figured out how to sell EVs. Now it has to deal with their aging batteries. (MIT Technology Review)

5 Google downplays safety warnings on its AI-generated medical advice
It only displays extended warnings when a user clicks to ‘Show more.’ (The Guardian)
+ Here’s another reason why you should keep a close eye on AI Overviews. (Wired $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

6 How to make Lidar affordable for all cars
A compact device could prove the key. (IEEE Spectrum)

7 Robot fight nights are all the rage in San Francisco
Step aside, Super Bowl! (Rest of World)
+ Humanoid robots will take to the stage for Chinese New Year celebrations. (Reuters)

8 Influencers and TikTokers are feeding their babies butter
But there’s no scientific evidence to back up some of their claims. (NY Mag $)

9 This couple can’t speak the same language
Microsoft Translator has helped them to sustain a marriage. (NYT $)
+ AI romance scams are on the rise. (Vox)

10 AI promises to make better, more immersive video games
But those are lofty goals that may never be achieved. (The Verge)
+ Google DeepMind is using Gemini to train agents inside Goat Simulator 3. (MIT Technology Review)

Quote of the day
“Right now this is a baby version. But I think it’s incredibly concerning for the future.” —Scott Shambaugh, a software engineer who recently became the subject of a scathing blog post written by an AI bot accusing him of hypocrisy and prejudice, tells the Wall Street Journal why this could be the tip of the iceberg.
One more thing

Why do so many people think the Fruit of the Loom logo had a cornucopia?

Quick question: Does the Fruit of the Loom logo feature a cornucopia? Many of us have been wearing the company’s T-shirts for decades, and yet the question of whether there is a woven brown horn of plenty on the logo is surprisingly contentious.

According to a 2022 poll, 55% of Americans believe the logo does include a cornucopia, 25% are unsure, and only 21% are confident that it doesn’t, even though this last group is correct.

There’s a name for what’s happening here: the “Mandela effect,” or collective false memory, so called because a number of people misremember that Nelson Mandela died in prison. Yet while many find it easy to let their unconfirmable beliefs go, some spend years seeking answers—and vindication. Read the full story.

—Amelia Tait

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ When dating apps and book lovers collide, who knows what could happen.
+ It turns out humans have a secret third set of teeth, which is completely wild.
+ We may never know the exact shape of the universe. But why is that?
+ If your salad is missing a certain something, some crispy lentils may be just the ticket.

Read More »

The scientist using AI to hunt for antibiotics just about everywhere

When he was just a teenager trying to decide what to do with his life, César de la Fuente compiled a list of the world’s biggest problems. He ranked them inversely by how much money governments were spending to solve them. Antimicrobial resistance topped the list.

Twenty years on, the problem has not gone away. If anything, it’s gotten worse. Infections caused by bacteria, fungi, and viruses that have evolved ways to evade treatments are now associated with more than 4 million deaths per year, and a recent analysis, published in the Lancet, predicts that number could surge past 8 million by 2050. In a July 2025 essay in Physical Review Letters, de la Fuente, now a bioengineer and computational biologist, and synthetic biologist James Collins warned of a looming “post-antibiotic” era in which infections from drug-resistant strains of common bacteria like Escherichia coli or Staphylococcus aureus, which can often still be treated by our current arsenal of medications, become fatal. “The antibiotic discovery pipeline remains perilously thin,” they wrote, “impeded by high development costs, lengthy timelines, and low returns on investment.”

But de la Fuente is using artificial intelligence to bring about a different future. His team at the University of Pennsylvania is training AI tools to search genomes far and deep for peptides with antibiotic properties. His vision is to assemble those peptides—molecules made of up to 50 amino acids linked together—into various configurations, including some never seen in nature. The results, he hopes, could defend the body against microbes that withstand traditional treatments.

His quest has unearthed promising candidates in unexpected places. In August 2025 his team, which includes 16 scientists in Penn’s Machine Biology Group, described peptides hiding in the genetic code of ancient single-celled organisms called archaea. Before that, they’d excavated a list of candidates from the venom of snakes, wasps, and spiders.
And in an ongoing project de la Fuente calls “molecular de-extinction,” he and his collaborators have been scanning published genetic sequences of extinct species for potentially functional molecules. Those species include hominids like Neanderthals and Denisovans and charismatic megafauna like woolly mammoths, as well as ancient zebras and penguins. In the history of life on Earth, de la Fuente reasons, maybe some organism evolved an antimicrobial defense that could be helpful today. Those long-gone codes have given rise to resurrected compounds with names like mammuthusin-2 (from woolly mammoth DNA), mylodonin-2 (from the giant sloth), and hydrodamin-1 (from the ancient sea cow). Over the last few years, this molecular binge has enabled de la Fuente to amass a library of more than a million genetic recipes.
At 40 years old, de la Fuente has also collected a trophy case of awards from the American Society for Microbiology, the American Chemical Society, and other organizations. (In 2019, this magazine named him one of “35 Innovators Under 35” for bringing computational approaches to antibiotic discovery.) He’s widely recognized as a leader in the effort to harness AI for real-world problems. “He’s really helped pioneer that space,” says Collins, who is at MIT. (The two have not collaborated in the laboratory, but Collins has long been at the forefront of using AI for drug discovery, including the search for antibiotics. In 2020, Collins’s team used an AI model to predict a broad-spectrum antibiotic, halicin, that is now in preclinical development.)

The world of antibiotic development needs as much creativity and innovation as researchers can muster, says Collins. And de la Fuente’s work on peptides has pushed the field forward: “César is marvelously talented, very innovative.”
A messy, noisy endeavor

De la Fuente describes antimicrobial resistance as an “almost impossible” problem, but he sees plenty of room for exploration in the word almost. “I like challenges,” he says, “and I think this is the ultimate challenge.”

The use, overuse, and misuse of antibiotics, he says, drives antimicrobial resistance. And the problem is growing unchecked because conventional ways to find, make, and test the drugs are prohibitively expensive and often lead to dead ends. “A lot of the companies that have attempted to do antibiotic development in the past have ended up folding because there’s no good return on investment at the end of the day,” he says.

Antibiotic discovery has always been a messy, noisy endeavor, driven by serendipity and fraught with uncertainty and misdirection. For decades, researchers have largely relied on brute-force mechanical methods. “Scientists dig into soil, they dig into water,” says de la Fuente. “And then from that complex organic matter they try to extract antimicrobial molecules.”

But molecules can be extraordinarily complex. Researchers have estimated the number of possible organic combinations that could be synthesized at somewhere around 10⁶⁰. For reference, Earth contains an estimated 10¹⁸ grains of sand. “Drug discovery in any domain is a statistics game,” says Jonathan Stokes, a chemical biologist at McMaster University in Canada, who has been using generative AI to design potential new antibiotics that can be synthesized in a lab, and who worked with Collins on halicin. “You need enough shots on goal to happen to get one.”

Those have to be good shots, though. And AI seems well suited to improving researchers’ aim. Biology is an information source, de la Fuente explains: “It’s like a bunch of code.” The code of DNA has four letters; proteins and peptides have 20, where each “letter” represents an amino acid.
De la Fuente says his work amounts to training AI models to recognize sequences of letters that encode antimicrobial peptides, or AMPs. “If you think about it that way,” he says, “you can devise algorithms to mine the code and identify functional molecules, which can be antimicrobials. Or antimalarials. Or anticancer agents.”

Practically speaking, we’re still not there: These peptides haven’t yet been transformed into usable drugs that help people, and there are plenty of details—dosage, delivery, specific targets—that need to be sorted out, says de la Fuente. But AMPs are appealing because the body already uses them. They’re a critical part of the immune system and often the first line of defense against pathogenic infections. Unlike conventional antibiotics, which typically have one trick for killing bacteria, AMPs often exhibit a multimodal approach. They may disrupt the cell wall and the genetic material inside as well as a variety of cellular processes. A bacterial pathogen may evolve resistance to a conventional drug’s single mode of action, but maybe not to a multipronged AMP attack.

From discovery to delivery

De la Fuente’s group is one of many pushing the boundaries of using AI for antibiotics. Where he focuses primarily on peptides, Collins works on small-molecule discovery. So does Stokes, at McMaster, whose models identify promising new molecules and predict whether they can be synthesized. “It’s only been a few years since folks have been using AI meaningfully in drug discovery,” says Collins.

Even in that short time the tools have changed, says James Zou, a computer scientist at Stanford University, who has worked with Stokes and Collins. Researchers have moved from using predictive models to developing generative approaches. With a predictive approach, Zou says, researchers screen large libraries of candidates that are known to be promising. Generative approaches offer something else: the appeal of designing a new molecule from scratch.
Last year, for example, de la Fuente’s team used one generative AI model to design a suite of synthetic peptides and another to assess them. The group tested two of the resulting compounds on mice infected with a drug-resistant strain of Acinetobacter baumannii, a germ that the World Health Organization has identified as a “critical priority” in research on antimicrobial resistance. Both successfully and safely treated the infection. 

But the field is still in the discovery phase. In his current work, de la Fuente is trying to get candidates closer to clinical testing. To that end, his team is developing an ambitious multimodal model called ApexOracle that’s designed to analyze a new pathogen, pinpoint its genetic weaknesses, match it to antimicrobial peptides that might work against it, and then predict how an antibiotic, built from those peptides, would fare in lab tests. It “converges understanding in chemistry, genomics, and language,” he says. It’s preliminary, he adds, but even if it doesn’t work perfectly, it will help steer the next generation of AI models toward the ultimate goal of resisting resistance.  Using AI, he believes, human researchers now have a fighting chance at catching up to the giant threat before them. The technology has already saved decades of human research time. Now he wants it to save lives, too: “This is the world that we live in today, and it’s incredible.”  Stephen Ornes is a science writer in Nashville, Tennessee.

Read More »

Hackers made death threats against this security researcher. Big mistake.

The threats started in spring.  In April 2024, a mysterious someone using the online handles “Waifu” and “Judische” began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon.  “Alison [sic] Nixon is gonna get necklaced with a tire filled with gasoline soon,” wrote Waifu/Judische, handles that are themselves words with offensive connotations. “Decerebration is my fav type of brain death, thats whats gonna happen to alison Nixon.”  It wasn’t long before others piled on. Someone shared AI-generated nudes of Nixon.
These anonymous personas targeted Nixon because she had become a formidable threat: As chief research officer at the cyber investigations firm Unit 221B, named after Sherlock Holmes’s apartment, she had built a career tracking cybercriminals and helping get them arrested. For years she had lurked quietly in online chat channels or used pseudonyms to engage with perpetrators directly while piecing together clues they’d carelessly drop about themselves and their crimes. This had helped her bring to justice a number of cybercriminals—especially members of a loosely affiliated subculture of anarchic hackers who call themselves the Com. But members of the Com aren’t just involved in hacking; some of them also engage in offline violence against researchers who track them. This includes bricking (throwing a brick through a victim’s window) and swatting (a dangerous type of hoax that involves reporting a false murder or hostage situation at someone’s home so SWAT teams will swarm it with guns drawn). Members of a Com offshoot known as 764 have been accused of even more violent acts—including animal torture, stabbings, and school shootings—or of inciting others in and outside the Com to commit these crimes.
Nixon started tracking members of the community more than a decade ago, when other researchers and people in law enforcement were largely ignoring them because they were young—many in their teens. Her early attention allowed her to develop strategies for unmasking them. Ryan Brogan, a special agent with the FBI, says Nixon has helped him and colleagues identify and arrest more than two dozen members of the community since 2011, when he first began working with her, and that her skills in exposing them are unparalleled. “If you get on Allison’s and my radar, you’re going [down]. It’s just a matter of time,” he says. “No matter how much digital anonymity and tradecraft you try to apply, you’re done.”

Though she’d done this work for more than a decade, Nixon couldn’t understand why the person behind the Waifu/Judische accounts was suddenly threatening her. She had given media interviews about the Com—most recently on 60 Minutes—but not about her work unmasking members to get them arrested, so the hostility seemed to come out of the blue. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn’t been on her radar for a while when the threats began, because she was tracking other targets.

Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats—and take them down for crimes they admitted to committing. “Prior to them death-threatening me, I had no reason to pay attention to them,” she says.

Com beginnings

Most people have never heard of the Com, but its influence and threat are growing. It’s an online community comprising loosely affiliated groups of, primarily, teens and twentysomethings in North America and English-speaking parts of Europe who have become part of what some call a cybercrime youth movement.
Over the last decade, its criminal activities have escalated from simple distributed denial-of-service (DDoS) attacks that disrupt websites to SIM-swapping hacks that hijack a victim’s phone service, as well as crypto theft, ransomware attacks, and corporate data theft. These crimes have affected AT&T, Microsoft, Uber, and others. Com members have also been involved in various forms of sextortion aimed at forcing victims to physically harm themselves or record themselves doing sexually explicit activities. The Com’s impact has also spread beyond the digital realm to kidnapping, beatings, and other violence.  One longtime cybercrime researcher, who asked to remain anonymous because of his work, says the Com is as big a threat in the cyber realm as Russia and China—for one unusual reason.

“There’s only so far that China is willing to go; there’s only so far that Russia or North Korea is willing to go,” he says, referring to international laws and norms, and fears of retaliation, that prevent states from going all out in cyber operations. That doesn’t stop the anarchic Com, he says.

“It is a pretty significant threat, and people tend to … push it under the rug [because] it’s just a bunch of kids,” he says. “But look at the impact [they have].” Brogan says the amount of damage they do in terms of monetary losses “can become staggering very quickly.”

There is no single site where Com members congregate; they spread across a number of web forums and Telegram and Discord channels. The group follows a long line of hacking and subculture communities that emerged online over the last two decades, gained notoriety, and then faded or vanished after prominent members were arrested or other factors caused their decline. They differed in motivation and activity, but all emerged from “the same primordial soup,” says Nixon.

The Com’s roots can be traced to the Scene, which began as a community of various “warez” groups engaged in pirating computer games, music, and movies. When Nixon began looking at the Scene, in 2011, its members were hijacking gaming accounts, launching DDoS attacks, and running booter services. (DDoS attacks overwhelm a server or computer with traffic from bot-controlled machines, preventing legitimate traffic from getting through; booters are tools that anyone can rent to launch a DDoS attack against a target of choice.) While they made some money, their primary goal was notoriety.

This changed around 2018. Cryptocurrency values were rising, and the Com—or the Community, as it sometimes called itself—emerged as a subgroup that ultimately took over the Scene. Members began to focus on financial gain—cryptocurrency theft, data theft, and extortion.
The pandemic two years later saw a surge in Com membership that Nixon attributes to social isolation and the forced movement of kids online for schooling. But she believes economic conditions and socialization problems have also driven its growth. Many Com members can’t get jobs because they lack skills or have behavioral issues, she says. A number who have been arrested have had troubled home lives and difficulty adapting to school, and some have shown signs of mental illness. The Com provides camaraderie, support, and an outlet for personal frustrations. Since 2018, it has also offered some a solution to their money problems. Loose-knit cells have sprouted from the community—Star Fraud, ShinyHunters, Scattered Spider, Lapsus$—to collaborate on clusters of crime. They usually target high-profile crypto bros and tech giants and have made millions of dollars from theft and extortion, according to court records. 
But dominance, power, and bragging rights are still motivators, even in profit operations, says the cybercrime researcher, which is partly why members target “big whales.” “There is financial gain,” he says, “but it’s also [sending a message that] I can reach out and touch the people that think they’re untouchable.” In fact, Nixon says, some members of the Com have overwhelming ego-driven motivations that end up conflicting with their financial motives.
“Often their financial schemes fall apart because of their ego, and that phenomenon is also what I’ve made my career on,” she says.

The hacker hunter emerges

Nixon has straight dark hair, wears wire-rimmed glasses, and has a slight build and bookish demeanor that, on first impression, could allow her to pass for a teen herself. She talks about her work in rapid cadences, like someone whose brain is filled with facts that are under pressure to get out, and she exudes a sense of urgency as she tries to make people understand the threat the Com poses. She doesn’t suppress her happiness when someone she’s been tracking gets arrested.

In 2011, when she first began investigating the communities from which the Com emerged, she was working the night shift in the security operations center of the security firm SecureWorks. The center responded to tickets and security alerts emanating from customer networks, but Nixon coveted a position on the company’s counter-threats team, which investigated and published threat-intelligence reports on mostly state-sponsored hacking groups from China and Russia. Without connections or experience, she had no path to investigative work. But Nixon is an intensely curious person, and this created its own path.

Where the threat team focused on the impact hackers had on customer networks—how they broke in, what they stole—Nixon was more interested in their motivations and the personality traits that drove their actions. She assumed there must be online forums where criminal hackers congregated, so she googled “hacking forums” and landed on a site called Hack Forums. “It was really stupid simple,” she says. She was surprised to see members openly discussing their crimes there.
She reached out to someone on the SecureWorks threat team to see if he was aware of the site, and he dismissed it as a place for “script kiddies”—a pejorative term for unskilled hackers.
This was a time when many cybersecurity pros were shifting their focus away from cybercrime to state-sponsored hacking operations, which were more sophisticated and getting a lot of attention. But Nixon likes to zig where others zag, and her colleague’s dismissiveness fueled her interest in the forums. Two other SecureWorks colleagues shared that interest, and the three studied the forums during downtime on their shifts. They focused on trying to identify the people running DDoS booters.  What Nixon loved about the forums was how accessible they were to a beginner like herself. Threat-intelligence teams require privileged access to a victim’s network to investigate breaches. But Nixon could access everything she needed in the public forums, where the hackers seemed to think no one was watching. Because of this, they often made mistakes in operational security, or OPSEC—letting slip little biographical facts such as the city where they lived, a school they attended, or a place they used to work. These details revealed in their chats, combined with other information, could help expose the real identities behind their anonymous masks.  “It was a shock to me that it was relatively easy to figure out who [they were],” she says.  She wasn’t bothered by the immature boasting and petty fights that dominated the forums. “A lot of people don’t like to do this work of reading chat logs. I realize that this is a very uncommon thing. And maybe my brain is built a little weird that I’m willing to do this,” she says. “I have a special talent that I can wade through garbage and it doesn’t bother me.” 
Nixon soon realized that not all the members were script kiddies. Some exhibited real ingenuity and “powerful” skills, she says, but because they were applying these to frivolous purposes—hijacking gamer accounts instead of draining bank accounts—researchers and law enforcement were ignoring them. Nixon began tracking them, suspecting that they would eventually direct their skills at more significant targets—an intuition that proved to be correct. And when they did, she had already amassed a wealth of information about them.

She continued her DDoS research for two years until a turning point in 2013, when the cybersecurity journalist Brian Krebs, who made a career tracking cybercriminals, got swatted.

About a dozen people from the security community worked with Krebs to expose the perpetrator, and Nixon was invited to help. Krebs sent her pieces of the puzzle to investigate, and eventually the group identified the culprit (though it would take two years for him to be arrested). When she was invited to dinner with Krebs and the other investigators, she realized she’d found her people. “It was an amazing moment for me,” she says. “I was like, wow, there’s all these like-minded people that just want to help and are doing it just for the love of the game, basically.”

Staying one step ahead

It was porn stars who provided Nixon with her next big research focus—one that underscored her skill at spotting Com actors and criminal trends in their nascent stages, before they emerged as major threats. In 2018, someone was hijacking the social media accounts of certain adult-film stars and using those accounts to blast out crypto scams to their large follower bases. Nixon couldn’t figure out how the hackers had hijacked the social media profiles, but she promised to help the actors regain access to their accounts if they agreed to show her the private messages the hackers had sent or received during the time they controlled them.
These messages led her to a forum where members were talking about how they stole the accounts. The hackers had tricked some of these actors into disclosing the mobile phone numbers of others. Then they used a technique called SIM swapping to reset passwords for social media accounts belonging to those other stars, locking them out.  In SIM swapping, fraudsters get a victim’s phone number assigned to a SIM card and phone they control, so that calls and messages intended for the victim go to them instead. This includes one-time security codes that sites text to account holders to verify themselves when accessing their account or changing its password. In some of the cases involving the porn stars, the hackers had manipulated telecom workers into making the SIM swaps for what they thought were legitimate reasons, and in other cases they bribed the workers to make the change. The hackers were then able to alter the password on the actors’ social media accounts, lock out the owners, and use the accounts to advertise their crypto scams.  SIM swapping is a powerful technique that can be used to hijack and drain entire cryptocurrency and bank accounts, so Nixon was surprised to see the fraudsters using it for relatively unprofitable schemes. But SIM swapping had rarely been used for financial fraud at that point, and like the earlier hackers Nixon had seen on Hack Forums, the ones hijacking porn star accounts didn’t seem to grasp the power of the technique they were using. Nixon suspected that this would change and SIM swapping would soon become a major problem, so she shifted her research focus accordingly. It didn’t take long for the fraudsters to pivot as well. Nixon’s skill at looking ahead in this way has served her throughout her career. 
On multiple occasions a hacker or hacking group would catch her attention—for using a novel hacking approach in some minor operation, for example—and she’d begin tracking their online posts and chats in the belief that they’d eventually do something significant with that skill.  They usually did. When they later grabbed headlines with a showy or impactful operation, these hackers would seem to others to have emerged from nowhere, sending researchers and law enforcement scrambling to understand who they were. But Nixon would already have a dossier compiled on them and, in some cases, had unmasked their real identity as well. Lizard Squad was an example of this. The group burst into the headlines in 2014 and 2015 with a series of high-profile DDoS campaigns, but Nixon and colleagues at the job where she worked at the time had already been watching its members as individuals for a while. So the FBI sought their assistance in identifying them. “The thing about these young hackers is that they … keep going until they get arrested, but it takes years for them to get arrested,” she says. “So a huge aspect of my career is just sitting on this information that has not been actioned [yet].” It was during the Lizard Squad years that Nixon began developing tools to scrape and record hacker communications online, though it would be years before she began using these concepts to scrape the Com chatrooms and forums. These channels held a wealth of data that might not seem useful during the nascent stage of a hacker’s career but could prove critical later, when law enforcement got around to investigating them; yet the contents were always at risk of being deleted by Com members or getting taken down by law enforcement when it seized websites and chat channels. 
Over several years, she scraped and preserved whatever chatrooms she was investigating. But it wasn’t until early 2020, when she joined Unit 221B, that she got the chance to scrape the Telegram and Discord channels of the Com. She pulled all of this data together into a searchable platform that other researchers and law enforcement could use. The company hired two former hackers to help build scraping tools and infrastructure for this work; the result is eWitness, a community-driven, invitation-only platform. It was initially seeded only with data Nixon had collected after she arrived at Unit 221B, but has since been augmented with data that other users of the platform have scraped from Com social spaces as well, some of which doesn’t exist in public forums anymore. Brogan, of the FBI, says it’s an incredibly valuable tool, made more so by Nixon’s own contributions. Other security firms scrape online criminal spaces as well, but they seldom share the content with outsiders, and Brogan says Nixon’s work is unique because she engages with the actors in chat spaces to draw out information from them that “would not be otherwise normally available.”

The preservation project she started when she got to Unit 221B could not have been better timed, because it coincided with the pandemic, the surge in new Com membership, and the emergence of two disturbing Com offshoots, CVLT and 764. She was able to capture their chats as these groups first emerged; after law enforcement arrested leaders of the groups and took control of the servers where their chats were posted, this material went offline. CVLT—pronounced “cult”—was reportedly founded around 2019 with a focus on sextortion and child sexual abuse material.
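The core mechanics of such a preservation platform (store the scraped messages verbatim, index them so investigators can search them later) can be sketched in a few lines of Python. This is a toy illustration only, not eWitness itself; the `ChatArchive` class, its fields, and the sample messages are all invented for the example.

```python
from collections import defaultdict

class ChatArchive:
    """Toy searchable archive for scraped chat messages (illustrative only)."""

    def __init__(self):
        self.messages = []             # preserved verbatim, in scrape order
        self.index = defaultdict(set)  # lowercase word -> message ids

    def add(self, channel, author, text):
        """Preserve one message and index its words."""
        msg_id = len(self.messages)
        self.messages.append({"channel": channel, "author": author, "text": text})
        for word in text.lower().split():
            self.index[word].add(msg_id)
        return msg_id

    def search(self, *words):
        """Return messages containing every query word (simple AND search)."""
        ids = None
        for word in words:
            hits = self.index.get(word.lower(), set())
            ids = hits if ids is None else ids & hits
        return [self.messages[i] for i in sorted(ids or [])]

# Invented sample data for the sketch.
archive = ChatArchive()
archive.add("#general", "actor_a", "nobody can trace this SIM swap")
archive.add("#general", "actor_b", "the swap went through last night")
archive.add("#market", "actor_c", "selling logs cheap")

hits = archive.search("swap")
print(len(hits))  # 2
```

The point of the design is the one described in the article: messages are kept even if the original channel is later deleted or seized, and the index makes the corpus useful to an investigator years afterward.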
764 emerged from CVLT and was spearheaded by a 15-year-old in Texas named Bradley Cadenhead, who named it after the first digits of his zip code. Its focus was extremism and violence.

In 2021, because of what she observed in these groups, Nixon turned her attention to sextortion among Com members. The type of sextortion they engaged in has its roots in activity that began a decade ago as “fan signing.” Hackers would use the threat of doxxing to coerce someone, usually a young female, into writing the hacker’s handle on a piece of paper. The hacker would use a photo of it as an avatar on his online accounts—a kind of trophy. Eventually some began blackmailing victims into writing the hacker’s handle on their face, breasts, or genitals. With CVLT, this escalated even further; targets were blackmailed into carving a Com member’s name into their skin or engaging in sexually explicit acts while recording or livestreaming themselves. During the pandemic a surprising number of SIM swappers crossed into child sexual abuse material and sadistic sextortion, according to Nixon. She hates tracking this gruesome activity, but she saw an opportunity to exploit it for good. She had long been frustrated at how leniently judges treated financial fraudsters because of their crimes’ seemingly nonviolent nature. But she saw a chance to get harsher sentences for them if she could tie them to their sextortion and began to focus on these crimes.

At this point, Waifu still wasn’t on her radar. But that was about to change.

Endgame

Nixon landed in Waifu’s crosshairs after he and fellow members of the Com were involved in a large hack involving AT&T customer call records in April 2024. Waifu’s group gained access to dozens of cloud accounts with Snowflake, a company that provides online data storage for customers. One of those customers had more than 50 billion call logs of AT&T wireless subscribers stored in its Snowflake account.
Among the subscriber records were call logs for FBI agents who were AT&T customers. Nixon and other researchers believe the hackers may have been able to identify the phone numbers of agents through other means. Then they may have used a reverse-lookup program to identify the owners of phone numbers that the agents called or that called them and found Nixon’s number among them. This is when they began harassing her.

But then they got reckless. They allegedly extorted nearly $400,000 from AT&T in exchange for promising to delete the call records they’d stolen. Then they tried to re-extort the telecom, threatening on social media to leak the records they claimed to have deleted if it didn’t pay more. They tagged the FBI in the post. “It’s like they were begging to be investigated,” says Nixon. The Snowflake breaches and AT&T records theft were grabbing headlines at the time, but Nixon had no idea her number was in the stolen logs or that Waifu/Judische was a prime suspect in the breaches. So she was perplexed when he started taunting and threatening her online.

Over several weeks in May and June, a pattern developed. Waifu or one of his associates would post a threat against her and then post a message online inviting her to talk. She assumes now that they believed she was helping law enforcement investigate the Snowflake breaches and hoped to draw her into a dialogue to extract information from her about what authorities knew. But Nixon wasn’t helping the FBI investigate them yet. It was only after she began looking at Waifu for the threats that she became aware of his suspected role in the Snowflake hack. It wasn’t the first time she had studied him, though.
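The reverse-lookup pivot the researchers hypothesize amounts to a simple join over call records: start from a set of known numbers, then collect every counterparty that appears with them in the logs. A minimal sketch, where the `counterparties` helper and every number shown are invented for illustration:

```python
def counterparties(call_logs, known_numbers):
    """Return the set of numbers that called, or were called by, any known number."""
    found = set()
    for caller, callee in call_logs:
        if caller in known_numbers:
            found.add(callee)
        elif callee in known_numbers:
            found.add(caller)
    return found

# Invented sample records: (caller, callee) pairs.
logs = [
    ("555-0100", "555-0199"),  # known number -> counterparty
    ("555-0142", "555-0100"),  # counterparty -> known number
    ("555-0123", "555-0177"),  # unrelated pair
]
known = {"555-0100"}

print(sorted(counterparties(logs, known)))
# ['555-0142', '555-0199']
```

The output set would then be fed to a reverse-lookup service to attach names to numbers, which is the step the article suggests surfaced Nixon’s number.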
Waifu had come to her attention in 2019 when he bragged about framing another Com member for a hoax bomb threat and later talked about his involvement in SIM-swapping operations. He made an impression on her. He clearly had technical skills, but Nixon says he also often appeared immature, impulsive, and emotionally unstable, and he was desperate for attention in his interactions with other members. He bragged about not needing sleep and using Adderall to hack through the night. He was also a bit reckless about protecting personal details. He wrote in private chats to another researcher that he would never get caught because he was good at OPSEC, but he also told the researcher that he lived in Canada—which turned out to be true. Nixon’s process for unmasking Waifu followed a general recipe she used to unmask Com members: She’d draw a large investigative circle around a target and all the personas that communicated with that person online, and then study their interactions to narrow the circle to the people with the most significant connections to the target. Some of the best leads came from a target’s enemies; she could glean a lot of information about their identity, personality, and activities from what the people they fought with online said about them. “The enemies and the ex-girlfriends, generally speaking, are the best [for gathering intelligence on a suspect],” she says. “I love them.” While she was doing this, Waifu and his group were reaching out to other security researchers, trying to glean information about Nixon and what she might be investigating. They also attempted to plant false clues with the researchers by dropping the names of other cybercriminals in Canada who could plausibly be Waifu. Nixon had never seen cybercriminals engage in counterintelligence tactics like this. 
Amid this subterfuge and confusion, Nixon and another researcher working with her did a lot of consulting and cross-checking with other researchers about the clues they were gathering to ensure they had the right name before they gave it to the FBI. By July she and the researcher were convinced they had their guy: Connor Riley Moucka, a 25-year-old high school dropout living with his grandfather in Ontario. On October 30, Royal Canadian Mounted Police converged on Moucka’s home and arrested him. According to an affidavit filed in Canadian court, a plainclothes Canadian police officer visited Moucka’s house under some pretense on the afternoon of October 21, nine days before the arrest, to secretly capture a photo of him and compare it with an image US authorities had provided. The officer knocked and rang the bell; Moucka opened the door looking disheveled and told the visitor: “You woke me up, sir.” He told the officer his name was Alex; Moucka sometimes used the alias Alexander Antonin Moucka. Satisfied that the person who answered the door was the person the US was seeking, the officer left. Waifu’s online rants against Nixon escalated at this point, as did his attempts at misdirection. She believes the visit to his door spooked him. Nixon won’t say exactly how they unmasked Moucka—only that he made a mistake. “I don’t want to train these people in how to not get caught [by revealing his error],” she says. The Canadian affidavit against Moucka reveals a number of other violent posts he’s alleged to have made online beyond the threats he made against her. 
Some involve musings about becoming a serial killer or mass-mailing sodium nitrate pills to Black people in Michigan and Ohio; in another, his online persona talks about obtaining firearms to “kill Canadians” and commit “suicide by cop.”

Prosecutors, who list Moucka’s online aliases as including Waifu, Judische, and two more in the indictment, say he and others extorted at least $2.5 million from at least three victims whose data they stole from Snowflake accounts. Moucka has been charged with nearly two dozen counts, including conspiracy, unauthorized access to computers, extortion, and wire fraud. He has pleaded not guilty and was extradited to the US last July. His trial is scheduled for October this year, though hacking cases usually end in plea agreements rather than going to trial.

It took months for authorities to arrest Moucka after Nixon and her colleague shared their findings, but an alleged associate of his in the Snowflake conspiracy, a US Army soldier named Cameron John Wagenius (Kiberphant0m online), was arrested more quickly. On November 10, 2024, Nixon and her team found a mistake Wagenius made that helped identify him, and on December 20 he was arrested. Wagenius has already pleaded guilty to two charges around the sale or attempted sale of confidential phone records and will be sentenced this March.

These days Nixon continues to investigate sextortion among Com members. But she says that remaining members of Waifu’s group still taunt and threaten her. “They are continuing to persist in their nonsense, and they are getting taken out one by one,” she says. “And I’m just going to keep doing that until there’s no one left on that side.”

Kim Zetter is a journalist who covers cybersecurity and national security. She is the author of Countdown to Zero Day.


ALS stole this musician’s voice. AI let him sing again.

There are tears in the audience as Patrick Darling’s song begins to play. It’s a heartfelt song written for his great-grandfather, whom he never got the chance to meet. But this performance is emotional for another reason: It’s Darling’s first time on stage with his bandmates since he lost the ability to sing two years ago. The 32-year-old musician was diagnosed with amyotrophic lateral sclerosis (ALS) when he was 29 years old. Like other types of motor neuron disease (MND), it affects nerves that supply the body’s muscles. People with ALS eventually lose the ability to control their muscles, including those that allow them to move, speak, and breathe. Darling’s last stage performance was over two years ago. By that point, he had already lost the ability to stand and play his instruments and was struggling to sing or speak. But recently, he was able to re-create his lost voice using an AI tool trained on snippets of old audio recordings. Another AI tool has enabled him to use this “voice clone” to compose new songs. Darling is able to make music again. “Sadly, I have lost the ability to sing and play my instruments,” Darling said on stage at the event, which took place in London on Wednesday, using his voice clone. “Despite this, most of my time these days is spent still continuing to compose and produce my music. Doing so feels more important than ever to me now.”
Losing a voice

Darling says he’s been a musician and a composer since he was around 14 years old. “I learned to play bass guitar, acoustic guitar, piano, melodica, mandolin, and tenor banjo,” he said at the event. “My biggest love, though, was singing.” He met bandmate Nick Cocking over 10 years ago, while he was still a university student, says Cocking. Darling joined Cocking’s Irish folk outfit, the Ceili House Band, shortly afterwards, and their first gig together was in April 2014. Darling, who joined the band as a singer and guitarist, “elevated the musicianship of the band,” says Cocking.
Patrick Darling (second from left) with his former bandmates, including Nick Cocking (far right). COURTESY OF NICK COCKING

But a few years ago, Cocking and his other bandmates started noticing changes in Darling. He became clumsy, says Cocking. He recalls one night when the band had to walk across the city of Cardiff in the rain: “He just kept slipping and falling, tripping on paving slabs and things like that.” He didn’t think too much of it at the time, but Darling’s symptoms continued to worsen. The disease affected his legs first, and in August 2023, he started needing to sit during performances. Then he started to lose the use of his hands. “Eventually he couldn’t play the guitar or the banjo anymore,” says Cocking. By April 2024, Darling was struggling to talk and breathe at the same time, says Cocking. For that performance, the band carried Darling on stage. “He called me the day after and said he couldn’t do it anymore,” Cocking says, his voice breaking. “By June 2024, it was done.” It was the last time the band played together.

Re-creating a voice

Darling was put in touch with a speech therapist, who raised the possibility of “banking” his voice. People who are losing the ability to speak can opt to record themselves speaking and use those recordings to create speech sounds that can then be activated with typed text, whether by hand or perhaps using a device controlled by eye movements. Some users have found these tools to be robotic sounding. But Darling had another issue. “By that stage, my voice had already changed,” he said at the event. “It felt like we were saving the wrong voice.” Then another speech therapist introduced him to a different technology. Richard Cave is a speech and language therapist and a researcher at University College London. He is also a consultant for ElevenLabs, an AI company that develops agents and audio, speech, video, and music tools.
One of these tools can create “voice clones”—realistic mimics of real voices that can be generated from minutes, or even seconds, of a person’s recorded voice. Last year, ElevenLabs launched an impact program with a promise to provide free licenses to these tools for people who have lost their voices to ALS or other diseases, like head and neck cancer or stroke.  The tool is already helping some of those users. “We’re not really improving how quickly they’re able to communicate, or all of the difficulties that individuals with MND are going through physically, with eating and breathing,” says Gabi Leibowitz, a speech therapist who leads the program. “But what we are doing is giving them a way … to create again, to thrive.” Users are able to stay in their jobs longer and “continue to do the things that make them feel like human beings,” she says.

Cave worked with Darling to use the tool to re-create his lost speaking voice from older recordings. “The first time I heard the voice, I thought it was amazing,” Darling said at the event, using the voice clone. “It sounded exactly like I had before, and you literally wouldn’t be able to tell the difference,” he said. “I will not say what the first word I made my new voice say, but I can tell you that it began with ‘f’ and ended in ‘k.’”

Re-creating his singing voice wasn’t as easy. The tool typically requires around 10 minutes of clear audio to generate a clone. “I had no high-quality recordings of myself singing,” Darling said. “We had to use audio from videos on people’s phones, shot in noisy pubs, and a couple of recordings of me singing in my kitchen.” Still, those snippets were enough to create a “synthetic version of [Darling’s] singing voice,” says Cave. In the recordings, Darling sounded a little raspy and “was a bit off” on some of the notes, says Cave. The voice clone has the same qualities. It doesn’t sound perfect, Cave says—it sounds human. “The ElevenLabs voice that we’ve created is wonderful,” Darling said at the event. “It definitely sounds like me—[it] just kind of feels like a different version of me.” ElevenLabs has also developed an AI music generator called Eleven Music. The tool allows users to compose tracks, using text prompts to choose the musical style. Several well-known artists have also partnered with the company to license AI clones of their voices, including the actor Michael Caine, whose voice clone is being used to narrate an upcoming ElevenLabs documentary. Last month, the company released an album of 11 tracks created using the tool. “The Liza Minnelli track is really a banger,” says Cave. Eleven Music can generate a song in a minute, but Darling and Cave spent around six weeks fine-tuning Darling’s song.
Using text prompts, any user can “create music and add lyrics in any style [they like],” says Cave. Darling likes Irish folk, but Cave has also worked with a man in Colombia who is creating Colombian folk music. (The ElevenLabs tool is currently available in 74 languages.)

Back on stage

Last month, Cocking got a call from Cave, who sent him Darling’s completed track. “I heard the first two or three words he sang, and I had to turn it off,” he says. “I was just in bits, in tears. It took me a good half a dozen times to make it to the end of the track.”
Darling and Cave were making plans to perform the track live at the ElevenLabs summit in London on Wednesday, February 11. So Cocking and bandmate Hari Ma each arranged accompanying parts to play on the mandolin and fiddle. They had a couple of weeks to rehearse before they joined Darling on stage, two years after their last performance together. “I wheeled him out on stage, and neither of us could believe it was happening,” says Cave. “He was thrilled.” The song was played as Darling remained on stage, and Cocking and Ma played their instruments live. Cocking and Cave say Darling plans to continue to use the tools to make music. Cocking says he hopes to perform with Darling again but acknowledges that, given the nature of ALS, it is difficult to make long-term plans. “It’s so bittersweet,” says Cocking. “But getting up on stage and seeing Patrick there filled me with absolute joy. I know Patrick really enjoyed it as well. We’ve been talking about it … He was really, really proud.”


The Download: an exclusive chat with Jim O’Neill, and the surprising truth about heists

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

US deputy health secretary: Vaccine guidelines are still subject to change

Over the past year, Jim O’Neill has become one of the most powerful people in public health. As the US deputy health secretary, he holds two roles at the top of the country’s federal health and science agencies. He oversees a department with a budget of over a trillion dollars. And he signed the decision memorandum on the US’s deeply controversial new vaccine schedule. He’s also a longevity enthusiast. In an exclusive interview with MIT Technology Review earlier this month, O’Neill described his plans to increase human healthspan through longevity-focused research supported by ARPA-H, a federal agency dedicated to biomedical breakthroughs. Fellow longevity enthusiasts said they hope he will bring attention and funding to their cause. At the same time, O’Neill defended reducing the number of broadly recommended childhood vaccines, a move that has been widely criticized by experts in medicine and public health. Read the full story.
—Jessica Hamzelou
The myth of the high-tech heist

Making a movie is a lot like pulling off a heist. That’s what Steven Soderbergh—director of the Ocean’s franchise, among other heist-y classics—said a few years ago. You come up with a creative angle, put together a team of specialists, figure out how to beat the technological challenges, rehearse, move with Swiss-watch precision, and—if you do it right—redistribute some wealth. But pulling off a heist isn’t much like the movies. Surveillance cameras, computer-controlled alarms, knockout gas, and lasers hardly ever feature in big-ticket crime. In reality, technical countermeasures are rarely a problem, and high-tech gadgets are rarely a solution. Read the full story. —Adam Rogers

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.

RFK Jr. follows a carnivore diet. That doesn’t mean you should.

Americans have a new set of diet guidelines. Robert F. Kennedy Jr. has taken an old-fashioned food pyramid, turned it upside down, and plonked a steak and a stick of butter in prime positions.

Kennedy and his Make America Healthy Again mates have long been extolling the virtues of meat and whole-fat dairy, so it wasn’t too surprising to see those foods recommended alongside vegetables and whole grains (despite the well-established fact that too much saturated fat can be extremely bad for you). Some influencers have taken the meat trend to extremes, following a “carnivore diet.” A recent review of research into nutrition misinformation on social media found that a lot of shared diet information is nonsense. But what’s new is that some of this misinformation comes from the people who now lead America’s federal health agencies. Read the full story. —Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration has revoked a landmark climate ruling
In its absence, it can erase the limits that restrict planet-warming emissions. (WP $)
+ Environmentalists and Democrats have vowed to fight the reversal. (Politico)
+ They’re seriously worried about how it will affect public health. (The Hill)

2 An unexplained wave of bot traffic is sweeping the web
Sites across the world are witnessing automated traffic that appears to originate from China. (Wired $)
3 Amazon’s Ring has axed its partnership with Flock
Law enforcement will no longer be able to request Ring doorbell footage from its users. (The Verge)
+ Ring’s recent TV ad for a dog-finding feature riled viewers. (WSJ $)
+ How Amazon Ring uses domestic violence to market doorbell cameras. (MIT Technology Review)

4 Americans are taking the hit for almost all of Trump’s tariffs
Consumers and companies in the US, not overseas, are shouldering 90% of levies. (Reuters)
+ Trump has long insisted that his tariff costs will be borne by foreign exporters. (FT $)
+ Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

5 Meta and Snap say Australia’s social media ban hasn’t affected business
They’re still making plenty of money amid the country’s decision to ban under-16s from the platforms. (Bloomberg $)
+ Does preventing teens from going online actually do any good? (Economist $)

6 AI workers are selling their shares before their firms go public
Cashing out early used to be a major Silicon Valley taboo. (WSJ $)
7 Elon Musk posted about race almost every day last month
His fixation on a white racial majority appears to be intensifying. (The Guardian)
+ Race is a recurring theme in the Epstein emails, too. (The Atlantic $)

8 The man behind a viral warning about AI used AI to write it
But he stands behind its content. (NY Mag $)
+ How AI-generated text is poisoning the internet. (MIT Technology Review)

9 Influencers are embracing Chinese traditions ahead of the New Year 🧧
On the internet, no one knows you’re actually from Wisconsin. (NYT $)

10 Australia’s farmers are using AI to count sheep 🐑
No word on whether it’s helping them sleep easier, though. (FT $)

Quote of the day

“Ignoring warning signs will not stop the storm. It puts more Americans directly in its path.”
—Former US secretary of state John Kerry takes aim at the US government’s decision to repeal the key rule that allows it to regulate climate-heating pollution, the Guardian reports.

One more thing

The Vera C. Rubin Observatory is ready to transform our understanding of the cosmos

High atop Chile’s 2,700-meter Cerro Pachón, the air is clear and dry, leaving few clouds to block the beautiful view of the stars. It’s here that the Vera C. Rubin Observatory will soon use a car-size 3,200-megapixel digital camera—the largest ever built—to produce a new map of the entire night sky every three days. Findings from the observatory will help tease apart fundamental mysteries like the nature of dark matter and dark energy, two phenomena that have not been directly observed but affect how objects are bound together—and pushed apart. A quarter-century in the making, the observatory is poised to expand our understanding of just about every corner of the universe. Read the full story. —Adam Mann
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Why 2026 is shaping up to be the year of the pop comeback.
+ Almost everything we thought we knew about Central America’s Maya has turned out to be completely wrong.
+ The Bigfoot hunters have spoken!
+ This fun game puts you in the shoes of a distracted man trying to participate in a date while playing on a GameBoy.


Azule Energy discovers oil offshore Angola

Azule Energy and partners discovered oil in Block 15/06 in the Lower Congo basin offshore Angola. Preliminary estimates indicate oil in place of around 500 million bbl, and the presence of existing nearby production infrastructure—about 18 km from Olombendo FPSO—improves development prospects, the operator said in a release Feb. 13. The Algaita-01 exploration well, spudded on Jan. 10, 2026, was drilled by the Saipem 12000 drillship in a water depth of 667 m. The well encountered oil-bearing sandstones in multiple Upper Miocene intervals. Drilling operations were completed Jan. 26, followed by advanced formation evaluation logs to assess reservoir quality and fluid characteristics. Preliminary interpretation of wireline logging and fluid sampling indicates the presence of multiple reservoir intervals with excellent petrophysical properties and fluid mobilities, the company said. Azule Energy is an incorporated joint venture equally owned by bp plc and Eni SpA. The company currently produces around 200,000 boe/d in Angola. Block 15/06 is operated by Azule Energy (36.84%), in partnership with SSI (26.32%) and Sonangol E&P (36.84%).


ExxonMobil transporting, storing captured CO2 from second operation in Louisiana

ExxonMobil Corp. is now transporting and storing captured CO2 from the New Generation Gas Gathering (NG3) project in Gillis, La. Natural gas produced from East Texas and Louisiana is gathered through the NG3 gathering system for treatment at the NG3 Gillis plant, where up to 1.2 million metric tons/year (tpy) of CO2 is expected to be removed from the natural gas stream before the product is redelivered to Gulf Coast markets, including LNG plants, ExxonMobil said. This startup marks the second active commercial carbon capture and storage (CCS) operation for ExxonMobil in Louisiana.
In July 2025, the company began transporting and storing CO2 from Illinois-based CF Industries Holdings Inc.’s Donaldsonville Complex, enabling the production of low-carbon ammonia. CF Industries’ Donaldsonville Complex is located on 1,400 acres along the west bank of the Mississippi River in southeastern Louisiana. (Photo from CF Industries.)

The CO2 contracted for the company’s two active projects accounts for up to 3.2 million tpy, about one-third of ExxonMobil’s committed CCS volumes. The company is currently storing


Ovintiv to divest Anadarko assets for $3 billion

In a release Feb. 17, Brendan McCracken, Ovintiv president and chief executive officer, said the company has “built one of the deepest premium inventory positions in our industry in the two most valuable plays in North America, the Permian and the Montney,” and that the Anadarko assets sale “positions [Ovintiv] to deliver superior returns for our shareholders for many years to come.” In 2025, Ovintiv had noted plans to sell the assets to help offset the cost of its acquisition of NuVista Energy Ltd. That $2.7-billion cash and stock deal, which closed earlier this month, added about 930 net 10,000-ft equivalent well locations and about 140,000 net acres (70% undeveloped) in the core of the oil-rich Alberta Montney. Proceeds from the Anadarko assets sale are earmarked for accelerated debt reduction, the company said. Ovintiv’s sale of its Anadarko assets is expected to close early in this year’s second quarter, subject to customary conditions, with an effective date of Jan. 1, 2026.


Azule Energy starts Ndungu full field production offshore Angola

Azule Energy has started full field production from Ndungu, part of the Agogo Integrated West Hub Project (IWH) in the western area of Block 15/06, offshore Angola. Ndungu full field lies about 10 km from the N’goma FPSO in a water depth of around 1,100 m and comprises seven production wells and four injection wells, with an expected production peak of 60,000 b/d of oil. The National Agency for Petroleum, Gas and Biofuels (ANPG) and Azule Energy noted the full field start-up with first oil of three production wells. The phased integration of IWH, with Ndungu full field producing first via N’goma FPSO and later via Agogo FPSO, is expected to reach a peak output of about 175,000 b/d across the two fields. The fields have combined estimated reserves of about 450 million bbl. The Agogo IWH project is operated by Azule Energy with a 36.84% stake alongside partners Sonangol E&P (36.84%) and Sinopec International (26.32%).

Read More »

North Atlantic’s Gravenchon refinery scheduled for major turnaround

North Atlantic France SAS, the France-based subsidiary of Canada-based North Atlantic Refining Ltd., is undertaking planned maintenance in March at its North Atlantic Energies-operated 230,000-b/d Notre-Dame-de-Gravenchon refinery in Port-Jérôme-sur-Seine, Normandy. Scheduled to begin Mar. 3 with the phased shutdown of unidentified units, the turnaround will involve thorough inspections of equipment designed for continuous operation, as well as unspecified works to improve the site’s energy efficiency, environmental performance, and overall competitiveness, North Atlantic Energies said Feb. 16. The turnaround is part of the operator’s routine maintenance program aimed at meeting regulatory requirements to ensure the safety, compliance, and long-term performance of the refinery, and will not interrupt product supplies to customers during the shutdown period, the company said. While North Atlantic Energies confirmed the phased shutdown of units slated for work would last several days, it did not reveal a definitive timeline for the full turnaround, nor details of the specific works to be carried out. The turnaround will be the first executed under North Atlantic Group’s ownership, which completed its purchase of the formerly ExxonMobil Corp.-majority-owned refinery and associated petrochemical assets at the site in November 2025.

Read More »

CFEnergía to supply natural gas to low-carbon methanol plant in Mexico

CFEnergía, a subsidiary of Mexico’s Federal Electricity Commission (CFE), has agreed to supply natural gas to Transition Industries LLC for its Pacifico Mexinol project near Topolobampo, Sinaloa, Mexico. Under the signed agreement, which enables the start of Pacifico Mexinol’s construction phase, CFEnergía will supply about 160 MMcfd of natural gas for an unspecified timeframe described as “long term,” Transition Industries said in a release Feb. 16. The natural gas—to be sourced from the US and supplied at market prices via existing infrastructure—will be used as a “critical input for Mexinol’s production of ultra-low carbon methanol,” the company said. The $3.3-billion Pacifico Mexinol project, when it begins operations in late 2029 to early 2030, is expected to be the world’s largest ultra-low carbon chemicals plant, producing about 1.8 million tonnes of blue methanol and 350,000 tonnes of green methanol annually. Supply is aimed at markets in Asia, including Japan, while also supporting development of the domestic market and the Mexican chemical industry. Mitsubishi Gas Chemical has committed to purchasing about 1 million tonnes/year of methanol from the project, about 50% of its planned production. Transition Industries is jointly developing Pacifico Mexinol with the International Finance Corporation (IFC), a member of the World Bank Group. Last year, the company signed a contingent engineering, procurement, and construction (EPC) contract with a consortium of Samsung E&A Co. Ltd., Grupo Samsung E&A Mexico SA de CV, and Techint Engineering and Construction for the project. MAIRE group’s technology division NextChem, through its subsidiary KT TECH SpA, also signed a basic engineering and critical and proprietary equipment supply agreement with Samsung E&A covering supply of its proprietary NX AdWinMethanol®Zero technology to the project.

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE