Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

Google Research touts memory-compression breakthrough for AI processing

The last time the market witnessed a shakeup like this was China’s DeepSeek, but doubts about its efficiency gains emerged quickly: developers found they depended on deep architectural decisions that had to be built in from the start. TurboQuant requires no retraining or fine-tuning; you drop it straight into existing inference pipelines, at least in theory. If it works in production systems without retrofitting, data center operators could see tremendous performance gains on existing hardware instead of throwing more hardware at the problem. However, analysts urge caution before jumping to conclusions. “This is a research breakthrough, not a shipping product,” said Alex Cordovil, research director for physical infrastructure at The Dell’Oro Group. “There’s often a meaningful gap between a published paper and real-world inference workloads.” Dell’Oro also notes that efficiency gains in AI compute tend to be consumed by new demand, a dynamic known as the Jevons paradox: “Any freed-up capacity would likely be absorbed by frontier models expanding their capabilities rather than reducing their hardware footprint.” Jim Handy, president of Objective Analysis, agrees on that second point. “Hyperscalers won’t cut their spending – they’ll just spend the same amount and get more bang for their buck,” he said. “Data centers aren’t looking to reach a certain performance level and subsequently stop spending on AI. They’re looking to out-spend each other to gain market dominance. This won’t change that.” Google plans to present a paper outlining TurboQuant at the ICLR conference in Rio de Janeiro, running from April 23 through April 27.
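The article doesn’t detail TurboQuant’s algorithm, but the family it belongs to, post-training quantization, can be sketched in a few lines. The snippet below is a generic illustration of symmetric int8 round-trip compression in NumPy, not TurboQuant itself: it shows why such schemes are “drop-in,” since tensors are compressed and restored at inference time with no retraining.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: scale, round, clip."""
    scale = max(np.abs(x).max() / 127.0, 1e-8)   # guard against all-zero tensors
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# "Drop-in" use: compress a cached tensor, restore it on read; no retraining.
rng = np.random.default_rng(0)
kv_block = rng.standard_normal((4, 64)).astype(np.float32)
q, s = quantize_int8(kv_block)
err = np.abs(kv_block - dequantize(q, s)).max()
print(f"{kv_block.nbytes} bytes -> {q.nbytes} bytes, max abs error {err:.4f}")
```

The 4x memory saving (float32 to int8) is the kind of headroom the article describes; real schemes add per-channel scales and outlier handling on top of this basic round trip.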

Read More »

Energy Department Authorizes Additional Exports of LNG from Elba Island Terminal, Strengthening Global Energy Supply with U.S. LNG

WASHINGTON—U.S. Secretary of Energy Chris Wright today authorized an immediate 22% increase in exports of liquefied natural gas (LNG) from the Elba Island Terminal in Chatham County, Georgia. With today’s order, Kinder Morgan subsidiary Southern LNG Company L.L.C., operator of the Elba Island LNG Terminal, is now authorized to export an additional 28.25 billion cubic feet per year (Bcf/yr) to non-free trade agreement countries, strengthening global natural gas supplies with reliable U.S. LNG. Elba Island was previously authorized to export up to 130 Bcf/yr of natural gas as LNG to non-free trade agreement countries and has been exporting U.S. LNG since 2019. The project is positioned to export the additional approved volumes immediately. “At a time when global energy supply routes face disruption, the United States remains a reliable energy partner to our allies and trading partners,” said Kyle Haustveit, DOE Assistant Secretary for Hydrocarbons and Geothermal Energy. “DOE is using all available authorities to ensure American energy can reach global markets when it is needed most, supporting energy security and helping stabilize global energy supplies.” The action comes as global oil and LNG supply routes face disruption from tensions in the Middle East and attacks carried out by Iran and its proxies, threatening the reliable flow of energy through critical maritime corridors. The Department will continue to act, using its full set of authorities, to ensure U.S. LNG remains a dependable energy source in global energy markets and a stabilizing presence in times of disruption. Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter, with exports reaching all-time highs in March 2026. Since President Trump ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations.
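As a quick sanity check on the figures in the order (130 Bcf/yr previously authorized, 28.25 Bcf/yr added), the stated 22% increase follows directly:

```python
previously_authorized = 130.0   # Bcf/yr, per the order
additional = 28.25              # Bcf/yr, newly authorized
pct_increase = additional / previously_authorized * 100
new_total = previously_authorized + additional
print(f"new ceiling: {new_total:.2f} Bcf/yr, a {pct_increase:.1f}% increase")
```

28.25/130 is about 21.7%, which the announcement rounds to 22%, for a new ceiling of 158.25 Bcf/yr.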
With recent final investment decisions for additional export capacity, U.S. LNG exports are set

Read More »

Why can’t we have nice routers anymore?

In the Volt Typhoon and Flax Typhoon attacks, the routers themselves weren’t compromised because they were foreign-made routers. Far from it! They were compromised because they were unpatched, Internet-exposed, and end-of-life. The router manufacturers were no more guilty of opening the doors to these attacks than Microsoft is for your company’s Windows 7 PCs being hacked in 2026. Only the Salt Typhoon assault on Cisco IOS XE software, which was running on enterprise-grade routers—specifically, ASR 1000 Series, ISR 4000 Series, and Catalyst 8000 Series edge platforms—can be linked directly to the routers themselves. Guess what, though? You can still buy, use, and deploy this Cisco hardware, which top American telecoms such as AT&T, Verizon, and T-Mobile use as core routers. Uncle Joe wants to replace his router with a brand-new Wi-Fi 7 model? Nope, he can’t do it. Multi-billion-dollar companies decide to replace vital infrastructure routers that carry billions of messages every day? Sure, go for it! You know, if it were me, I’d be taking a long, hard look at the actual modern enterprise networking gear that we know has been breached. Why isn’t the FCC doing this? Darned if I know. Even the FCC acknowledges that some of Cisco’s problems have nothing to do with who made the hardware and where it was built. For example, the truly awful CVE-2023-20198 vulnerability, with its CVSS score of 10, was all about a boneheaded security hole in Cisco IOS XE Web UI, not the firmware or hardware. The FCC argues, however, that consumer routers pose unique risks because they’re deployed in millions of homes with minimal security oversight, thus making them ideal for botnet infrastructure. I can’t argue with that. But that has nothing to do with who made these devices and where.

Read More »

Amazon Middle East datacenter suffers second drone hit as Iran steps up attacks

Amazon was contacted for comment on the latest Bahrain drone incident, but said it had nothing to add beyond the statement in its current advisory.

Denial of infrastructure

Doing the damage is the Shahed-136, a small and unsophisticated drone designed to overwhelm defenders with numbers. If only one in twenty reaches its target, the price-performance still exceeds that of more expensive systems. When aimed at critical infrastructure such as datacenters, the effect is also psychological; the threat of an attack on its own can be enough to make it difficult for organizations to continue using an at-risk facility. Iran’s targeting of the Bahrain datacenter is unlikely to be random. Amazon opened its ME-SOUTH-1 AWS presence in 2019, and it is still believed to be the company’s largest site in the Middle East. Earlier this week, the Islamic Revolutionary Guard Corps (IRGC) Telegram channel explicitly threatened to target at least 18 US companies operating in the region, including Microsoft, Google, Nvidia, and Apple. This follows similar threats to an even longer list of US companies made on the IRGC-affiliated Tasnim News Agency in recent weeks. That strategy doesn’t bode well for US companies that have made large investments in Middle Eastern datacenter infrastructure in recent years, drawn by the growing wealth and influence of countries in the region. This includes Amazon, which has announced plans to build a $5.3 billion datacenter in Saudi Arabia, due to become available in 2026. If this is now under threat, whether by warfare or the hypothetical possibility of attack, that will create uncertainty.
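The attrition economics behind “overwhelm defenders with numbers” reduce to a simple expected-value calculation. The prices below are illustrative assumptions, not figures from the article; only the one-in-twenty hit rate comes from the text above.

```python
def expected_cost_per_hit(unit_cost: float, p_hit: float) -> float:
    """Expected spend per successful strike: cost of one shot over its hit probability."""
    return unit_cost / p_hit

# Assumed, illustrative prices (not from the article).
drone = expected_cost_per_hit(20_000, 1 / 20)      # cheap one-way drone, 5% success
missile = expected_cost_per_hit(1_000_000, 0.90)   # pricier precision weapon
print(f"drone ${drone:,.0f}/hit vs missile ${missile:,.0f}/hit")
```

Even at a 5% success rate, the cheap weapon can undercut the expensive one per successful strike, which is the price-performance point the article makes.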

Read More »

Gemma 4: Byte for byte, the most capable open models

At the edge, our E2B and E4B models redefine on-device utility, prioritizing multimodal capabilities, low-latency processing and seamless ecosystem integration over raw parameter count.

Powerful, accessible, open

To power the next generation of pioneering research and products, we’ve sized the Gemma 4 models specifically to run and fine-tune efficiently on hardware — from billions of Android devices worldwide, to laptop GPUs, all the way up to developer workstations and accelerators. By using these highly optimized models, you can fine-tune Gemma 4 to achieve state-of-the-art performance on your specific tasks. We’ve already seen incredible success with this approach; for instance, INSAIT created a pioneering Bulgarian-first language model (BgGPT), and we worked with Yale University on Cell2Sentence-Scale to discover new pathways for cancer therapy, among many others.

Here is what makes Gemma 4 our most capable open model family yet:

Advanced reasoning: Capable of multi-step planning and deep logic, Gemma 4 demonstrates significant improvements on math and instruction-following benchmarks that require it.

Agentic workflows: Native support for function calling, structured JSON output, and system instructions enables you to build autonomous agents that can interact with different tools and APIs and execute workflows reliably.

Code generation: Gemma 4 delivers high-quality offline code generation, turning your workstation into a local-first AI code assistant.

Vision and audio: All models natively process video and images, supporting variable resolutions and excelling at visual tasks like OCR and chart understanding. Additionally, the E2B and E4B models feature native audio input for speech recognition and understanding.

Longer context: Process long-form content seamlessly. The edge models feature a 128K context window, while the larger models offer up to 256K, allowing you to pass repositories or long documents in a single prompt.

140+ languages: Natively trained on over 140 languages, Gemma 4 helps developers build inclusive, high-performance applications for a global audience.

Versatile models for diverse hardware

We are releasing the Gemma 4 model weights in sizes tailored for specific hardware and use cases, ensuring you get frontier-class reasoning wherever you need it.

26B and 31B models: Frontier intelligence, offline on your personal computer. Optimized to provide researchers and developers with state-of-the-art reasoning on accessible hardware, our unquantized bfloat16 weights fit efficiently on a single 80GB NVIDIA H100 GPU. For local setups, quantized versions run natively on consumer GPUs to power your IDEs, coding assistants and agentic workflows. Our 26B Mixture of Experts (MoE) model focuses on latency, activating only 3.8 billion of its total parameters during inference to deliver exceptionally fast tokens per second, while our 31B dense model maximizes raw quality and provides a powerful foundation for fine-tuning.
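A Mixture of Experts model activates only a few expert subnetworks per token, which is how a model can hold 26B parameters but use only 3.8 billion at inference. The toy NumPy sketch below shows generic top-k routing; the sizes and routing details are placeholders, not Gemma 4’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2       # toy sizes, not Gemma 4's real config

gate = rng.standard_normal((d, n_experts))        # router projection
experts = rng.standard_normal((n_experts, d, d))  # one toy FFN matrix per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ gate                              # score every expert
    top = np.argsort(logits)[-top_k:]              # keep only the k best
    w = np.exp(logits[top])
    w /= w.sum()                                   # softmax over selected experts only
    return sum(wi * (experts[i] @ x) for i, wi in zip(top, w))

y = moe_forward(rng.standard_normal(d))
active_fraction = top_k / n_experts                # share of expert params used per token
print(f"output dim {y.shape[0]}, active expert fraction {active_fraction:.2f}")
```

Only the selected experts’ weights are touched per token, so compute per token scales with the active fraction rather than with total parameter count.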

Read More »

New tool on AWS makes it easier to develop quantum error correction

Constellation is available via Quantum Elements and runs on AWS, says Izhar Medalsy, co-founder and CEO at Quantum Elements. And it is designed to help quantum researchers develop and test error correction strategies. Alternatives, such as the popular Stim simulator from Google Quantum AI, don’t simulate all the potential sources of errors, says Medalsy. “Stim uses a lot of approximations, which makes it very fast,” adds Tong Shen, research scientist at Quantum Elements, who worked on Constellation. “It’s low latency. But it’s just inaccurate.” “Imagine you’re a captain of a boat, and you want to train your team to get from point A to point B,” Medalsy says. If the training simulator doesn’t account for ocean currents or wind conditions, the team won’t be able to navigate once they hit the real world. Currently, he says, Constellation has modeled computers of up to 97 qubits, and it can be used to go even higher. “We know how to make qubits work,” he says. “Now we see it as the engineering task to increase the number of qubits and reduce the noise.” And with a digital twin, researchers can experiment with error-correction techniques even before the physical computers are ready. “You can solve the problem so once the hardware is ready, you plug it in, and you’re good to go,” he says.
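The gap Medalsy describes between an idealized error model and reality is easy to see even in the simplest error-correction setting. The sketch below is a generic Monte Carlo estimate for a three-qubit repetition code under i.i.d. bit-flip noise, a textbook toy unrelated to Constellation’s internals: below threshold, the encoded (logical) error rate falls well under the physical rate.

```python
import random

def logical_error_rate(p: float, n: int = 3, trials: int = 20000) -> float:
    """Majority-vote repetition code under i.i.d. bit-flip noise (Monte Carlo)."""
    random.seed(0)                      # deterministic for reproducibility
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n))
        if flips > n // 2:              # majority flipped: decoder outputs the wrong bit
            failures += 1
    return failures / trials

p = 0.05
analytic = 3 * p**2 - 2 * p**3          # closed form for the 3-qubit code
print(f"physical {p}, logical ~{logical_error_rate(p):.4f}, analytic {analytic:.5f}")
```

Real simulators differ precisely in the noise model: an i.i.d. bit-flip channel is the kind of approximation fast tools make, while a device-accurate model (correlated errors, leakage, measurement faults) changes the answer, which is the point of a digital twin.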

Read More »


ExxonMobil begins Turrum Phase 3 drilling off Australia’s east coast

Esso Australia Pty Ltd., a subsidiary of ExxonMobil Corp. and current operator of the Gippsland basin oil and gas fields in Bass Strait offshore eastern Victoria, has started drilling the Turrum Phase 3 project in Australia. This $350-million investment will see the VALARIS 107 jack-up rig drill five new wells into Turrum and North Turrum gas fields within Production License VIC/L03 to support Australia’s east coast domestic gas market. The new wells will be drilled from Marlin B platform, about 42 km off the Gippsland coastline, southeast of Lakes Entrance in water depths of about 60 m, according to a 2025 information bulletin.
Turrum Phase 3, which builds on nearly $1 billion in recent investment across the Gippsland basin, is expected to be online before winter 2027, the company said in a post to its LinkedIn account Mar. 24. In 2025, Esso made a final investment decision to develop the Turrum Phase 3 project targeting underdeveloped gas resources. The Gippsland Basin joint venture is a 50-50 partnership between Esso Australia Resources and Woodside Energy (Bass Strait) and operated by Esso Australia.

Read More »

The Golden Rule of the oil market: Understanding global price dynamics and emerging exceptions

Mark Finley, Baker Institute, Rice University

In recent weeks, questions surrounding the oil market crisis have been framed around a core principle described as the Golden Rule of the Oil Market: it is a global market. When conditions change anywhere—positively or negatively—prices respond everywhere. That framework helps explain why gasoline prices are rising in the US despite limited direct imports from the Middle East and the US’s status as a significant net exporter of oil. It also explains why oil cargoes that Iran permits to transit the Strait of Hormuz reduce Iran’s leverage over global oil prices, and by extension over US consumers and policymakers concerned about prices at the pump. Alongside its own exports, Iran has allowed a handful of additional tankers to transit the Strait, including several tankers destined for China and LPG shipments for India. The greater the volume of oil transiting the Strait, the smaller the disruption to the global oil market and the less upward pressure on global prices. The same logic applies to US efforts to ease sanctions on Iranian and Russian oil cargoes already at sea, which are unlikely to provide meaningful relief for rising oil prices. Under the Golden Rule, those barrels—having already been produced and shipped—would have found buyers regardless of sanctions, with price discounts sufficient to offset the risk of US penalties, as has been the case for Russian oil since 2022.

Exceptions

The Golden Rule has described oil market dynamics effectively for decades. However, a small number of potential exceptions have begun to emerge. For now, those exceptions remain relatively inconsequential, though larger risks may be developing.

The non-market player

There are two ways that supply and demand can be equalized. In a global market, it is achieved by price changes. Prices rise or fall to ensure that there is

Read More »

Dallas Fed survey: War uncertainty capping firms’ ambitions

Seven out of 10 oil-and-gas executives surveyed by the Federal Reserve Bank of Dallas think the price of a barrel of West Texas Intermediate (WTI), which flirted with $100 in the last 2 weeks, will finish 2026 below $80. But with the war with Iran “wreaking havoc” in commodity markets, most firms aren’t rushing to overhaul their 2026 production plans. Fed researchers’ quarterly survey of industry players from about 130 companies in Texas and parts of Louisiana and New Mexico showed that the average WTI price forecast for year-end is around $74. That’s up significantly from the $62 outlook from 3 months ago and well below the roughly $94/bbl at which WTI was being priced during the Fed’s survey period earlier this month.
At $74, WTI would also be at a price high enough for most production to be profitable. (Source: Federal Reserve Bank of Dallas.) Only about 5% of recent Dallas Fed Energy Survey respondents think WTI prices will be above $90 at year’s end. But the spike in uncertainty from the conflict in the Middle East means most executives are being sober about their options in

Read More »

Trinidad and Tobago enlists Maire for refinery restart study

The government of Trinidad and Tobago has launched a study to evaluate the potential restart of state-owned Guaracara Refining Co. Ltd.’s (Guaracara) Pointe-a-Pierre refinery—the island nation’s only—which ceased processing activities in late 2018 amid the government’s restructuring of former operator Petroleum Co. of Trinidad and Tobago Ltd. (Petrotrin). As part of a contract award announced on Mar. 25, Maire SPA subsidiary Tecnimont SPA will conduct a rehabilitation study for the upgrading of the currently idled Guaracara refinery complex, Maire said. Tecnimont’s scope of work under the $50-million contract includes execution of a comprehensive technical and integrity assessment of the Guaracara complex’s existing units and equipment for development of a rehabilitation study that could lead to restart of the 150,000-b/d refinery, according to Maire. Alongside identifying areas of the refinery requiring necessary upgrading or refurbishment, the assessment will also evaluate: the adequacy of existing technologies against the manufacturing site’s long-term operational and performance objectives; the complex’s energy efficiency and environmental performance; and preliminary CAPEX and OPEX estimates to support the possible refurbishment and restart project. Tecnimont’s scope additionally covers engineering of advanced water intake and cooling systems, “all to be designed in accordance with the most stringent international standards,” the parent company said. To be completed in two phases, Tecnimont’s work on the study is anticipated by Maire to conclude by early 2027, after which the service provider expects to receive subsequent contracts upon project approval for front-end engineering and design (FEED), engineering, procurement, and construction (EPC), and ongoing operations and maintenance services associated with the complex’s full rehabilitation.
Alessandro Bernini, chief executive officer of Maire, commented: “This project further strengthens our geographic diversification, expanding our presence in Central America, and confirms the strategic relevance of upgrading initiatives.” He emphasized the company’s engineering expertise and technological know-how in supporting the transformation of existing assets.

Read More »

JPM Energía targets infrastructure-led development as new Vaca Muerta asset operator

JPM Energía is entering Argentina’s unconventional upstream sector through an asset-acquisition agreement with Pluspetrol. If the transaction closes as expected, JPM Energía will become a new independent operator in the Vaca Muerta shale play. The company agreed to acquire Pluspetrol’s 80% interest in Los Toldos I Sur and a 50% interest in Pampa de las Yeguas I. Gustavo Nagel, JPM president, said the acquisition is focused on operational execution, not exploration upside. “These are not exploration blocks. They are assets with infrastructure, wells and processing capacity. The value here is execution—completing wells, optimizing facilities and increasing throughput,” Nagel said. The acquired areas include gas treatment plants, oil handling infrastructure, and pipeline connections, and the development strategy will be based on reactivating existing assets rather than building new infrastructure. “Our model is not large-scale drilling from day one. The plan is phased development, starting with DUCs [drilled but uncompleted wells], facility optimization, and incremental production growth,” Nagel said. “We saw an opportunity in assets with existing infrastructure and low activity. With the right operational approach, these blocks can increase production without massive initial capital,” Nagel continued. Pluspetrol retained its pipeline capacity, so JPM would need to negotiate new transportation agreements as production ramps up, Nagel said.

Read More »

EIA: US crude inventories up 6.9 million bbl

US crude oil inventories for the week ended Mar. 20, excluding the Strategic Petroleum Reserve, increased by 6.9 million bbl from the previous week, according to data from the US Energy Information Administration (EIA). At 456.2 million bbl, US crude oil inventories are about 0.1% above the 5-year average for this time of year, the EIA report indicated.

EIA said total motor gasoline inventories decreased by 2.6 million bbl from last week and are 3% above the 5-year average for this time of year. Both finished gasoline inventories and blending components inventories decreased last week. Distillate fuel inventories increased by 3.0 million bbl last week and are about 0.4% below the 5-year average for this time of year. Propane-propylene inventories increased by 500,000 bbl from last week and are 59% above the 5-year average for this time of year, EIA said.

US crude oil refinery inputs averaged 16.6 million b/d for the week ended Mar. 20, which was 366,000 b/d more than the previous week’s average. Refineries operated at 92.9% of operable capacity. Gasoline production increased, averaging 9.7 million b/d. Distillate fuel production increased by 158,000 b/d, averaging 5.0 million b/d.

US crude oil imports averaged 6.5 million b/d, down by 730,000 b/d from the previous week. Over the last 4 weeks, crude oil imports averaged about 6.6 million b/d, 15.5% more than the same 4-week period last year. Total motor gasoline imports averaged 443,000 b/d. Distillate fuel imports averaged 155,000 b/d.

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.
It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).  People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.  Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?  In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.  Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.  And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. 
Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing. For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
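The citation-counting idea described above is essentially PageRank, and a toy version fits in a few lines. This is a minimal sketch for illustration only, with an invented four-page link graph; it is nothing like Google’s production ranking system.

```python
# Toy PageRank-style scoring: a page's score grows with the number
# (and the scores) of the pages that link to it.
# Illustrative only -- not Google's actual ranking system.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            # ...and shares the rest of its current score among its outlinks.
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# "hub" is cited by every other page, so it ends up ranked highest.
graph = {"a": ["hub"], "b": ["hub"], "c": ["hub"], "hub": []}
scores = pagerank(graph)
assert max(scores, key=scores.get) == "hub"
```

The real algorithm adds handling for dangling pages and runs over an index of the whole web, but the core intuition, that citations confer authority, is the one the article describes.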
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
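Mechanically, what Pichai describes, composing answers on top of an index, is the pattern the industry calls retrieval-augmented generation (RAG): retrieve relevant passages, then hand them to a model as grounding. A minimal sketch, where the tiny three-document index and the stub generate() function are invented stand-ins for the real web index and LLM call:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The "index" and generate() are illustrative stand-ins, not the
# actual AI Overviews pipeline.

def retrieve(index, query, k=2):
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(index,
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(query, passages):
    """Stub standing in for the LLM call: a real system would prompt
    the model with the query plus the retrieved passages."""
    return f"Answer to {query!r}, grounded in: " + " | ".join(passages)

index = [
    "Kamakura surfing is best in early autumn",
    "Tokyo day trips include Kamakura and Nikko",
    "The Dow closed higher on Tuesday",
]
answer = generate("surfing in Kamakura", retrieve(index, "surfing in Kamakura"))
```

The key design point, which the article returns to later, is that the final wording comes from the model, not verbatim from any retrieved source.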
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online.
It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. They are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, Google’s deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. It is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. “You’re always dealing in percentages.
What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.  “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”  But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?   Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.   “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.  Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. 
“The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”  Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”  “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”  He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?  A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. 
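The trigger behavior described above, searching only when the model judges fresh data would help, is a routing decision. Here is a hypothetical sketch with an invented keyword heuristic; OpenAI has not published how its model actually decides, and in practice the model itself makes the call rather than a fixed rule:

```python
# Hypothetical router deciding whether a query needs a live web search.
# The keyword heuristic is purely illustrative.

RECENCY_CUES = {"latest", "today", "tonight", "score", "price", "news"}

def needs_web_search(query, force=False):
    """Guess whether a query benefits from up-to-date information."""
    if force:  # the user manually requested a web search
        return True
    return bool(RECENCY_CUES & set(query.lower().split()))

def respond(query, force_search=False):
    """Route the query to a (stubbed) search-backed or weights-only answer."""
    if needs_web_search(query, force_search):
        return f"[web search] {query}"
    return f"[model weights] {query}"
```

Under this toy rule, "how does a West Coast offense work" stays with the model's trained knowledge, while "latest 49ers score" triggers a search, which mirrors the distinction the article draws.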
OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it. According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.
Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does. Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written.
But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed! The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain many things, but just not its own answers. It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge. The search results we see from generative AI are best understood as a waypoint rather than a destination.
What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.  Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.  “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. 
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.
These are the kinds of things that start to happen when you take the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.

We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.

Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither has responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market.
Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW of electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

Lyria 3 Pro: Create longer tracks in more

Providing new places to generate music

High-quality music generation should be accessible wherever creativity happens. Whether you are an app developer, a business or music professional, or a creator, these integrations allow you to use Lyria’s advanced musical awareness to scale your production.

Vertex AI: Lyria 3 Pro is now in public preview on Vertex AI for businesses who require on-demand audio at scale. It gives organizations the ability to scale high-fidelity production, from rapidly generating bespoke soundtracks for gaming to integrating into creative tools, music and video platforms.

Google AI Studio and the Gemini API: For developers building the next generation of creative tools, Lyria 3 provides improved musical awareness and structural coherence to offer creative flexibility. Lyria 3 Pro is now available alongside Lyria RealTime in AI Studio.

Google Vids: Vids is an AI-powered video creation app that anyone can use. With Lyria 3 and Lyria 3 Pro in Vids, you can add custom music that matches your style for everything from creative projects to marketing videos. This is rolling out to Google Workspace customers and Google AI Pro & Ultra subscribers starting this week.

Gemini app: Longer generations with Lyria 3 Pro are now available in the Gemini app, starting with paid subscribers. Lyria 3 Pro’s enhanced customization offers more space to experiment and play with longer tracks. So now, you can add more details to bring your full vision to life, or create personalized tracks for vlogs, podcasts or tutorial videos.

ProducerAI: We recently introduced ProducerAI, a collaborative music creation tool, built by musicians looking for new ways to enhance their creative process. With Lyria 3 Pro, ProducerAI offers an agentic experience designed to help artists, producers and songwriters at every level iterate on comprehensive songs. It’s available globally to free and paid subscribers.

Read More »

Why this battery company is pivoting to AI

Qichao Hu doesn’t mince words about how he sees the state of the battery industry. “Almost every Western battery company has either died or is going to die. It’s kind of the reality,” he says. Hu is the CEO of SES AI, a Massachusetts-based battery company. It once had aims of making huge amounts of advanced lithium metal batteries for major industries like electric vehicles—but now the company is placing its bets on AI materials discovery. Hu sees the pivot as an essential one. “It’s just not possible for a Western company to build a sustainable business,” he says. The company is still making some batteries, but only for smaller markets like drones rather than those that would require higher volumes, like EVs. The new focus is the company’s battery materials discovery platform—which it can either license to other battery companies or use to develop materials to sell.  Some leading US EV battery companies have folded in recent months, and others, like SES AI, are making dramatic changes in strategy. This shift in who’s building batteries and where they’re doing it could shape the future geopolitics of energy. 
The work that would eventually evolve into SES AI began at MIT, where Hu completed his graduate research. His battery work was aimed at applications in oil and gas exploration. The industry uses sensors that go deep underground, where temperatures can top 120 °C (about 250 °F). The team hoped to develop a battery that could withstand those high temperatures and last longer on a single charge.  The chosen technology was a solid polymer lithium metal battery. These cells use lithium metal for their anode and a polymer for their electrolyte (the material that ions move through in a battery cell). Together, these components can increase the energy density of a cell significantly, relative to the lithium-ion batteries that are common in personal devices and EVs today. (Lithium-ion batteries generally use a graphite material for their anode and a liquid for the electrolyte.)
That solid-state battery technology became the foundation of Solid Energy, a startup Hu founded that spun out from MIT in 2012 and raised its first private investment in 2013. The team eventually realized that underground oil exploration was a small market, so after several years of operation they began to focus on electric vehicles, which were starting to come into the mainstream. After the team tweaked the chemistry to work better at lower temperatures, the company built its first pilot facility in Massachusetts and eventually another facility in Shanghai. By 2021, the battery industry was booming, Hu recalls, and EVs were the hottest industry to be in. There was a ton of interest in next-generation battery technology from major automakers at the time, and Solid Energy started developing technology with GM, Hyundai, and Honda. Larger vehicles, like SUVs and trucks, seemed like a good fit for next-generation batteries, Hu says. Massive vehicles like the ones Americans like to drive would need lighter batteries so they could have a reasonable range without being prohibitively heavy. The company also shifted its chemistry focus, and in 2022 it announced a battery with a silicon anode rather than a lithium metal one. That shift could help make the battery easier to manufacture. Since then, growth in the EV market has slowed, at least in the US, partly because of major pullbacks in funding from the Trump administration. EV tax credits for drivers, a key piece of support pushing Americans toward electric options, ended in late 2025. With the market for large electric cars in trouble, Hu says, “now we have to look at every market.”   The AI materials discovery platform on which it’s pinning many of its hopes is called Molecular Universe. The company seeks not only to provide its software to other battery companies but also to identify new battery materials and either license them or sell them to those companies. 
The platform has already identified six new electrolyte materials, according to the company. Hu says one is an additive that could help improve the lifetime of batteries with silicon anodes.

One of the challenges with silicon anodes is that they tend to swell a lot during use, which can cause physical damage and prevent efficient charging and discharging. To address the problem, the industry typically uses a material called fluoroethylene carbonate (FEC), which can help form an elastic film on the anode so the battery can still charge effectively. That additive can degrade at high temperatures, though, producing gases that can harm a battery’s lifetime. The SES platform identified a compound that works like FEC but doesn’t release those gases. The company’s long history and deep battery knowledge could help make its platform a useful tool, Hu says. He sees the actual model as less crucial than SES’s domain expertise and data from years of making and testing batteries.  “By not actually making the physical battery, we’re actually able to scale and then generate revenue faster,” he says.  But some experts are skeptical about the near-term prospects for AI materials discovery to revive the industry. “New materials development, as much as we thought that was what people wanted (and, frankly, it should be what the cell makers want)—I don’t know that that seems to be the real linchpin of the battery industry’s progress,” says Kara Rodby, a technical principal at Volta Energy Technologies, a venture capital firm that focuses on the energy storage industry. Investors are pulling back, and a slowdown in public support is making things difficult for some parts of the battery industry, she adds: “I don’t know that the ability to discover any new material is going to unlock anything new for the battery industry at this point in time.”

Read More »

This startup wants to change how mathematicians do math

Axiom Math, a startup based in Palo Alto, California, has released a free new AI tool for mathematicians, designed to discover mathematical patterns that could unlock solutions to long-standing problems. The tool, called Axplorer, is a redesign of an existing one called PatternBoost that François Charton, now a research scientist at Axiom, co-developed in 2024 when he was at Meta. PatternBoost ran on a supercomputer; Axplorer runs on a Mac Pro. The aim is to put the power of PatternBoost, which was used to crack a hard math puzzle known as the Turán four-cycles problem, in the hands of anyone who can install Axplorer on their own computer. Last year, the US Defense Advanced Research Projects Agency set up a new initiative called expMath—short for Exponentiating Mathematics—to encourage mathematicians to develop and use AI tools. Axiom sees itself as part of that drive.
Breakthroughs in math have enormous knock-on effects across technology, says Charton. In particular, new math is crucial for advances in computer science, from building next-generation AI to improving internet security. Most of the successes with AI tools have involved finding solutions to existing problems. But finding solutions is not all that mathematicians do, says Axiom Math founder and CEO Carina Hong. Math is exploratory and experimental, she says. 
MIT Technology Review met with Charton and Hong last week for an exclusive video chat about their new tool and how AI in general could change mathematics.

Math by chatbot

In the last few months, a number of mathematicians have used LLMs, such as OpenAI’s GPT-5, to find solutions to unsolved problems, especially ones set by the 20th-century mathematician Paul Erdős, who left behind hundreds of puzzles when he died. But Charton is dismissive of those successes. “There are tons of problems that are open because nobody looked at them, and it’s easy to find a few gems you can solve,” he says. He’s set his sights on tougher challenges—“the big problems that have been very, very well studied and famous people have worked on them.” Last year, Axiom Math used another of its tools, called AxiomProver, to find solutions to four such problems in mathematics.

The Turán four-cycles problem that PatternBoost cracked is another big problem, says Charton. (The problem is an important one in graph theory, a branch of math that’s used to analyze complex networks such as social media connections, supply chains, and search engine rankings. Imagine a page covered in dots. The puzzle involves figuring out how to draw lines between as many of the dots as possible without creating loops that connect four dots in a row.) “LLMs are extremely good if what you want to do is derivative of something that has already been done,” says Charton. “This is not surprising—LLMs are pretrained on all the data that there is. But you could say that LLMs are conservative. They try to reuse things that exist.” However, there are lots of problems in math that require new ideas, insights that nobody has ever had. Sometimes those insights come from spotting patterns that hadn’t been spotted before. Such discoveries can open up whole new branches of mathematics. PatternBoost was designed to help mathematicians find new patterns. Give the tool an example and it generates others like it.
You select the ones that seem interesting and feed them back in. The tool then generates more like those, and so on.   It’s a similar idea to Google DeepMind’s AlphaEvolve, a system that uses an LLM to come up with novel solutions to a problem. AlphaEvolve keeps the best suggestions and asks the LLM to improve on them.
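The generate-select-regenerate loop described above can be sketched in a few lines. The sketch below is a deliberately simplified stand-in: the objective (counting 1s in a bitstring) and the random single-bit mutation are hypothetical placeholders, and the real tools replace the mutation step with a learned model that imitates the kept examples.

```python
import random

random.seed(0)  # deterministic for the illustration

def mutate(bits):
    """Flip one random bit of a candidate (a tuple of 0s and 1s)."""
    i = random.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

def generate_and_select(score, seed_pool, rounds=20, pool=200, keep=20):
    """Toy generate-and-select loop in the spirit of PatternBoost:
    generate candidates from the current favorites, keep the
    highest-scoring ones, and use them to seed the next round."""
    best = list(seed_pool)
    for _ in range(rounds):
        # Generate: new candidates derived from the kept examples.
        candidates = [mutate(random.choice(best)) for _ in range(pool)]
        # Select: rank by the score and keep only the most interesting.
        candidates.sort(key=score, reverse=True)
        best = candidates[:keep]
    return max(best, key=score)

# Hypothetical objective: maximize the number of 1s in a 32-bit string.
seed_pool = [tuple(random.randint(0, 1) for _ in range(32)) for _ in range(20)]
result = generate_and_select(sum, seed_pool)
print(sum(result))
```

Even with this crude mutation step, the loop steadily climbs toward better candidates, which is the core dynamic the article describes: the human (or a learned model) decides what "interesting" means, and the generator explores around it.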

Special access

Researchers have already used both AlphaEvolve and PatternBoost to discover new solutions to long-standing math problems. The trouble is that those tools run on large clusters of GPUs and are not available to most mathematicians. Mathematicians are excited about AlphaEvolve, says Charton. “But it’s closed—you need to have access to it. You have to go and ask the DeepMind guy to type in your problem for you.” And when Charton solved the Turán problem with PatternBoost, he was still at Meta. “I had literally thousands, sometimes tens of thousands, of machines I could run it on,” he says. “It ran for three weeks. It was embarrassing brute force.” Axplorer is far faster and far more efficient, according to the team at Axiom Math. Charton says it took Axplorer just 2.5 hours to match PatternBoost’s Turán result. And it runs on a single machine. Geordie Williamson, a mathematician at the University of Sydney, who worked on PatternBoost with Charton, has not yet tried Axplorer. But he is curious to see what mathematicians do with it. (Williamson still occasionally collaborates with Charton on academic projects but says he is not otherwise connected to Axiom Math.) Williamson says Axiom Math has made several improvements to PatternBoost that (in theory) make Axplorer applicable to a wider range of mathematical problems. “It remains to be seen how significant these improvements are,” he says. “We are in a strange time at the moment, where lots of companies have tools that they’d like us to use,” Williamson adds. “I would say mathematicians are somewhat overwhelmed by the possibilities. It is unclear to me what impact having another such tool will be.” Hong admits that there are a lot of AI tools being pitched at mathematicians right now. Some also require mathematicians to train their own neural networks. That’s a turnoff, says Hong, who is a mathematician herself. Instead, Axplorer will walk you through what you want to do step by step, she says.
The code for Axplorer is open source and available via GitHub. Hong hopes that students and researchers will use the tool to generate sample solutions and counterexamples to problems they’re working on, speeding up mathematical discovery. Williamson welcomes new tools and says he uses LLMs a lot. But he doesn’t think mathematicians should throw out the whiteboards just yet. “In my biased opinion, PatternBoost is a lovely idea, but it is certainly not a panacea,” he says. “I’d love us not to forget more down-to-earth approaches.”

Read More »

The Download: reawakening frozen brains, and the AI Hype Index returns

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This scientist rewarmed and studied pieces of his friend’s cryopreserved brain

L. Stephen Coles’s brain sits in a vat at a storage facility in Arizona. It has been held there at a temperature of around −146 °C for over a decade, largely undisturbed. Before he died in 2014, Coles had the brain frozen with an ambitious goal in mind: reanimation.

His friend, cryobiologist Greg Fahy, believes it could be revived one day. But other experts are less optimistic.

Still, Fahy’s research could lead to new ways to study the brain. And using cryopreservation for organ transplantation is becoming a viable reality.
Read the full story to find out what the future holds for the technology.  —Jessica Hamzelou 
The AI Hype Index

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition.

MIT Technology Review Narrated: how Pokémon Go is giving delivery robots an inch-perfect view of the world

Pokémon Go was the world’s first augmented-reality megahit. Released in 2016 by Niantic, the AR twist on the juggernaut Pokémon franchise fast became a global phenomenon. “500 million people installed that app in 60 days,” says Brian McClendon, CTO at Niantic Spatial, an AI company that Niantic spun out last year.

Now Niantic Spatial is using that vast trove of crowdsourced data to build a kind of world model—a buzzy new technology that grounds the smarts of LLMs in real environments. The firm wants to use it to help robots navigate more precisely.

—Will Douglas Heaven

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The next era of space exploration

Our footprint in the solar system is rapidly expanding. Programs to build permanent Moon bases and find life on Mars have transitioned from science fiction to active space agency missions. The scientists behind them will not only shed new light on the cosmos, but also reveal where humanity is headed.

To examine what the future holds in store, MIT Technology Review features editor Amanda Silverman will sit down today with award-winning science journalist and author Robin George Andrews for an exclusive subscriber-only Roundtable conversation about “The Next Era of Space Exploration.” Register here to join the session at 16:00 GMT / 12:00 PM ET / 9:00 AM PT.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is shutting down AI video generator Sora
The app attracted at least as much controversy as acclaim. (CNBC)
+ Closing it means saying goodbye to $1 billion from Disney. (BBC)
+ OpenAI is cutting back on side projects ahead of an expected IPO. (WSJ $)
+ But it’s focusing its efforts on building a fully automated researcher. (MIT Technology Review)

2 A judge suspects the Pentagon is illegally punishing Anthropic
She labelled the DoD’s ban “troubling.” (Bloomberg)
+ Anthropic and the Pentagon are facing off in court. (Guardian)
+ The DoD wants AI companies to train on classified data. (MIT Technology Review)

3 Meta has been ordered to pay $375 million for endangering children online
Prosecutors said the company knew it put children at risk. (Engadget)
+ Meta is offering its top talent stock options as incentives for its AI push. (CNBC)

4 Arm will sell its own computer chips for the first time
It’s aimed at data centers that run AI tasks. (NYT $)
+ Arm stock jumped 13% on the news. (CNBC)

5 Manus’s founders have been barred from leaving China following Meta’s takeover
Beijing is reviewing the $2 billion acquisition of the AI startup. (FT $)

6 Baltimore has sued xAI over Grok’s fake nude images
The chatbot allegedly violated consumer protections. (Guardian)
+ There’s a big market for pornographic deepfakes of real women. (MIT Technology Review)

7 NASA plans to send a nuclear-powered spacecraft to Mars in 2028
It’ll take a payload of Ingenuity-class helicopters to the Red Planet. (NYT $)
+ NASA also wants to put a $20 billion base on the Moon. (The Verge)
8 A company is secretly turning Zoom meetings into AI-generated podcasts
WebinarTV turns the calls into content without telling anyone. (404 Media)

9 Iranian volunteers have built their own missile warning map
It fills the gap left by Iran’s lack of a public emergency alert tool. (Wired $)
+ Here’s where OpenAI’s tech could show up in Iran. (MIT Technology Review)
10 A nonprofit is sending basic income payments to AI-impacted workers
It’s starting by giving 25-50 people $1,000 per month. (Gizmodo)

Quote of the day

“I am first and foremost a scientist. My goal is to understand nature. But doing science is, sort of, like reading the mind of God.”

—DeepMind CEO Demis Hassabis shares his approach to AI strategy with the FT.

One More Thing

Inside the hunt for the most dangerous asteroid ever

As asteroid 2024 YR4 hurtled toward Earth, astronomers determined that this massive rock posed a higher risk of impact than any object of its size in recorded history. Then, just as quickly as history was made, experts declared that the danger had passed.

This is the inside story of the network of global scientists who found, followed, planned for, and finally dismissed the most dangerous asteroid ever found—all under the tightest of timelines and with the highest of stakes. Find out how they did it.
—Robin George Andrews

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Soothe subscription fatigue with this simple cancellation tool.
+ Takashi Murakami’s reimagined Monets are pop-art magic.
+ Jump into a rabbit hole with this app that visualizes links between Wikipedia pages.
+ This playful lynx that snatched the top prize in a photo competition is a delight.

Read More »

Agentic commerce runs on truth and context

In partnership with Reltio

Imagine telling a digital agent, “Use my points and book a family trip to Italy. Keep it within budget, pick hotels we’ve liked before, and handle the details.” Instead of returning a list of links, the agent assembles an itinerary and executes the purchase. That shift, from assistance to execution, is what makes agentic AI different. It also changes the operating speed of commerce. Payment transactions already clear in milliseconds. The new acceleration is everything before the payment: discovery, comparison, decisioning, authorization, and follow-through across many systems. As humans step out of routine decisions, “good enough” data stops being good enough. In an agent-driven economy, the constraint isn’t speed; it’s trust at machine speed and scale. Automated markets already work because identity, authority, and accountability are built in. As agents transact across businesses, that same clarity is required. Master data management (MDM)—the discipline of creating a single master record—becomes the exchange layer: tracking who an agent represents, what it can do, and where responsibility sits when value moves. Markets don’t fail from automation; they fail from ambiguous ownership. MDM turns autonomous action into legitimate, scalable trust. To make agentic commerce safe and scalable, organizations will need more than better models. They will need a modern data architecture and an authoritative system of context that can instantly recognize, resolve, and distinguish entities. It is the difference between automation that scales and automation that needs constant human correction.
The agent is a new participant Digital commerce has long been built on two primary sides: buyers and suppliers/merchants. Agentic commerce adds a third participant that must be treated as a first-class entity: the agent acting on the buyer’s behalf. That sounds simple until you ask the questions every enterprise will face:
Who is the individual, across channels and devices, with enough certainty for automation?
Who is the agent, and what permissions and limits define what it can do?
Who is the merchant or supplier, and are we sure we mean the right one?
Who holds liability if the agent acts with permission, but against user intent?

The practical risk is confusion. Humans, for example, can infer that “Delta” means the airline when they are booking a flight, not the faucet company. An agent needs deterministic signals. If the system guesses wrong, it either breaks trust or forces a human confirmation step that defeats the promise of speed.

Why ‘good enough’ data breaks at machine speed

Most organizations have learned to live with imperfect data. Duplicate customer records are tolerable. Incomplete product attributes are annoying. Merchant identities can be reconciled later. Agentic workflows change that tolerance. When an agent takes action without a human checking the output, it needs data that is close to perfect, because it cannot reliably notice when data is ambiguous or wrong the way a person can. The failure modes are predictable, and they show up in places that matter most:

Product truth: If the catalog is inconsistent, an agent’s choices will look arbitrary (“the wrong shirt,” “the wrong size,” “the wrong material”), and trust collapses quickly.

Payee truth: Agentic commerce expands beyond cards to account-to-account and open-banking-connected experiences, broadening the universe of payees and the need to recognize them accurately in real time.

Identity truth: People operate in multiple contexts (work versus personal). Devices shift. A system that cannot distinguish among these contexts will either block legitimate activity or approve risky activity, both of which damage adoption.

This is why unified enterprise data and entity resolution move from nice-to-have to operationally required.
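The “Delta” example can be made concrete. Below is a minimal sketch of deterministic entity resolution against a set of master records; the records, IDs, and the resolve_merchant helper are hypothetical illustrations of the pattern, not Reltio's API:

```python
from typing import Optional

# Hypothetical master records keyed by stable IDs (illustrative only).
MASTER_RECORDS = {
    "MRCH-001": {"name": "Delta Air Lines", "category": "airline"},
    "MRCH-002": {"name": "Delta Faucet Company", "category": "home_improvement"},
}

def resolve_merchant(name: str, context_category: str) -> Optional[str]:
    """Return a master ID only when exactly one record matches both the
    name fragment and the transaction context; otherwise refuse to guess."""
    candidates = [
        mid for mid, rec in MASTER_RECORDS.items()
        if name.lower() in rec["name"].lower()
        and rec["category"] == context_category
    ]
    # Deterministic by construction: one unambiguous match or nothing.
    return candidates[0] if len(candidates) == 1 else None

print(resolve_merchant("Delta", "airline"))           # MRCH-001
print(resolve_merchant("Delta", "home_improvement"))  # MRCH-002
print(resolve_merchant("Delta", "grocery"))           # None (no guess)
```

The point of the refusal path is the last line: an agent that cannot resolve an entity unambiguously should escalate, not guess.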
The more autonomy you want, the more you must invest in modern data foundations that ensure it is safe.

Context intelligence: The missing layer

When leaders talk about agentic AI, they often focus on model capability: planning, tool use, and reasoning. Those are necessary, but they are not sufficient. Agentic commerce also requires a layer that provides authoritative context at runtime. Think of it as a real-time system of context that can answer instantly and consistently:

• Is this the right person?
• Is this the right agent, acting within the right permissions?
• Is this the right merchant or payee?
• What constraints apply right now (budget, policy, risk, loyalty rules, preferred suppliers)?

Two design principles matter. First, entity truth must be deterministic enough for automation. Large language models are probabilistic by nature. That is helpful for generating options in writing and design. It is risky for deciding where money goes, especially in B2B and finance workflows, where “probably correct” is not acceptable. Second, context must travel at the speed of interaction and remain portable across the entire connected network value chain. Mastercard’s experience optimizing payment flows is instructive: the more services you layer onto a transaction, the more you risk slowing it down. The pattern that scales pre-resolves, curates, and packages the signal so that execution is lightweight. This is also where tokenization is heading. Initiatives like Mastercard’s Agent Pay and Verifiable Intent signal a future in which consumer credentials, agent identities, permissions, and provable user intent are encoded as cryptographically secure artifacts — enabling merchants, issuers, and platforms to deterministically verify authorization and execution at machine speed.

What leaders should do in the next 12 to 24 months

Adoption will not be uniform. Early traction will often depend less on industry and more on the sophistication of an organization’s systems and data discipline. That makes the next two years a window for practical preparation. Five moves stand out:

1. Treat agents as governed identities, not features. Define how agents are onboarded, authenticated, permissioned, monitored, and retired.
2. Prioritize entity resolution where the cost of being wrong is highest. Start with payees, suppliers, employee-versus-personal identity, and high-volume product categories.
3. Build a reusable context service that every workflow and agent can call. Do not force each system to reconstruct identity and relationships from scratch.
4. Precompute and compress signals. Resolve and curate context upstream so that runtime decisioning stays fast and predictable.
5. Expand autonomy only as trust is earned. Build a governance framework to address disputes, keep humans in the loop for higher-risk actions, measure accuracy, and expand automation as outcomes prove reliable.

A tsunami effect across industries

Agentic AI will not be confined to shopping carts. It will touch procurement, travel, claims, customer service, and finance operations. It will compress decision cycles and remove manual steps, but only for organizations that can supply agents with clean identity, precise entity truth, and reliable context. The winners will treat entity truth and context as core infrastructure for automation, not as a back-office cleanup project. In commerce at machine speed, trust is not a brand attribute; it is an architectural decision encoded in identity, context, and control.

This content was produced by Reltio. It was not written by MIT Technology Review’s editorial staff.
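Treating agents as governed identities can be sketched in a few lines. Everything here (the AgentIdentity record, the authorize gate, the specific limits) is a hypothetical illustration of the pattern, not any vendor's API:

```python
# Sketch: an agent as a governed identity whose actions are checked
# against explicit permissions and limits before anything executes.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    principal: str                       # who the agent represents
    allowed_actions: set = field(default_factory=set)
    budget_limit: float = 0.0            # hypothetical per-agent cap

def authorize(agent: AgentIdentity, action: str, amount: float) -> bool:
    """Deterministic gate: the action must be permitted AND within budget."""
    return action in agent.allowed_actions and amount <= agent.budget_limit

travel_agent = AgentIdentity(
    agent_id="agt-42", principal="user-7",
    allowed_actions={"book_flight", "book_hotel"},
    budget_limit=3000.0,
)

print(authorize(travel_agent, "book_hotel", 1200.0))     # True: permitted, in budget
print(authorize(travel_agent, "book_hotel", 5000.0))     # False: over budget
print(authorize(travel_agent, "transfer_funds", 10.0))   # False: never permitted
```

Onboarding, monitoring, and retirement would wrap this same record: revoking the identity revokes every action at once.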

Read More »

The AI Hype Index: AI goes to war

AI is at war. Anthropic and the Pentagon feuded over how to weaponize Anthropic’s AI model Claude; then OpenAI swept the Pentagon off its feet with an “opportunistic and sloppy” deal. Users quit ChatGPT in droves. People marched through London in the biggest protest against AI to date. If you’re keeping score, Anthropic—the company founded to be ethical—is now turbocharging US strikes on Iran. 
On the lighter side, AI agents are now going viral online. OpenAI hired the creator of OpenClaw, a popular AI agent. Meta snapped up Moltbook, where AI agents seem to ponder their own existence and invent new religions like Crustafarianism. And on RentAHuman, bots are hiring people to deliver CBD gummies. The future isn’t AI taking your job. It’s AI becoming your boss and finding God.

Read More »

Google Research touts memory-compression breakthrough for AI processing

The last time the market witnessed a shakeup like this was China’s DeepSeek, and doubts about that one emerged quickly: developers found DeepSeek’s efficiency gains required deep architectural decisions that had to be built in from the start. TurboQuant requires no retraining or fine-tuning. You just drop it straight into existing inference pipelines, at least in theory. If it works in production systems with no retrofitting, data center operators will get tremendous performance gains on existing hardware, without having to throw more hardware at the performance problem. However, analysts urge caution before jumping to conclusions. “This is a research breakthrough, not a shipping product,” said Alex Cordovil, research director for physical infrastructure at The Dell’Oro Group. “There’s often a meaningful gap between a published paper and real-world inference workloads.” Dell’Oro also notes that efficiency gains in AI compute tend to get consumed by more demand, a phenomenon known as the Jevons paradox. “Any freed-up capacity would likely be absorbed by frontier models expanding their capabilities rather than reducing their hardware footprint.” Jim Handy, president of Objective Analysis, agrees on that second part. “Hyperscalers won’t cut their spending – they’ll just spend the same amount and get more bang for their buck,” he said. “Data centers aren’t looking to reach a certain performance level and subsequently stop spending on AI. They’re looking to out-spend each other to gain market dominance. This won’t change that.” Google plans to present a paper outlining TurboQuant at the ICLR conference in Rio de Janeiro, running from April 23 through April 27.
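TurboQuant’s actual algorithm isn’t described in this excerpt, but the “drop-in, no retraining” idea can be illustrated with generic post-training int8 quantization: weights are compressed at load time and dequantized on the fly, trading a small rounding error for a 4x memory reduction.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric round-to-nearest int8 quantization of a weight tensor."""
    scale = float(np.abs(w).max()) / 127.0 or 1.0  # guard all-zero tensors
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)  # stand-in "weights"
q, scale = quantize_int8(w)

print(f"memory: {w.nbytes} B -> {q.nbytes} B")  # 4096 B -> 1024 B
print(f"max rounding error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

No gradient step is involved anywhere, which is what makes this class of technique retrofit-friendly; the open question the analysts raise is whether accuracy holds up on real inference workloads.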

Read More »

Energy Department Authorizes Additional Exports of LNG from Elba Island Terminal, Strengthening Global Energy Supply with U.S. LNG

WASHINGTON—U.S. Secretary of Energy Chris Wright today authorized an immediate 22% increase in exports of liquefied natural gas (LNG) from the Elba Island Terminal in Chatham County, Georgia. With today’s order, Kinder Morgan subsidiary Southern LNG Company L.L.C., operator of the Elba Island LNG Terminal, is now authorized to export up to an additional 28.25 billion cubic feet per year (Bcf/yr) of natural gas as LNG to non-free trade agreement countries, strengthening global natural gas supplies with reliable U.S. LNG. Elba Island was previously authorized to export up to 130 Bcf/yr of natural gas as LNG to non-free trade agreement countries and has been exporting U.S. LNG since 2019. The project is positioned to export the additional approved volumes immediately.  “At a time when global energy supply routes face disruption, the United States remains a reliable energy partner to our allies and trading partners,” said DOE Assistant Secretary of the Hydrocarbons and Geothermal Energy Office, Kyle Haustveit. “DOE is using all available authorities to ensure American energy can reach global markets when it is needed most, supporting energy security and helping stabilize global energy supplies.”  The action comes as global oil and LNG supply routes face disruption from tensions in the Middle East and attacks carried out by Iran and its proxies, threatening the reliable flow of energy through critical maritime corridors. The Department will continue to act, using its full set of authorities, to ensure U.S. LNG remains a dependable energy source in global energy markets and a stabilizing presence in times of disruption.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter, with exports reaching all-time highs in March 2026. Since President Trump ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. 
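The headline percentage follows directly from the volumes the release quotes, as a quick check confirms:

```python
# Checking the cited 22% against the release's own figures.
prior_authorization = 130.0    # Bcf/yr previously authorized
additional = 28.25             # Bcf/yr newly authorized
pct_increase = additional / prior_authorization * 100
print(f"{pct_increase:.1f}%")  # 21.7%, which rounds to the cited 22%
```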
With recent final investment decisions for additional export capacity, U.S. LNG exports are set

Read More »

Why can’t we have nice routers anymore?

In the Volt Typhoon and Flax Typhoon attacks, the routers themselves weren’t compromised because they were foreign-made routers. Far from it! They were compromised because they were unpatched, Internet-exposed, and end-of-life. The router manufacturers were no more guilty of opening the doors to these attacks than Microsoft is for your company’s Windows 7 PCs being hacked in 2026. The one campaign that can be linked directly to flaws in the routers themselves—the Salt Typhoon assault on Cisco IOS XE software, which was running on enterprise-grade routers, specifically ASR 1000 Series, ISR 4000 Series, and Catalyst 8000 Series edge platforms—involved American-made Cisco gear, not Chinese-made routers. Guess what, though? You can still buy, use, and deploy this Cisco hardware, which is used as core routers by top American telecoms such as AT&T, Verizon, and T-Mobile. Uncle Joe wants to replace his router with a brand-new Wi-Fi 7 model? Nope, he can’t do it. Multi-billion-dollar companies decide to replace vital infrastructure routers that carry billions of messages every day with this very gear? Sure, go for it! You know, if it were me, I’d be taking a long, hard look at the actual modern enterprise networking gear that we know has been breached. Why isn’t the FCC doing this? Darned if I know. Even the FCC acknowledges that some of Cisco’s problems have nothing to do with who made the hardware and where it was built. For example, the truly awful CVE-2023-20198 vulnerability, with its CVSS score of 10, was all about a boneheaded security hole in the Cisco IOS XE Web UI, not the firmware or hardware. The FCC argues, however, that consumer routers pose unique risks because they’re deployed in millions of homes with minimal security oversight, thus making them ideal for botnet infrastructure. I can’t argue with that. But that has nothing to do with who made these devices and where.

Read More »

Amazon Middle East datacenter suffers second drone hit as Iran steps up attacks

Amazon was contacted for comment on the latest Bahrain drone incident, but said it had nothing to add beyond the statement in its current advisory. Denial of infrastructure Doing the damage is the Shahed-136, a small and unsophisticated drone designed to overwhelm defenders with numbers. If only one in twenty reaches its target, its price-performance still exceeds that of more expensive systems. When aimed at critical infrastructure such as datacenters, the effect is also psychological; the threat of an attack on its own can be enough to make it difficult for organizations to continue using an at-risk facility.  Iran’s targeting of the Bahrain datacenter is unlikely to be random. Amazon opened its ME-SOUTH-1 AWS presence in 2019, and it is still believed to be the company’s largest site in the Middle East. Earlier this week, the Islamic Revolutionary Guard Corps (IRGC) Telegram channel explicitly threatened to target at least 18 US companies operating in the region, including Microsoft, Google, Nvidia, and Apple. This follows similar threats to an even longer list of US companies made on the IRGC-affiliated Tasnim News Agency in recent weeks. That strategy doesn’t bode well for US companies that have made large investments in Middle Eastern datacenter infrastructure in recent years, drawn by the growing wealth and influence of countries in the region. This includes Amazon, which has announced plans to build a $5.3 billion datacenter in Saudi Arabia, due to become available in 2026. If this is now under threat, whether by warfare or the hypothetical possibility of attack, that will create uncertainty.

Read More »

Gemma 4: Byte for byte, the most capable open models

At the edge, our E2B and E4B models redefine on-device utility, prioritizing multimodal capabilities, low-latency processing and seamless ecosystem integration over raw parameter count.

Powerful, accessible, open

To power the next generation of pioneering research and products, we’ve sized the Gemma 4 models specifically to run and fine-tune efficiently on hardware — from billions of Android devices worldwide, to laptop GPUs, all the way up to developer workstations and accelerators. By using these highly optimized models, you can fine-tune Gemma 4 to achieve state-of-the-art performance on your specific tasks. We’ve already seen incredible success with this approach; for instance, INSAIT created a pioneering Bulgarian-first language model (BgGPT), and we worked with Yale University on Cell2Sentence-Scale to discover new pathways for cancer therapy, among many others.

Here is what makes Gemma 4 our most capable open model family yet:

Advanced reasoning: Capable of multi-step planning and deep logic, Gemma 4 demonstrates significant improvements in math and instruction-following benchmarks that require it.

Agentic workflows: Native support for function-calling, structured JSON output, and native system instructions enables you to build autonomous agents that can interact with different tools and APIs and execute workflows reliably.

Code generation: Gemma 4 supports high-quality offline code generation, turning your workstation into a local-first AI code assistant.

Vision and audio: All models natively process video and images, supporting variable resolutions, and excelling at visual tasks like OCR and chart understanding. Additionally, the E2B and E4B models feature native audio input for speech recognition and understanding.

Longer context: Process long-form content seamlessly. The edge models feature a 128K context window, while the larger models offer up to 256K, allowing you to pass repositories or long documents in a single prompt.

140+ languages: Natively trained on over 140 languages, Gemma 4 helps developers build inclusive, high-performance applications for a global audience.

Versatile models for diverse hardware

We are releasing the Gemma 4 model weights in sizes tailored for specific hardware and use cases, ensuring you get frontier-class reasoning wherever you need it:

26B and 31B models: Frontier intelligence, offline on your personal computers

Optimized to provide researchers and developers with state-of-the-art reasoning on accessible hardware, our unquantized bfloat16 weights fit efficiently on a single 80GB NVIDIA H100 GPU. For local setups, quantized versions run natively on consumer GPUs to power your IDEs, coding assistants and agentic workflows. Our 26B Mixture of Experts (MoE) model focuses on latency, activating only 3.8 billion of its total parameters during inference to deliver exceptionally fast tokens-per-second, while our 31B dense model maximizes raw quality and provides a powerful foundation for fine-tuning.
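The hardware claims above can be sanity-checked with simple arithmetic: bfloat16 stores two bytes per parameter, so the weights alone (ignoring activations and KV cache, which add more) come in well under the H100's 80 GB:

```python
# Back-of-envelope weight-memory check: 2 bytes per bfloat16 parameter,
# so N billion parameters occupy roughly 2N decimal gigabytes.
BYTES_PER_BF16 = 2

def weights_gb(params_billions: float) -> float:
    return params_billions * BYTES_PER_BF16

for name, b in [("31B dense", 31), ("26B MoE (total)", 26), ("26B MoE (active)", 3.8)]:
    print(f"{name}: {weights_gb(b):.1f} GB")
# 31B dense: 62.0 GB
# 26B MoE (total): 52.0 GB
# 26B MoE (active): 7.6 GB
```

The last figure shows why the MoE is the latency play: only about 7.6 GB of weights are touched per token even though all 52 GB must be resident.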

Read More »

New tool on AWS makes it easier to develop quantum error correction

Constellation is available via Quantum Elements and runs on AWS, says Izhar Medalsy, co-founder and CEO at Quantum Elements. And it is designed to help quantum researchers develop and test error correction strategies. Alternatives, such as the popular Stim simulator from Google Quantum AI, don’t simulate all the potential sources of errors, says Medalsy. “Stim uses a lot of approximations, which makes it very fast,” adds Tong Shen, research scientist at Quantum Elements, who worked on Constellation. “It’s low latency. But it’s just inaccurate.” “Imagine you’re a captain of a boat, and you want to train your team to get from point A to point B,” Medalsy says. If the training simulator doesn’t account for ocean currents or wind conditions, the team won’t be able to navigate once they hit the real world. Currently, he says, Constellation has modeled computers of up to 97 qubits, and it can be used to go even higher. “We know how to make qubits work,” he says. “Now we see it as the engineering task to increase the number of qubits and reduce the noise.” And with a digital twin, researchers can experiment with error-correction techniques even before the physical computers are ready. “You can solve the problem so once the hardware is ready, you plug it in, and you’re good to go,” he says.
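Constellation’s noise models aren’t detailed here, but the captain-of-a-boat analogy can be illustrated with a toy error-correction simulation (a 3-qubit bit-flip repetition code, not Constellation’s method): if the simulator assumes less noise than the real hardware has, it predicts a far rosier logical error rate than reality delivers.

```python
import random

def logical_error_rate(p_phys: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the logical error rate of a 3-qubit
    bit-flip repetition code decoded by majority vote."""
    random.seed(1)  # deterministic for reproducibility
    failures = 0
    for _ in range(trials):
        # Each qubit flips independently with probability p_phys;
        # the majority vote fails when 2 or more of the 3 flip.
        flips = sum(random.random() < p_phys for _ in range(3))
        failures += flips >= 2
    return failures / trials

print(logical_error_rate(0.01))  # optimistic noise model: ~3e-4
print(logical_error_rate(0.05))  # more realistic noise:   ~7e-3
```

A fivefold underestimate of the physical error rate produces a logical error rate that looks more than twenty times better than it really is, which is exactly the gap a higher-fidelity digital twin is meant to close.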

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE