Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Curated insights and industry updates for AI professionals.

Bitcoin:

Curated insights and industry updates for Bitcoin mining professionals.

Datacenter:

Curated insights and industry updates for datacenter professionals.

Energy:

Curated insights and industry updates for energy professionals.

Featured Articles

US Inventory Drop, OPEC Action Lift Oil Prices

Oil rose on the prospect of a de-escalation in the trade war between the world’s two largest economies and a stall in nuclear talks between the US and Iran. West Texas Intermediate futures added 1.9% to settle near $62.50 a barrel, the third gain in the past four sessions, after China signaled openness to trade negotiations with the Trump administration. Pre-conditions for the talks would include a more consistent US position and a willingness to address China’s concerns around American sanctions and Taiwan, according to a person familiar with the Chinese government’s thinking. Elsewhere, Iran said it won’t be drawn into negotiations with the US over its ability to enrich uranium, reducing the potential of looser restrictions on Iranian crude. The US also sanctioned another China-based independent “teapot” refinery for its role in purchasing Tehran’s crude, and Treasury Secretary Scott Bessent said the US would ramp up pressure on Iran. Crude has recovered from a sharp drop to near the lowest in four years brought about by an onslaught of tariffs and counter-levies between the US and its biggest trading partners. Washington on Tuesday started a probe into the need for import taxes on critical minerals, while trade differences with the European Union persist as White House officials said the bulk of the US tariffs imposed on the bloc won’t be removed. Meanwhile, Iraq plans to cut its oil exports this month as it faces growing pressure to adhere to its OPEC+ production target. The country aims to reduce shipments by 70,000 barrels a day, an official with knowledge of the matter said. Lending further support to prices, US government data released Wednesday showed inventory levels at Cushing, Oklahoma — the delivery point for West Texas Intermediate — fell by roughly 650,000 barrels to the lowest since 2008 for this…

Read More »

Intel sells off majority stake in its FPGA business

Altera will continue offering field-programmable gate array (FPGA) products across a wide range of use cases, including automotive, communications, data centers, embedded systems, industrial, and aerospace. “People were a bit surprised at Intel’s sale of the majority stake in Altera, but they shouldn’t have been. Lip-Bu indicated that shoring up Intel’s balance sheet was important,” said Jim McGregor, chief analyst with Tirias Research. The Altera sale has been in the works for a while and is a relic of Intel’s past mistakes in trying to acquire its way into AI, whether through FPGAs or other accelerators like Habana or Nervana, notes Anshel Sag, principal analyst with Moor Insights & Strategy. “Ultimately, the 50% haircut on the valuation of Altera is unfortunate, but again is a demonstration of Intel’s past mistakes. I do believe that finishing the process of spinning it out does give Intel back some capital and narrows the company’s focus,” he said. So where did it go wrong? It wasn’t with FPGAs, because AMD is making a good run of it with its Xilinx acquisition. The fault, analysts say, lies with Intel, which has a terrible track record when it comes to acquisitions. “Altera could have been a great asset to Intel, just as Xilinx has become a valuable asset to AMD. However, like most of its acquisitions, Intel did not manage Altera well,” said McGregor.

Read More »

OpenAI launches o3 and o4-mini, AI models that ‘think with images’ and use tools autonomously

OpenAI launched two groundbreaking AI models today that can reason with images and use tools independently, representing what experts call a step change in artificial intelligence capabilities. The San Francisco-based company introduced o3 and o4-mini, the latest in its “o-series” of reasoning models, which it claims are its most intelligent and capable models to date. These systems can integrate images directly into their reasoning process, search the web, run code, analyze files, and even generate images within a single task flow. “There are some models that feel like a qualitative step into the future. GPT-4 was one of those. Today is also going to be one of those days,” said Greg Brockman, OpenAI’s president, during a press conference announcing the release. “These are the first models where top scientists tell us they produce legitimately good and useful novel ideas.” How OpenAI’s new models ‘think with images’ to transform visual problem-solving: The most striking feature of these new models is their ability to “think with images” — not just see them, but manipulate and reason about them as part of their problem-solving process. “They don’t just see an image — they think with it,” OpenAI said in a statement sent to VentureBeat. “This unlocks a new class of problem-solving that blends visual and textual reasoning.” During a demonstration at the press conference, a researcher showed how o3 could analyze a physics poster from a decade-old internship, navigate its complex diagrams independently, and even identify that the final result wasn’t present in the poster itself. “It must have just read, you know, at least like 10 different papers in a few seconds for me,” Brandon McKenzie, a researcher at OpenAI working on multimodal…
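The multimodal behavior described here is exposed through OpenAI’s public chat API. Below is a minimal sketch of passing an image alongside a question, assuming the o-series models accept the same multimodal message format as OpenAI’s other vision-capable chat models; the model name, image URL, and prompt are illustrative, not from the article.

```python
# Minimal sketch: ask an o-series reasoning model a question about an image.
# Assumption: o-series models accept the standard multimodal message format;
# the model name, URL, and prompt here are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o4-mini",  # assumed model name from the announcement
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What is the final result reported on this poster?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/physics-poster.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```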

Read More »

Renewable PPA prices shrug off the tariff roller coaster — at least for now

Dive Brief: Solar power purchase agreement prices remain essentially unchanged since the end of 2024, while wind PPA prices declined slightly in spite of uncertain and even adverse policy actions coming out of the Trump administration, according to data from LevelTen Energy’s PPA marketplace. The average North American solar PPA went for $57.04 per MWh in the first quarter of 2025, up 28 cents from the end of 2024 and 9.8% since this time last year. Wind PPA prices dropped more than 5% during the first quarter, but remain 4.4% higher than last year, according to LevelTen Energy. Although an ample supply of solar projects should put downward pressure on solar prices, developers may be reluctant to tighten their margins in the face of policy uncertainty, said Zach Starsia, senior director of the energy marketplace at LevelTen. Data from the next few months could clarify which direction prices are headed, Starsia said. Dive Insight: PPA prices have remained relatively static despite — or perhaps because of — the policy turmoil in recent months, Starsia said. It’s not just the Trump administration’s on-again, off-again tariffs that stand to increase costs for solar developers, he said. Renewable energy developers, who rely heavily on the U.S. Army Corps of Engineers during federal permitting processes, have also been impacted by the Department of Government Efficiency’s cost-cutting. And talk of revamping or repealing the Inflation Reduction Act — while still seen as unlikely — could hit developers hard, Starsia said. With a glut of solar projects set to come online in many U.S. markets, long-term analyses suggest that PPA prices should decline. But uncertainty about the future of trade and energy policy in the U.S. seems to have prompted most developers to hedge their bets by maintaining their asking prices — at least for now.

Read More »

Humber carbon emitter wants government signal on Viking CCS

Power company VPI has called for clarity to progress the Viking carbon capture and storage (CCS) project and help drive the future of heavy industries in the Humber. VPI requested a signal from the UK government in its upcoming comprehensive spending review that it will be selected as an anchor emitter for the CCS project. The group owns the nearly 1.3GW Immingham thermal power plant, which provides power to the Humber’s two large oil refineries. VPI is planning to deploy a £1.5 billion carbon capture proposal, which will utilise Harbour Energy’s Viking CCS pipeline to transport carbon that will be buried in a depleted gas field in the North Sea. VPI chief executive Jorge Pikunic said: “Carbon capture and storage provides a once-in-a-generation opportunity to turn the Humber into a powerhouse of the future. If missed, it may not come again. “For the last five years, public officials have worked tirelessly with industry to set in motion the development of Viking CCS, a unique carbon capture and storage network, here in the Humber. “Proceeding with the next stage of Viking CCS now will demonstrate how a strategic, mission-driven government can successfully transition an industrial hub into a future powerhouse, in a prudent, value-for-money driven, just and meaningful way.” Viking CCS: The Viking CCS pipeline will transport CO₂ captured from the industrial cluster at Immingham out to the Viking reservoirs via the Theddlethorpe gas terminal and an existing 75-mile (120km) pipeline as part of the Lincolnshire offshore gas gathering system (LOGGS). The project forms part of the UK’s track 2 CCS projects along with Scotland’s Acorn CCS project. While the UK government has backed the track 1 projects with around £22 billion of government funding, the track 2 proposals have not received similar pledges of support. Business leaders have warned…

Read More »

TotalEnergies Agrees 15-year LNG Supply Deal with Enadom

Global energy major TotalEnergies SE signed a heads of agreement (HoA) with Energia Natural Dominicana Enadom, S.R.L. (Enadom) for the delivery of 400,000 tons of liquefied natural gas (LNG) per year. TotalEnergies said in a media release that the HoA with the joint venture between AES Dominicana and Energas in the Dominican Republic is subject to the finalization of sale and purchase agreements (SPAs). Once the SPAs are signed, the agreement will start in mid-2027, with a 15-year term, and the price will be indexed to Henry Hub. The deal enables Enadom to supply natural gas to the 470 MW combined-cycle power plant, currently under construction, which will increase the country’s electricity generation capacity, TotalEnergies said. This project contributes to the energy transition of the Dominican Republic by reducing its dependence on coal and fuel oil through the use of a less carbon-intensive energy source, natural gas, the company said. “We are pleased to have signed this agreement to answer, alongside AES and its partners, the energy needs of the Dominican Republic. This new contract underscores TotalEnergies’ leadership in the LNG sector and our commitment to supporting the island’s energy transition. It will be a natural outlet for our US LNG supply which will progressively increase”, Gregory Joffroy, Senior Vice President LNG at TotalEnergies, said. TotalEnergies said it is the world’s third-largest LNG player with a global portfolio of 40 Mt/y in 2024 thanks to its interests in liquefaction plants in all geographies. “This agreement with TotalEnergies is the result of the confidence placed in the Dominican Republic’s energy sector and, specifically, in Enadom and AES. This partnership, alongside Enadom’s, has demonstrated investment capabilities in providing natural gas to the Dominican electricity market by ensuring a reliable, competitive, and environmentally responsible energy supply. Enadom is proud to play a pivotal…

Read More »

DOI Announces ‘Significant Increase’ in Estimated Gulf OCS O&G Reserves

In a statement posted on its site recently, the U.S. Department of the Interior (DOI) announced a “significant increase in estimated oil and gas reserves in the Gulf of America Outer Continental Shelf”. The DOI highlighted in the statement that analysis from its Bureau of Ocean Energy Management (BOEM) revealed an additional 1.30 billion barrels of oil equivalent since 2021, “bringing the total reserve estimate to 7.04 billion barrels of oil equivalent”. The DOI noted in the statement that this includes 5.77 billion barrels of oil and 7.15 trillion cubic feet of natural gas, “a 22.6 percent increase in remaining recoverable reserves”. BOEM’s updated assessment evaluated over 140 oil and gas fields, identifying 18 new discoveries, and analyzing more than 37,000 reservoirs across 1,336 fields in the Gulf, the DOI said in the statement. The DOI pointed out in the statement that “this comprehensive review added 4.39 billion barrels of oil equivalent in original reserves”, and added that, “after subtracting production of 3.09 billion barrels of oil equivalent since 2020 – 2021, the net increase reflects continued opportunity and momentum in offshore development”. In the statement, U.S. Secretary of the Interior, Doug Burgum, said, “this new data confirms what we’ve known all along – America is sitting on a treasure trove of energy, and under President Trump’s leadership, we’re unlocking it”. James Kendall, BOEM Gulf of America Regional Director, highlighted in the statement that “the Gulf of America is delivering 14 percent of the nation’s oil”. “These updated estimates reaffirm the Gulf’s vital role in ensuring a reliable, affordable domestic energy supply,” he added. The DOI noted in the statement that BOEM oversees nearly 3.2 billion acres of the Outer Continental Shelf, highlighting that about 160 million acres are located in the Gulf. It stated that the region continues to…
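For readers checking the arithmetic, the quoted figures are mutually consistent; the short sketch below verifies them. The gas-to-barrels conversion factor at the end is inferred from the numbers, not taken from the DOI statement.

```python
# Consistency check of the DOI/BOEM reserve figures quoted above.
added = 4.39        # billion boe of original reserves added by the review
produced = 3.09     # billion boe produced since 2020-2021
net = added - produced
print(net)          # 1.30 billion boe, the stated net increase

total = 7.04                 # billion boe, updated remaining reserves
prior = total - net          # implied previous estimate (~5.74)
print(net / prior)           # ~0.226, matching the stated 22.6% increase

oil = 5.77                   # billion barrels of oil
gas_tcf = 7.15               # trillion cubic feet of natural gas
gas_boe = total - oil        # ~1.27 billion boe attributed to gas
# Implied conversion factor (an inference, not stated by DOI):
print(gas_tcf * 1000 / gas_boe)  # ~5,630 cubic feet per boe
```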

Read More »

FERC should order PJM to rerun last capacity auction: ratepayer advocates

Dive Brief: The Federal Energy Regulatory Commission should order the PJM Interconnection to rerun its last base capacity auction, a move that could lead to more than $5 billion in consumer refunds, according to a complaint filed Monday at the agency by ratepayer advocates from Illinois, Maryland and New Jersey. PJM’s base residual auction for the 2025/2026 delivery year, which starts June 1, produced “unjust and unreasonable” results reflecting the omission or withholding of existing capacity, nonprice barriers to new entry and a failure to mitigate supplier market power, the Illinois Attorney General’s Office, Maryland Office of People’s Counsel and New Jersey Division of Rate Counsel said. The ratepayer advocates asked FERC to review the complaint using fast-track procedures. If the commission grants the complaint after the next delivery year takes effect, FERC should order refunds, they said. Dive Insight: In PJM’s last capacity auction, held in July, total capacity costs jumped to $14.7 billion from $2.2 billion in the previous auction. The auction results sparked three complaints at FERC asking the agency to change PJM’s capacity auction rules, the ratepayer advocates said, noting the next capacity auction is set to be held in July. PJM has submitted five filings to put in place some changes the complainants are seeking and to try to reduce interconnection-related barriers for new generating resources, they said. FERC’s approval of some of the PJM proposals — such as one dealing with reliability must-run contracts, which PJM excluded from the last capacity auction — shows that the grid operator’s rules for the last auction were unjust, according to the complaint. The ratepayer advocates contend the capacity price jump was driven by flawed market rules, not a sudden lack of power supplies. “It occurred because defective market rules either ignored or allowed market participants to withhold thousands…

Read More »

LUMA installs Puerto Rico’s first smart meter

Puerto Rico utility LUMA Energy announced Friday it has begun installing smart meters across the island, marking “a significant milestone in modernizing the electrical grid, improving energy efficiency, and creating a more modern, resilient, and sustainable energy system.” The island’s first smart meter was installed at the Nemesio Canales public housing complex in San Juan, the utility said. LUMA aims to install 1.5 million Itron smart meters across Puerto Rico. The meter vendor said in December that it expects the rollout to take three years. The addition of advanced metering “is a fundamental step in our commitment to modernize Puerto Rico’s energy infrastructure,” Juan Rodríguez, LUMA senior vice president of capital programs, said in a statement. “This technology optimizes consumption, improves efficiency, and empowers our citizens to make informed energy decisions.” The meters will also reduce LUMA’s reliance on estimated billing and promote the integration of renewable energy sources, the utility said. LUMA in February announced plans to add almost 1 GW of renewable energy and more than 700 MW of energy storage in its bid to transition away from fossil fuels and strengthen the island’s fragile electric grid. Puerto Rico is aiming to eliminate coal-fired generation by 2028 and develop a 100% renewable energy grid by 2050. The island’s electric system was destroyed by Hurricane Maria in 2017, resulting in a full rebuild and the development of a plan to modernize and decarbonize the power grid. The smart meters “will optimize the performance of the transmission and distribution grid while facilitating the integration of distributed energy resources like solar systems,” LUMA said. Itron’s platform, which is designed to have a redundant communications network, will feature advanced distributed grid intelligence and allow LUMA to improve system reliability, resilience and customer service, the utility said. The platform will give customers more control over…

Read More »

SMRs, not large reactors, are ‘future of nuclear power’: ITIF

Dive Brief: Small modular reactors are more likely than larger designs to achieve long-term “price and performance parity” with conventional energy sources, such as gas, but only with substantial, ongoing support from the U.S. government, the Information Technology and Innovation Foundation said Monday. Authored by ITIF Center for Clean Energy Innovation Research Director Robin Gaster, “Small Modular Reactors: A Realist Approach to the Future of Nuclear Power” advised the U.S. Department of Energy to develop independent SMR assessment capabilities that focus on price and performance parity, or P3, while expanding support for basic and applied nuclear research and funding efforts to commercialize and scale promising technologies. With robust federal backing, SMR developers could support “an important strategic export industry” for the United States over the next two decades, the report said. Dive Insight: ITIF’s analysis pushes back on the notion that new gigawatt-scale reactors will play a major role in the United States’ future energy mix. A range of pro-nuclear voices have supported the idea of “fleet scale” large reactor deployments, including private sector developers like The Nuclear Company and DOE under former President Joe Biden. In a September update to its “Pathways to Commercial Liftoff: Advanced Nuclear” report, DOE said the U.S. would need to deploy a mix of SMRs alongside larger Generation III+ reactors, like Westinghouse Electric’s 1,117-MW AP1000, to meet expected future power demand. DOE recommended a “consortium approach” to enable serial deployments of five to 10 reactors of the same design. President Donald Trump sounded more skeptical of large reactors on the campaign trail last year, telling podcaster Joe Rogan in October that projects like the twin AP1000 reactors at Georgia Power’s Plant Vogtle Units 3 and 4 — which took more than a decade to complete and ran billions over budget — “get too big, …”

Read More »

Iraq to Cut Oil Price in Federal Budget Amid Plummeting Market

Iraq, OPEC’s second-largest producer, plans to cut its oil-price assumption in the federal budget after the market plunged. The 2025 assumption will be lower than last year’s $80 a barrel, Mudher Saleh, a financial adviser to the prime minister, said Tuesday by phone, without being more specific. The decision was delayed earlier this year by negotiations over payments to oil companies. Oil has tumbled this year, dropping sharply the past two weeks as US President Donald Trump’s sweeping tariffs upended global markets. Benchmark Brent has lost 13% in April as the trade war stokes fears of a recession that would hurt energy demand, especially in the US and China, the biggest crude consumers. In mid-2023, Iraq’s parliament approved spending plans through 2025. The 2023 budget assumed a $70 price for crude, with subsequent years to be reviewed and adjusted. Brent is currently trading below $65 a barrel in London. The lower price puts particular pressure on Middle Eastern economies that are dependent on oil. Iraq, especially, needs higher prices to support spending as it rebuilds an economy weakened by years of war. International oil companies operating in Iraq’s semi-autonomous Kurdish region were forced to halt exports following the shutdown of a pipeline to the Turkish port of Ceyhan in early 2023. They’ve since been negotiating contract terms with both federal and regional authorities in a bid to restart flows. The budget is set to be sent to parliament shortly for final approval, Saleh said.

Read More »

Algae to create 100 new jobs for oil workers in Grangemouth

Plans have been submitted to increase algae production at Grangemouth in a move that could create 100 new jobs. The firm behind the project, MiAlgae, has said that the expansion will enable it to continue creating “retraining opportunities for workers transitioning from the oil and gas industry”. The business secured £13.8 million in a Series A funding round last year to finance the scale-up. Douglas Martin, founder and managing director of MiAlgae, said: “Grangemouth has incredible potential for us as we look to the next stage of our growth. “This location offers an ideal position to support our scaling efforts and meet the increasing demand for our ‘biotech for good’ solutions, with the creation of green jobs across engineering, production, and research and development. “We are confident that this new facility will help build a greener future and bring high-quality, sustainable jobs to the local community.” Grangemouth worker woes: Workers in Grangemouth are currently facing career uncertainty as the owners of Scotland’s only oil refinery look to shut up shop. Petroineos launched the first wave of redundancies at the Grangemouth oil refinery early this year as it aims to close that part of the plant. The firm is set to cut 400 jobs at the site in the coming months in a move that unions labelled a “national disgrace”. However, a £1.5 million feasibility study published in March claimed clean energy projects at the Grangemouth refinery could create around 800 jobs over the next 15 years. The government-backed initiative…

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest). People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.  Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?  In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.  Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.  And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing. For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)  
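The citation-counting idea described above is easy to see in miniature. The toy sketch below implements the simplified PageRank-style iteration the paragraph gestures at: pages repeatedly pass a share of their score to the pages they link to. It is an illustration of the concept only, not Google’s production algorithm, and it ignores refinements such as dangling pages; the three-page “web” is hypothetical.

```python
# Toy PageRank: a page's score grows with the scores of pages linking to it.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_rank = {page: (1.0 - damping) / n for page in pages}
        # ...and passes the rest along its outgoing links.
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Hypothetical three-page web: "home" is linked to most, so it ranks highest.
web = {"home": ["about", "blog"], "about": ["home"], "blog": ["home", "about"]}
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))
```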
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)  “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”  That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?  I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.  “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. 
“And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.  “You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.  “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”  But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. 
What reason will people have to click through to the original source, if all the information they seek is right there in the search result?   Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.   “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.  Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”  Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”  “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”  He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?  A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.   According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says.  OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. 
Rather, it says, web search is mostly a means to get more current information than the data its models were trained on, which tends to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.

“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer at OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.

“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search Liz Reid.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”

When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation.
The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed!

The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.

It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”

We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers are evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can also create them. Imagine overlaying that ability with search across an array of formats and devices.
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”

“We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.”

This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.
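As a concrete illustration of the search-then-summarize pattern these products share, here is a minimal sketch of a retrieval-augmented answer loop. It is not any vendor's actual pipeline: the search_web and generate helpers are hypothetical stand-ins for a real search API and LLM call, stubbed with canned data so the sketch runs as written.

```python
# Minimal sketch of search-augmented answering, as described in the piece.
# `search_web` and `generate` are hypothetical stand-ins for a real search
# API and a real LLM client; both are stubbed so the example runs as-is.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    url: str
    snippet: str

def search_web(query: str) -> list[Result]:
    # Stub: a real implementation would call a search API here.
    return [Result("Game recap", "https://example.com/recap",
                   "Final score and highlights from last night's game.")]

def generate(prompt: str) -> str:
    # Stub: a real implementation would call a language model here.
    return f"[model answer based on a prompt of {len(prompt)} characters]"

def needs_fresh_info(query: str) -> bool:
    # Real systems let the model itself make this call; this is a crude
    # keyword heuristic purely for illustration.
    return any(w in query.lower() for w in ("latest", "today", "score", "price"))

def answer(query: str) -> str:
    if not needs_fresh_info(query):
        return generate(query)  # answer from model knowledge alone
    sources = search_web(query)
    context = "\n".join(f"[{i + 1}] {r.title} ({r.url}): {r.snippet}"
                        for i, r in enumerate(sources))
    prompt = (f"Answer using the sources below, citing them as [n].\n"
              f"Sources:\n{context}\n\nQuestion: {query}")
    return generate(prompt)

print(answer("What is the latest 49ers score?"))
```

The key design point, echoed by both Google and OpenAI above, is that the model itself decides when to search and which sources to lean on, which is exactly why its source selection can be hard to explain after the fact.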

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more. We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen. Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa. Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither have responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision (a rough sketch of what such an API-driven order might look like follows below). The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will
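To make the blueprint's API-driven ordering concrete, here is a rough, purely illustrative sketch of placing a NaaS product order against an MEF-LSO-Sonata-style REST endpoint. The URL path, offering id, characteristics, and token are all hypothetical; a real integration would follow the provider's published MEF API documentation.

```python
# Illustrative only: a NaaS product order in the general shape of MEF's
# LSO-Sonata-style ordering APIs. Endpoint, offering id, characteristics,
# and credential are hypothetical placeholders.
import json
import urllib.request

ORDER_URL = "https://naas.example.com/mefApi/sonata/productOrderingManagement/v10/productOrder"  # hypothetical

order = {
    "externalId": "demo-order-001",
    "productOrderItem": [
        {
            "id": "1",
            "action": "add",
            "product": {
                "productOffering": {"id": "campus-wifi-naas"},  # hypothetical offering
                "productCharacteristic": [
                    {"name": "siteCount", "value": "12"},
                    {"name": "serviceTier", "value": "gold"},
                ],
            },
        }
    ],
}

req = urllib.request.Request(
    ORDER_URL,
    data=json.dumps(order).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # placeholder credential
    },
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # uncomment against a live endpoint
#     print(resp.read().decode())
print(json.dumps(order, indent=2))           # show the order payload instead
```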

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW from its facilities in the second half of 2024 meant it would meet or even exceed its revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which is expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

Cohere launches Embed 4: New multimodal search model processes 200-page documents

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Enterprise retrieval augmented generation (RAG) remains integral to the current agentic AI craze. Taking advantage of the continued interest in agents, Cohere released the latest version of its embeddings model with longer context windows and more multimodality. Cohere’s Embed 4 builds on the multimodal updates of Embed 3 and adds more capabilities around unstructured data. Thanks to a 128,000-token context window, organizations can generate embeddings for documents with around 200 pages.

“Existing embedding models fail to natively understand complex multimodal business materials, leading companies to develop cumbersome data pre-processing pipelines that only slightly improve accuracy,” Cohere said in a blog post. “Embed 4 solves this problem, allowing enterprises and their employees to efficiently surface insights that are hidden within mountains of unsearchable information.”

Enterprises can deploy Embed 4 on virtual private clouds or on-premises technology stacks for added data security. Companies can generate embeddings to transform their documents or other data into numerical representations for RAG use cases. Agents can then reference these embeddings to answer prompts.

Domain-specific knowledge

Embed 4 “excels in regulated industries” like finance, healthcare and manufacturing, the company said. Cohere, which mainly focuses on enterprise AI use cases, said its models consider the security needs of regulated sectors and have a strong understanding of businesses. The company trained Embed 4 “to be robust against noisy real-world data” in that it remains accurate despite the “imperfections” of enterprise data, such as spelling mistakes and formatting issues. “It is also performant at searching over scanned documents and handwriting. These formats are common in legal paperwork, insurance invoices, and expense receipts. This capability eliminates the need for complex data preparations or pre-processing pipelines, saving businesses time and operational costs,” Cohere
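For context on how such embeddings are consumed in a RAG pipeline, here is a minimal sketch using Cohere's Python SDK. It assumes the v2 client, the "embed-v4.0" model identifier, and the `.float_` response attribute, all of which should be verified against the current SDK documentation, since names and signatures vary across versions.

```python
# Sketch: embedding documents and a query for retrieval with Cohere's
# Python SDK. Model id ("embed-v4.0"), the ClientV2 interface, and the
# `.float_` attribute reflect one SDK version; verify against current docs.
import numpy as np
import cohere

co = cohere.ClientV2(api_key="YOUR_API_KEY")  # placeholder key

docs = [
    "Insurance invoice: water damage claim, policy 8841, $5,200.",
    "Expense receipt: taxi from airport, $42.10, handwritten tip noted.",
]

doc_resp = co.embed(
    model="embed-v4.0",
    input_type="search_document",  # documents to be searched over
    texts=docs,
    embedding_types=["float"],
)
query_resp = co.embed(
    model="embed-v4.0",
    input_type="search_query",     # queries are embedded differently
    texts=["open claims for water damage"],
    embedding_types=["float"],
)

d = np.array(doc_resp.embeddings.float_)
q = np.array(query_resp.embeddings.float_[0])

# Rank documents by cosine similarity to the query and print the best hit.
scores = d @ q / (np.linalg.norm(d, axis=1) * np.linalg.norm(q))
print(docs[int(np.argmax(scores))])
```

In a full agentic RAG setup, the top-ranked chunks would be passed to a generation model as context rather than printed.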

Read More »

The Download: tracking the evolution of street drugs, and the next wave of military AI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How the federal government is tracking changes in the supply of street drugs

In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn’t know why.

Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.

There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials—techniques that could protect law enforcement officials and others who had to collect these samples. And a pilot uncovered new, critical information almost immediately. Read the full story.

—Adam Bluestein
This story is from the next edition of our print magazine. Subscribe now to read it and get a copy of the magazine when it lands!
Phase two of military AI has arrived

—James O’Donnell

Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT.

As I wrote in my new story, this experiment is the latest evidence of the Pentagon’s push to use generative AI—tools that can engage in humanlike conversation—throughout its ranks, for tasks including surveillance. This push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. Here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called “kill chain.” Read the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The FCC wants Europe to choose between US and Chinese technology
Trump official Brendan Carr has urged Western allies to pick Elon Musk’s Starlink over rival Chinese satellite firms. (FT $)
+ China may look like a less erratic choice right now. (NY Mag $)

2 Nvidia wants to build its AI supercomputers entirely in the US
It’s a decision the Trump administration has claimed credit for. (WP $)
+ That said, Nvidia hasn’t said how much gear it plans to make in America. (WSJ $)
+ Production of its latest chip has already begun in Arizona. (Bloomberg $)

3 Mark Zuckerberg defended Meta in the first day of its antitrust trial
He downplayed the company’s decision to purchase Instagram and WhatsApp. (Politico)
+ The government claims he bought the firms to stifle competition. (The Verge)
+ Zuckerberg has previously denied that his purchases had hurt competition. (NYT $)

4 OpenAI’s new models are designed to excel at coding
The three models have been optimized to follow complex instructions. (Wired $)
+ We’re still waiting for confirmation of GPT-5. (The Verge)
+ The second wave of AI coding is here. (MIT Technology Review)

5 Apple has increased its iPhone shipments by 10%
It’s part of a pre-emptive plan to mitigate tariff disruptions. (Bloomberg $)
+ The tariff chaos has played havoc with Apple stocks. (Insider $)

6 We’re learning more about the link between long covid and cognitive impairment
Studies suggest that a patient’s age when they contracted covid may be a key factor. (WSJ $)

7 Can’t be bothered to call your elderly parents? Get AI to do it 📞
How thoroughly depressing. (404 Media)

8 This video app hopes to capitalize on TikTok’s uncertain future
But unlike TikTok, Neptune allows creators to hide their likes. (TechCrunch)
9 Meet the tech bros who want to live underwater
Colonizing the sea is one of the final frontiers. (NYT $)
+ Meet the divers trying to figure out how deep humans can go. (MIT Technology Review)

10 Google’s new AI model can decipher dolphin sounds 🐬
If they’re squawking, back away. (Ars Technica)
+ The way whales communicate is closer to human language than we realized. (MIT Technology Review)
Quote of the day

“If you don’t like an ad, you scroll past it. It takes about a second.”

—Mark Hansen, Meta’s lead lawyer, makes light of the Federal Trade Commission’s assertion that users of its platforms are inundated with ads during the first day of Meta’s monopoly trial, Ars Technica reports.

The big story
Recapturing early internet whimsy with HTML

Websites weren’t always slick digital experiences. There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng

Read More »

Kore.ai teams with G42’s Inception to develop AI-powered products for the enterprise

Kore.ai announced a strategic partnership with Inception, a division of UAE-based G42, to jointly develop AI-powered products for the enterprise.

The collaboration pairs Inception’s expertise in AI product development, backed by years of R&D, with Kore.ai’s conversational and generative-AI-powered agentic platform and solutions to fast-track Inception’s product development.

Additionally, the partnership will enable the delivery of high-impact AI solutions to businesses in the UAE and beyond, and strengthen Inception’s global reach through Kore.ai’s extensive customer network and partner ecosystem.

The announcement of this collaboration comes on the back of a U.S. visit by H.H. Sheikh Tahnoon bin Zayed Al Nahyan, deputy ruler of Abu Dhabi and chairman of G42.

Kore.ai is the latest to strategically align itself with Inception – a company that also serves as G42’s AI research and product development arm. The collaboration also marks a step in advancing one of the world’s most ambitious AI agendas. The UAE aims to become one of the leading nations in AI by 2031, with Abu Dhabi investing $13 billion into a digital strategy to develop AI, cloud computing, and automation solutions.

“Our collaboration with Inception represents a significant opportunity to accelerate AI adoption across global markets in alignment with our vision to help businesses drive tangible value through AI,” said Raj Koneru, CEO of Kore.ai, in a statement. “By combining our industry-leading AI platforms/solutions and broad market reach with Inception’s deep expertise in AI models and product development, and domain-specific solutions, we deliver AI-powered solutions that will transform business operations.”

Andrew Jackson, CEO of Inception, commented: “Partnering with Kore.ai aligns perfectly with our mission to realize the adoption of AI power through the G42 Intelligence Grid and bring to market AI-powered products that drive real value. By combining our technological strengths, we will be able to accelerate the implementation of AI-powered products and drive positive outcomes for governments and enterprises in UAE and across the world.”

Kore.ai is a leading provider of advanced AI with over a decade of experience in helping large enterprises realize business value through the safe and responsible use of AI. It provides comprehensive offerings for AI work, process automation and customer service use cases coupled with an AI agent platform with no-code and pro-code tools for custom development and deployment at enterprise scale.

Kore.ai takes an agnostic approach to model, data, cloud and applications used, giving customers freedom of choice. Trusted by over 500 partners and 450 Global 2000 companies, Kore.ai helps them navigate their AI strategy. The company has a strong patent portfolio in the AI space and has been recognized as a leader and an innovator by top analysts. Headquartered in Orlando, Kore.ai has a network of offices to support customers including in India, the UK, the Middle East, Japan, South Korea, and Europe.

Inception, a G42 company, is working on tech including (In)Alpha, for investment decisions and portfolio management; (In)Climate, a next-generation meteorological platform; and (In)Energy, designed to optimize upstream and downstream energy operations at scale. Inception’s (In)Business Suite is an industry-agnostic set of products that includes procurement, human capital, workflow management, complex business processes, customer experience and a generative AI solution for executives.

Read More »

This architect wants to build cities out of lava

Arnhildur Pálmadóttir was around three years old when she saw a red sky from her living room window. A volcano was erupting about 25 miles away from where she lived on the northeastern coast of Iceland. Though it posed no immediate threat, its ominous presence seeped into her subconscious, populating her dreams with streaks of light in the night sky. Fifty years later, these “gloomy, strange dreams,” as Pálmadóttir now describes them, have led to a career as an architect with an extraordinary mission: to harness molten lava and build cities out of it. Pálmadóttir today lives in Reykjavik, where she runs her own architecture studio, S.AP Arkitektar, and the Icelandic branch of the Danish architecture company Lendager, which specializes in reusing building materials. The architect believes the lava that flows from a single eruption could yield enough building material to lay the foundations of an entire city. She has been researching this possibility for more than five years as part of a project she calls Lavaforming. Together with her son and colleague Arnar Skarphéðinsson, she has identified three potential techniques: drill straight into magma pockets and extract the lava; channel molten lava into pre-dug trenches that could form a city’s foundations; or 3D-print bricks from molten lava in a technique similar to the way objects can be printed out of molten glass.
Pálmadóttir and Skarphéðinsson first presented the concept during a talk at Reykjavik’s DesignMarch festival in 2022. This year they are producing a speculative film set in 2150, in an imaginary city called Eldborg. Their film, titled Lavaforming, follows the lives of Eldborg’s residents and looks back on how they learned to use molten lava as a building material. It will be presented at the Venice Biennale, a leading architecture festival, in May.

Buildings and construction materials like concrete and steel currently contribute a staggering 37% of the world’s annual carbon dioxide emissions. Many architects are advocating for the use of natural or preexisting materials, but mixing earth and water into a mold is one thing; tinkering with 2,000 °F lava is another.
Still, Pálmadóttir is piggybacking on research already being done in Iceland, which has 30 active volcanoes. Since 2021, eruptions have intensified in the Reykjanes Peninsula, which is close to the capital and to tourist hot spots like the Blue Lagoon. In 2024 alone, there were six volcanic eruptions in that area. This frequency has given volcanologists opportunities to study how lava behaves after a volcano erupts. “We try to follow this beast,” says Gro Birkefeldt M. Pedersen, a volcanologist at the Icelandic Meteorological Office (IMO), who has consulted with Pálmadóttir on a few occasions. “There is so much going on, and we’re just trying to catch up and be prepared.” Pálmadóttir’s concept assumes that many years from now, volcanologists will be able to forecast lava flow accurately enough for cities to plan on using it in building. They will know when and where to dig trenches so that when a volcano erupts, the lava will flow into them and solidify into either walls or foundations. Today, forecasting lava flows is a complex science that requires remote sensing technology and tremendous amounts of computational power to run simulations on supercomputers. The IMO typically runs two simulations for every new eruption—one based on data from previous eruptions, and another based on additional data acquired shortly after the eruption (from various sources like specially outfitted planes). With every event, the team accumulates more data, which makes the simulations of lava flow more accurate. Pedersen says there is much research yet to be done, but she expects “a lot of advancement” in the next 10 years or so.  To design the speculative city of Eldborg for their film, Pálmadóttir and Skarphéðinsson used 3D-modeling software similar to what Pedersen uses for her simulations. The city is primarily built on a network of trenches that were filled with lava over the course of several eruptions, while buildings are constructed out of lava bricks. “We’re going to let nature design the buildings that will pop up,” says Pálmadóttir.  The aesthetic of the city they envision will be less modernist and more fantastical—a bit “like [Gaudi’s] Sagrada Familia,” says Pálmadóttir. But the aesthetic output is not really the point; the architects’ goal is to galvanize architects today and spark an urgent discussion about the impact of climate change on our cities. She stresses the value of what can only be described as moonshot thinking. “I think it is important for architects not to be only in the present,” she told me. “Because if we are only in the present, working inside the system, we won’t change anything.” Pálmadóttir was born in 1972 in Húsavik, a town known as the whale-watching capital of Iceland. But she was more interested in space and technology and spent a lot of time flying with her father, a construction engineer who owned a small plane. She credits his job for the curiosity she developed about science and “how things were put together”—an inclination that proved useful later, when she started researching volcanoes. So was the fact that Icelanders “learn to live with volcanoes from birth.” At 21, she moved to Norway, where she spent seven years working in 3D visualization before returning to Reykjavik and enrolling in an architecture program at the Iceland University of the Arts. But things didn’t click until she moved to Barcelona for a master’s degree at the Institute for Advanced Architecture of Catalonia. 
“I remember being there and feeling, finally, like I was in the exact right place,” she says. Before, architecture had seemed like a commodity and architects like “slaves to investment companies,” she says. Now, it felt like a path with potential. She returned to Reykjavik in 2009 and worked as an architect until she founded S.AP (for “studio Arnhildur Pálmadóttir”) Arkitektar in 2018; her son started working with her in 2019 and officially joined her as an architect this year, after graduating from the Southern California Institute of Architecture.

In 2021, the pair witnessed their first eruption up close, near the Fagradalsfjall volcano on the Reykjanes Peninsula. It was there that Pálmadóttir became aware of the sheer quantity of material coursing through the planet’s veins, and the potential to divert it into channels.  Lava has already proved to be a strong, long-lasting building material—at least in its solid state. When it cools, it solidifies into volcanic rock like basalt or rhyolite. The type of rock depends on the composition of the lava, but basaltic lava—like the kind found in Iceland and Hawaii—forms one of the hardest rocks on Earth, which means that structures built from this type of lava would be durable and resilient.  For years, architects in Mexico, Iceland, and Hawaii (where lava is widely available) have built structures out of volcanic rock. But quarrying that rock is an energy-intensive process that requires heavy machines to extract, cut, and haul it, often across long distances, leaving a big carbon footprint. Harnessing lava in its molten state, however, could unlock new methods for sustainable construction. Jeffrey Karson, a professor emeritus at Syracuse University who specializes in volcanic activity and who cofounded the Syracuse University Lava Project, agrees that lava is abundant enough to warrant interest as a building material. To understand how it behaves, Karson has spent the past 15 years performing over a thousand controlled lava pours from giant furnaces. If we figure out how to build up its strength as it cools, he says, “that stuff has a lot of potential.”  In his research, Karson found that inserting metal rods into the lava flow helps reduce the kind of uneven cooling that would lead to thermal cracking—and therefore makes the material stronger (a bit like rebar in concrete). Like glass and other molten materials, lava behaves differently depending on how fast it cools. When glass or lava cools slowly, crystals start forming, strengthening the material. Replicating this process—perhaps in a kiln—could slow down the rate of cooling and let the lava become stronger. This kind of controlled cooling is “easy to do on small things like bricks,” says Karson, so “it’s not impossible to make a wall.”  Pálmadóttir is clear-eyed about the challenges before her. She knows the techniques she and Skarphéðinsson are exploring may not lead to anything tangible in their lifetimes, but they still believe that the ripple effect the projects could create in the architecture community is worth pursuing. Both Karson and Pedersen caution that more experiments are necessary to study this material’s potential. For Skarphéðinsson, that potential transcends the building industry. More than 12 years ago, Icelanders voted that the island’s natural resources, like its volcanoes and fishing waters, should be declared national property. That means any city built from lava flowing out of these volcanoes would be controlled not by deep-pocketed individuals or companies, but by the nation itself. (The referendum was considered illegal almost as soon as it was approved by voters and has since stalled.)  For Skarphéðinsson, the Lavaforming project is less about the material than about the “political implications that get brought to the surface with this material.” “That is the change I want to see in the world,” he says. “It could force us to make radical changes and be a catalyst for something”—perhaps a social megalopolis where citizens have more say in how resources are used and profits are shared more evenly. 
Cynics might dismiss the idea of harnessing lava as pure folly. But the more I spoke with Pálmadóttir, the more convinced I became. It wouldn’t be the first time in modern history that a seemingly dangerous idea (for example, drilling into scalding pockets of underground hot springs) proved revolutionary. Once entirely dependent on oil, Iceland today obtains 85% of its electricity and heat from renewable sources. “[My friends] probably think I’m pretty crazy, but they think maybe we could be clever geniuses,” she told me with a laugh. Maybe she is a little bit of both. Elissaveta M. Brandon is a regular contributor to Fast Company and Wired.

Read More »

How the federal government is tracking changes in the supply of street drugs

In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn’t know why. There was a general sense that it had something to do with changes in the supply of illicit drugs—and specifically of the synthetic opioid fentanyl, which has caused overdose deaths in the US to roughly double over the past decade, to more than 100,000 per year.  But Maryland officials were flying blind when it came to understanding these fluctuations in anything close to real time. The US Drug Enforcement Administration reported on the purity of drugs recovered in enforcement operations, but the DEA’s data offered limited detail and typically came back six to nine months after the seizures. By then, the actual drugs on the street had morphed many times over. Part of the investigative challenge was that fentanyl can be some 50 times more potent than heroin, and inhaling even a small amount can be deadly. This made conventional methods of analysis, which required handling the contents of drug packages directly, incredibly risky.  Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications. There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials—techniques that could protect law enforcement officials and others who had to collect these samples. Essentially, Sisco’s lab had fine-tuned a technology called DART (for “direct analysis in real time”) mass spectrometry—which the US Transportation Security Administration uses to test for explosives by swiping your hand—to enable the detection of even tiny traces of chemicals collected from an investigation site. This meant that nobody had to open a bag or handle unidentified powders; a usable residue sample could be obtained by simply swiping the outside of the bag.  
Sisco realized that first responders or volunteers at needle exchange sites could use these same methods to safely collect drug residue from bags, drug paraphernalia, or used test strips—which also meant they would no longer need to wait for law enforcement to seize drugs for testing. They could then safely mail the samples to NIST’s lab in Maryland and get results back in as little as 24 hours, thanks to innovations in Sisco’s lab that shaved the time to generate a complete report from 10 to 30 minutes to just one or two. This was partly enabled by algorithms that allowed them to skip the time-consuming step of separating the compounds in a sample before running an analysis. The Rapid Drug Analysis and Research (RaDAR) program launched as a pilot in October 2021 and uncovered new, critical information almost immediately. Early analysis found xylazine—a veterinary sedative that’s been associated with gruesome wounds in users—in about 80% of opioid samples they collected. 
This was a significant finding, Sisco says: “Forensic labs care about things that are illegal, not things that are not illegal but do potentially cause harm. Xylazine is not a scheduled compound, but it leads to wounds that can lead to amputation, and it makes the other drugs more dangerous.” In addition to the compounds that are known to appear in high concentrations in street drugs—xylazine, fentanyl, and the veterinary sedative medetomidine—NIST’s technology can pick out trace amounts of dozens of adulterants that swirl through the street-drug supply and can make it more dangerous, including acetaminophen, rat poison, and local anesthetics like lidocaine. What’s more, the exact chemical formulation of fentanyl on the street is always changing, and differences in molecular structure can make the drugs deadlier. So Sisco’s team has developed new methods for spotting these “analogues”—compounds that resemble known chemical structures of fentanyl and related drugs.

Ed Sisco’s lab at NIST developed a test that gives law enforcement and public health officials vital information about what substances are present in street drugs.

The RaDAR program has expanded to work with partners in public health, city and state law enforcement, forensic science, and customs agencies at about 65 sites in 14 states. Sisco’s lab processes 700 to 1,000 samples a month. About 85% come from public health organizations that focus on harm reduction (an approach to minimizing negative impacts of drug use for people who are not ready to quit). Results are shared at these collection points, which also collect survey data about the effects of the drugs. Jason Bienert, a wound-care nurse at Johns Hopkins who formerly volunteered with a nonprofit harm reduction organization in rural northern Maryland, started participating in the RaDAR program in spring 2024. “Xylazine hit like a storm here,” he says. “Everyone I took care of wanted to know what was in their drugs because they wanted to know if there was xylazine in it.” When the data started coming back, he says, “it almost became a race to see how many samples we could collect.” Bienert sent in about 14 samples weekly and created a chart on a dry-erase board, with drugs identified by the logos on their bags, sorted into columns according to the compounds found in them: heroin, fentanyl, xylazine, and everything else. “It was a super useful tool,” Bienert says. “Everyone accepted the validity of it.” As people came back to check on the results of testing, he was able to build rapport and offer additional support, including providing wound care for about 50 people a week. The breadth and depth of testing under the RaDAR program allow an eagle’s-eye view of the national street-drug landscape—and insights about drug trafficking. “We’re seeing distinct fingerprints from different states,” says Sisco. NIST’s analysis shows that fentanyl has taken over the opioid market—except for pockets in the Southwest, there is very little heroin on the streets anymore. But the fentanyl supply varies dramatically as you cross the US. “If you drill down in the states,” says Sisco, “you also see different fingerprints in different areas.” Maryland, for example, has two distinct fentanyl supplies—one with xylazine and one without. In summer 2024, RaDAR analysis detected something really unusual: the sudden appearance of an industrial-grade chemical called BTMPS, which is used to preserve plastic, in drug samples nationwide.
In the human body, BTMPS acts as a calcium channel blocker, which lowers blood pressure, and, mixed with xylazine or medetomidine, can make overdoses harder to treat. Exactly why and how BTMPS showed up in the drug supply isn’t clear, but it has continued to appear in fentanyl samples at a sustained level since it was initially detected. “This was an example of a compound we would have never thought to look for,” says Sisco.

To Sisco, Bienert, and others working on the public health front of the drug crisis, the ever-shifting chemical composition of the street-drug supply speaks to the futility of the “war on drugs.” They point out that a crackdown on heroin smuggling is what gave rise to fentanyl. And NIST’s data shows how in June 2024—the month after Pennsylvania governor Josh Shapiro signed a bill to make possession of xylazine illegal in his state—it was almost entirely replaced on the East Coast by the next veterinary drug, medetomidine.

Over the past year, for reasons that are not fully understood, drug overdose deaths nationally have been falling for the first time in decades. One theory is that xylazine has longer-lasting effects than fentanyl, which means people using drugs are taking them less often. Or it could be that more and better information about the drugs themselves is helping people make safer decisions. “It’s difficult to say the program prevents overdoses and saves lives,” says Sisco. “But it increases the likelihood of people coming in to needle exchange centers and getting more linkages to wound care, other services, other education.” Working with public health partners “has humanized this entire area for me,” he says. “There’s a lot more gray than you think—it’s not black and white. And it’s a matter of life or death for some of these people.”

Adam Bluestein writes about innovation in business, science, and technology.
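One detail in the story worth unpacking is what "skipping separation" implies: compounds must be identified directly from a mixed spectrum rather than after chromatography. The toy sketch below screens a mixture against a reference library by cosine similarity. The peak lists, bin resolution, and threshold are all invented for illustration; this is emphatically not NIST's actual RaDAR algorithm.

```python
# Toy sketch of screening a mixed (unseparated) mass spectrum against a
# reference library by cosine similarity. Peak lists and threshold are
# invented; this is not NIST's actual RaDAR analysis pipeline.
import numpy as np

BINS = 500  # coarse m/z bins, toy resolution

def spectrum(peaks: dict[int, float]) -> np.ndarray:
    """Build a unit-normalized spectrum vector from {m/z bin: intensity}."""
    s = np.zeros(BINS)
    for mz, intensity in peaks.items():
        s[mz] = intensity
    norm = np.linalg.norm(s)
    return s / norm if norm else s

# Hypothetical reference spectra for two compound classes.
library = {
    "fentanyl-like": spectrum({105: 0.4, 188: 1.0, 336: 0.6}),
    "xylazine-like": spectrum({121: 0.9, 147: 0.5, 221: 1.0}),
}

# A mixture swab: overlapping peaks from several compounds at once.
mixture = spectrum({105: 0.3, 121: 0.7, 147: 0.4, 188: 0.8, 221: 0.9, 336: 0.5})

for name, ref in library.items():
    score = float(mixture @ ref)  # cosine similarity; both vectors unit-norm
    print(f"{name}: {score:.2f} ({'flagged' if score > 0.5 else 'not flagged'})")
```

Because both reference patterns remain visible in the combined spectrum, both are flagged here without any separation step, which is the intuition behind analyzing a swab of a bag's exterior directly.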

Read More »

Phase two of military AI has arrived

Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT.

As I wrote in my new story, this experiment is the latest evidence of the Pentagon’s push to use generative AI—tools that can engage in humanlike conversation—throughout its ranks, for tasks including surveillance. Consider this phase two of the US military’s AI push; phase one began back in 2017 with older types of AI, like computer vision to analyze drone imagery. Though this newest phase began under the Biden administration, there’s fresh urgency as Elon Musk’s DOGE and Secretary of Defense Pete Hegseth push loudly for AI-fueled efficiency.

As I also write in my story, this push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. It also accelerates the US toward a world where AI is not just analyzing military data but suggesting actions—for example, generating lists of targets. Proponents say this promises greater accuracy and fewer civilian deaths, but many human rights groups argue the opposite.

With that in mind, here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called “kill chain.”
What are the limits of “human in the loop”?

Talk to as many defense-tech companies as I have and you’ll hear one phrase repeated quite often: “human in the loop.” It means that the AI is responsible for particular tasks, and humans are there to check its work. It’s meant to be a safeguard against the most dismal scenarios—AI wrongfully ordering a deadly strike, for example—but also against more trivial mishaps. Implicit in this idea is an admission that AI will make mistakes, and a promise that humans will catch them. But the complexity of AI systems, which pull from thousands of pieces of data, makes that a herculean task for humans, says Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and previously led safety audits for AI-powered systems.
“‘Human in the loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to draw conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.” As AI systems rely on more and more data, this problem scales up.

Is AI making it easier or harder to know what should be classified?

In the Cold War era of US military intelligence, information was captured through covert means, written up into reports by experts in Washington, and then stamped “Top Secret,” with access restricted to those with proper clearances. The age of big data, and now the advent of generative AI to analyze that data, is upending the old paradigm in lots of ways. One specific problem is called classification by compilation. Imagine that hundreds of unclassified documents all contain separate details of a military system. Someone who managed to piece those together could reveal important information that on its own would be classified. For years, it was reasonable to assume that no human could connect the dots, but this is exactly the sort of thing that large language models excel at.

With the mountain of data growing each day, and then AI constantly creating new analyses, “I don’t think anyone’s come up with great answers for what the appropriate classification of all these products should be,” says Chris Mouton, a senior engineer for RAND, who recently tested how well suited generative AI is for intelligence and analysis. Underclassifying is a US security concern, but lawmakers have also criticized the Pentagon for overclassifying information. The defense giant Palantir is positioning itself to help, by offering its AI tools to determine whether a piece of data should be classified or not. It’s also working with Microsoft on AI models that would train on classified data.

How high up the decision chain should AI go?

Zooming out for a moment, it’s worth noting that the US military’s adoption of AI has in many ways followed consumer patterns. Back in 2017, when apps on our phones were getting good at recognizing our friends in photos, the Pentagon launched its own computer vision effort, called Project Maven, to analyze drone footage and identify targets. Now, as large language models enter our work and personal lives through interfaces such as ChatGPT, the Pentagon is tapping some of these models to analyze surveillance.

So what’s next? For consumers, it’s agentic AI, or models that can not just converse with you and analyze information but go out onto the internet and perform actions on your behalf. It’s also personalized AI, or models that learn from your private data to be more helpful.

All signs point to the prospect that military AI models will follow this trajectory as well. A report published in March by Georgetown’s Center for Security and Emerging Technology found a surge in military adoption of AI to assist in decision-making. “Military commanders are interested in AI’s potential to improve decision-making, especially at the operational level of war,” the authors wrote. In October, the Biden administration released its national security memorandum on AI, which provided some safeguards for these scenarios. That memo hasn’t been formally repealed by the Trump administration, but President Trump has indicated that the race for competitive AI in the US needs more innovation and less oversight. Regardless, it’s clear that AI is quickly moving up the chain, not just to handle administrative grunt work but to assist in the most high-stakes, time-sensitive decisions.

I’ll be following these three questions closely. If you have information on how the Pentagon might be handling these questions, please reach out via Signal at jamesodonnell.22.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Read More »

OpenAI launches o3 and o4-mini, AI models that ‘think with images’ and use tools autonomously

OpenAI launched two groundbreaking AI models today that can reason with images and use tools independently, representing what experts call a step change in artificial intelligence capabilities. The San Francisco-based company introduced o3 and o4-mini, the latest in its “o-series” of reasoning models, which it claims are its most intelligent and capable models to date. These systems can integrate images directly into their reasoning process, search the web, run code, analyze files, and even generate images within a single task flow.

“There are some models that feel like a qualitative step into the future. GPT-4 was one of those. Today is also going to be one of those days,” said Greg Brockman, OpenAI’s president, during a press conference announcing the release. “These are the first models where top scientists tell us they produce legitimately good and useful novel ideas.”

How OpenAI’s new models “think with images” to transform visual problem-solving

The most striking feature of these new models is their ability to “think with images” — not just see them, but manipulate and reason about them as part of their problem-solving process. “They don’t just see an image — they think with it,” OpenAI said in a statement sent to VentureBeat. “This unlocks a new class of problem-solving that blends visual and textual reasoning.” During a demonstration at the press conference, a researcher showed how o3 could analyze a physics poster from a decade-old internship, navigate its complex diagrams independently, and even identify that the final result wasn’t present in the poster itself. “It must have just read, you know, at least like 10 different papers in a few seconds for me,” Brandon McKenzie, a researcher at OpenAI working on multimodal
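
For readers who want to try the image-reasoning capability themselves, here is a minimal sketch using the OpenAI Python SDK’s standard multimodal message format. The “o3” model name is taken from the announcement; whether it is enabled for a given account is an assumption, and the image URL is a hypothetical placeholder.

# A minimal sketch: send text plus an image to a reasoning model via the
# OpenAI Python SDK. Model availability is an assumption; the URL is a
# placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",  # model name from the announcement; availability may vary
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What final result does this physics poster report?",
                },
                {
                    "type": "image_url",
                    # Hypothetical placeholder; any fetchable image URL works.
                    "image_url": {"url": "https://example.com/poster.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)

This sketch covers only the image-input path; the autonomous tool use described above (web search, code execution) is a separate capability of the models.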

Read More »

Renewable PPA prices shrug off the tariff roller coaster — at least for now

Dive Brief:

Solar power purchase agreement prices remain essentially unchanged since the end of 2024, while wind PPA prices declined slightly in spite of uncertain and even adverse policy actions coming out of the Trump administration, according to data from LevelTen Energy’s PPA marketplace. The average North American solar PPA went for $57.04 per MWh in the first quarter of 2025, up 28 cents from the end of 2024 and 9.8% since this time last year. Wind PPA prices dropped more than 5% during the first quarter, but remain 4.4% higher than last year, according to LevelTen Energy. Although an ample supply of solar projects should put downward pressure on solar prices, developers may be reluctant to tighten their margins in the face of policy uncertainty, said Zach Starsia, senior director of the energy marketplace at LevelTen. Data from the next few months could clarify which direction prices are headed, Starsia said.

Dive Insight:

PPA prices have remained relatively static despite — or perhaps because of — the policy turmoil in recent months, Starsia said. It’s not just the Trump administration’s on-again, off-again tariffs that stand to increase costs for solar developers, he said. Renewable energy developers, who rely heavily on the U.S. Army Corps of Engineers during federal permitting processes, have also been impacted by the Department of Government Efficiency’s cost-cutting. And talk of revamping or repealing the Inflation Reduction Act — while still seen as unlikely — could hit developers hard, Starsia said. With a glut of solar projects set to come online in many U.S. markets, long-term analyses suggest that PPA prices should decline. But uncertainty about the future of trade and energy policy in the U.S. seems to have prompted most developers to hedge their bets by maintaining their asking prices — at least for now.
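
As a quick arithmetic check on the solar figures quoted above, the earlier price levels implied by the reported deltas can be backed out (this is a sanity check on the quoted numbers, not LevelTen data):

# Back out implied earlier solar PPA prices from the reported Q1 2025
# average; simple arithmetic on the figures quoted in the brief above.
q1_2025 = 57.04            # $/MWh, North American solar PPA average
end_2024 = q1_2025 - 0.28  # "up 28 cents from the end of 2024"
q1_2024 = q1_2025 / 1.098  # "up 9.8% since this time last year"

print(f"Implied end-2024 average: ${end_2024:.2f}/MWh")  # $56.76
print(f"Implied Q1-2024 average:  ${q1_2024:.2f}/MWh")   # ~$51.95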

Read More »

Humber carbon emitter wants government signal on Viking CCS

Power company VPI has called for clarity to progress the Viking carbon capture and storage (CCS) project and help drive the future of heavy industries in the Humber. VPI requested a signal from the UK government in its upcoming comprehensive spending review that it will be selected as an anchor emitter for the CCS project. The group owns the nearly 1.3GW Immingham thermal power plant, which provides power to the Humber’s two large oil refineries. VPI is planning a £1.5 billion carbon capture project, which will utilise Harbour Energy’s Viking CCS pipeline to transport carbon that will be buried in a depleted gas field in the North Sea.

VPI chief executive Jorge Pikunic said: “Carbon capture and storage provides a once-in-a-generation opportunity to turn the Humber into a powerhouse of the future. If missed, it may not come again.

“For the last five years, public officials have worked tirelessly with industry to set in motion the development of Viking CCS, a unique carbon capture and storage network, here in the Humber.

“Proceeding with the next stage of Viking CCS now will demonstrate how a strategic, mission-driven government can successfully transition an industrial hub into a future powerhouse, in a prudent, value-for-money driven, just and meaningful way.”

Viking CCS

The Viking CCS pipeline will transport CO₂ captured from the industrial cluster at Immingham out to the Viking reservoirs via the Theddlethorpe gas terminal and an existing 75-mile (120km) pipeline that forms part of the Lincolnshire offshore gas gathering system (LOGGS). The project is one of the UK’s track 2 CCS projects, along with Scotland’s Acorn CCS project. While the UK government has backed the track 1 projects with around £22 billion of government funding, the track 2 proposals have not received similar pledges of support. Business leaders have warned

Read More »

TotalEnergies Agrees 15-year LNG Supply Deal with Enadom

Global energy major TotalEnergies SE signed a heads of agreement (HoA) with Energia Natural Dominicana Enadom, S.R.L. (Enadom) for the delivery of 400,000 tons of liquefied natural gas (LNG) per year. TotalEnergies said in a media release that the HoA with the joint venture between AES Dominicana and Energas in the Dominican Republic is subject to the finalization of sale and purchase agreements (SPAs). Once the SPAs are signed, the agreement will start in mid-2027, with a 15-year term, and the price will be indexed to Henry Hub. The deal enables Enadom to supply natural gas to the 470 MW combined-cycle power plant, currently under construction, which will increase the country’s electricity generation capacity, TotalEnergies said. This project contributes to the energy transition of the Dominican Republic by reducing its dependence on coal and fuel oil through the use of a less carbon-intensive energy source, natural gas, the company said. “We are pleased to have signed this agreement to answer, alongside AES and its partners, the energy needs of the Dominican Republic. This new contract underscores TotalEnergies’ leadership in the LNG sector and our commitment to supporting the island’s energy transition. It will be a natural outlet for our US LNG supply which will progressively increase”, Gregory Joffroy, Senior Vice President LNG at TotalEnergies, said. TotalEnergies said it is the world’s third largest LNG player with a global portfolio of 40 Mt/y in 2024 thanks to its interests in liquefaction plants in all geographies. “This agreement with TotalEnergies is the result of the confidence placed in the Dominican Republic’s energy sector and, specifically, in Enadom and AES. This partnership, alongside Enadom’s, has demonstrated investment capabilities in providing natural gas to the Dominican electricity market by ensuring a reliable, competitive, and environmentally responsible energy supply. Enadom is proud to play a pivotal

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE