Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

Offshore rigs up 4 in US Gulf

Offshore rigs in the Gulf of Mexico increased by 4 this week, bringing the total US offshore rig count to 21, officials at Baker Hughes reported Friday. Twelve rigs were operating in the US Gulf of Mexico for the week ended Oct. 24. Total US drilling increased slightly, with 550 rotary rigs working this week, up 2 from last week but down 35 from 585 a year ago, officials said. Activity in US inland waters decreased by 1 unit to 2, while the US land rig count dipped by 1 to 527. There were 199 rotary rigs drilling in Canada this week, up 1 from last week and down 17 from the 216 units that were working during the same period a year ago. The number of rigs drilling for oil in the US increased by 2 to 420. There were 121 US rotary rigs drilling for natural gas, unchanged from last week, and 9 rigs were unclassified, also unchanged from last week. US directional drilling increased by 2 to 53 units. Horizontal drilling declined by 1 to 485. Drilling increased in Louisiana by 3 rigs to 40, and Wyoming’s rig count was up 2 to 14. Texas and Colorado each dropped a single unit to end the week with 236 and 14 rigs working, respectively.
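The rig-count figures above can be cross-checked: the location and target breakdowns should each sum to the 550-rig US total. A minimal sketch, using only the numbers quoted in the article:

```python
# Reconcile the Baker Hughes rig counts quoted above.
# All figures come from the article; each breakdown should sum to 550.

total_us = 550

# By location: offshore (21) + inland waters (2) + land (527)
by_location = 21 + 2 + 527

# By drilling target: oil (420) + natural gas (121) + unclassified (9)
by_target = 420 + 121 + 9

assert by_location == total_us
assert by_target == total_us

# Year-on-year change: down 35 from 585 a year ago
assert 585 - 35 == total_us
```

Both breakdowns reconcile exactly, which is a quick sanity check on the reported weekly deltas.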

Read More »

New sanctions on Russia rally crude prices

Oil, fundamental analysis

Crude prices were already poised for a technically driven rebound, but across-the-board inventory draws and new sanctions placed on Russian energy entities led to a $5.00+ rally this week. The US grade started the week as low as $56.35/bbl but pushed as high as $62.60/bbl by Friday. Brent followed a similar pattern, hitting its low of $60.35/bbl on Monday and its weekly high of $66.80 on Friday. Both grades settled much higher week-on-week with gains exceeding $5.00/bbl. The WTI/Brent spread has widened to ($4.40). Political risk premium entered oil markets again this week in the form of new US sanctions on Russia’s two largest oil companies, Rosneft and Lukoil, which represent about 50% of the country’s exports. The sanctions will preclude both companies from doing business with US banks and other financial institutions. The EU had imposed a new sanctions package on the two last week, along with some Chinese refiners. From the start of the Russia/Ukraine war, Russia’s crude exports of about 3.0 million b/d have been the target of sanctions. Yet, to date, most sanctions have proved ineffective or have been circumvented. However, oil markets still react bullishly to such announcements. But this time around, India’s largest refiner, Reliance, has agreed to halt the purchases of oil from Rosneft that were taking place under a long-term agreement, which will impact physical sales of Urals. China remains Russia’s No. 1 importer of oil. China is now also the No. 1 purchaser of Canadian bitumen, taking up to 70% of the 3.5 million b/d of oil sands production being delivered to British Columbia ports via the expanded Trans Mountain pipeline. The Energy Information Administration (EIA)’s Weekly Petroleum Status Report (still released despite the government shutdown) indicated that commercial crude oil and refined product inventories for last
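The weekly price moves quoted above can be verified arithmetically. A brief sketch (the ($4.40) spread refers to settlements, so the Friday highs only approximate it):

```python
# Check the weekly price arithmetic quoted above ($/bbl, from the article).
wti_low, wti_high = 56.35, 62.60
brent_low, brent_high = 60.35, 66.80

# Intraweek ranges for both grades exceeded $5/bbl
wti_range = round(wti_high - wti_low, 2)        # 6.25
brent_range = round(brent_high - brent_low, 2)  # 6.45
assert wti_range > 5.00 and brent_range > 5.00

# The quoted ($4.40) spread means WTI settled $4.40 below Brent;
# the Friday highs alone imply a similar gap of -$4.20
assert round(wti_high - brent_high, 2) == -4.20
```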

Read More »

Brazil’s ANP awards 5 offshore presalt blocks in latest auction

Five companies, including Petrobras and Equinor, came away with blocks in a recent bid round offshore Brazil. Winners in Brazil’s National Agency of Petroleum, Natural Gas and Biofuels (ANP) 3rd Cycle bidding round for pre-salt blocks were Petrobras, Equinor, Karoon Energy, and a consortium of CNOOC and Sinopec. The cycle had seven pre-salt blocks for sale: Esmeralda and Ametista in the Santos basin; and Citrino, Itaimbezinho, Ônix, Larimar, and Jaspe in the Campos basin. Fifteen companies were eligible to submit bids: Petrobras, 3R Petroleum, BP Energy, Chevron, CNOOC, Ecopetrol, Equinor, Karoon, Petrogal, Petronas, Prio, QatarEnergy, Shell, Sinopec, and TotalEnergies. Two of the seven blocks up for bids received no offers, Reuters reported Oct. 22, noting ANP received offers from only eight of the 15 eligible companies. Petrobras acquired the Jaspe block in partnership with Equinor Brasil Energia Ltda. Petrobras will serve as operator with a 60% stake, with Equinor holding the remaining 40%. Petrobras also acquired the Citrino block with 100% interest. Equinor was awarded the Itaimbezinho block with a 100% stake. Karoon was awarded 100% interest in the Esmeralda block. A consortium of CNOOC and Sinopec was awarded the Ametista block. CNOOC will serve as operator with 70% interest.

Read More »

Glenfarne, Tokyo Gas sign LOI for Alaska LNG offtake

Glenfarne Alaska LNG LLC has signed a letter of intent (LOI) with Tokyo Gas Co. Ltd. for the offtake of 1 million tonnes/year (tpy) of LNG from the Alaska LNG project. The 20-million tpy project consists of a 42-in. OD pipeline to transport natural gas from Alaska’s North Slope to meet Alaska’s domestic needs and to produce LNG for export, Glenfarne said in a release Oct. 24. Since March 2025, Glenfarne has signed preliminary offtake agreements with LNG buyers in Japan, Korea, Taiwan, and Thailand that include JERA, POSCO, CPC, and PTT, totaling 11 million tpy of the 16 million tpy Glenfarne expects to contract to reach a financial close for the project. Worley is completing the final engineering and cost validation for the project’s 807-mile pipeline. Phase one of the project includes the domestic pipeline to deliver natural gas about 765 miles from the North Slope to the Anchorage region. Phase two would add the LNG terminal and related infrastructure to enable export capability. The State

Read More »

Vantage Data Centers Pours $15B Into Wisconsin AI Campus as It Builds Global Giga-Scale Footprint

Expanding in Ohio: Financing Growth Through Green Capital

In June 2025, Vantage secured $5 billion in green loan capacity, including $2.25 billion to fully fund its New Albany, Ohio (OH1) campus and expand its existing borrowing base. The 192 MW development will comprise three 64 MW buildings, with first delivery expected in December 2025 and phased completion through 2028. The OH1 campus is designed to come online as Vantage’s larger megasites ramp up, providing early capacity and regional proximity to major cloud and AI customers in the Columbus–New Albany corridor. The site also offers logistical and workforce advantages within one of the fastest-growing data center regions in the U.S.

Beyond the U.S. – Vantage Expands Its Global Footprint

Moving North: Reinforcing Canada’s Renewable Advantage

In February 2025, Vantage announced a C$500 million investment to complete QC24, the fourth and final building at its Québec City campus, adding 32 MW of capacity by 2027. The project strengthens Vantage’s Montreal–Québec platform and reinforces its renewable-heavy power profile, leveraging abundant hydropower to serve sustainability-driven customers.

APAC Expansion: Strategic Scale in Southeast Asia

In September 2025, Vantage unveiled a $1.6 billion APAC expansion, led by existing investors GIC (Singapore’s sovereign wealth fund) and ADIA (Abu Dhabi Investment Authority). The investment includes the acquisition of Yondr’s Johor, Malaysia campus at Sedenak Tech Park. Currently delivering 72.5 MW, the Johor campus is planned to scale to 300 MW at full build-out, positioning it within one of Southeast Asia’s most active AI and cloud growth corridors. Analysts note that the location’s connectivity to Singapore’s hyperscale market and favorable development economics give Vantage a strong competitive foothold across the region.

Italy: Expanding European Presence Under National Priority Status

Vantage is also adding a second Italian campus alongside its existing Milan site, totaling 32 MW across two facilities. Phase

Read More »

How CoreWeave and Poolside Are Teaming Up in West Texas to Build the Next Generation of AI Data Centers

In the evolving landscape of artificial-intelligence infrastructure, a singular truth is emerging: access to cutting-edge silicon and massive GPU clusters is no longer enough by itself. For companies chasing the frontier of multi-trillion-parameter model training and agentic AI deployment, the bottleneck increasingly lies not just in compute, but in the seamless integration of compute, power, and data center scale. The latest chapter in this story is the collaboration between CoreWeave and Poolside, culminating in the launch of Project Horizon, a 2-gigawatt AI-campus build in West Texas.

Setting the Stage: Who’s Involved, and Why It Matters

CoreWeave (NASDAQ: CRWV) has positioned itself as “The Essential Cloud for AI™” — a company founded in 2017, publicly listed in March 2025, and aggressively building out its footprint of ultra-high-performance infrastructure. One of its strategic moves: in July 2025, CoreWeave struck a definitive agreement to acquire Core Scientific (NASDAQ: CORZ) in an all-stock transaction. Through that deal, CoreWeave gains control of approximately 1.3 GW of gross power across Core Scientific’s nationwide data center footprint, plus more than 1 GW of expansion potential. That acquisition underlines a broader trend: AI-specialist clouds are no longer renting space and power; they’re working to own or tightly control it. Poolside, founded in 2023, is a foundation-model company with an ambitious mission: building artificial general intelligence (AGI) and deploying enterprise-scale agents. According to Poolside’s blog: “When people ask what it takes to build frontier AI … the focus is usually on the model … but that’s only half the story. The other half is infrastructure. If you don’t control your infrastructure, you don’t control your destiny—and you don’t have a shot at the frontier.” Simply put: if you’re chasing multi-trillion-parameter models, you need both the compute horsepower and the power infrastructure, and ideally, tight vertical integration. Together, the

Read More »


TotalEnergies Sees Debt Pile Falling Further

TotalEnergies SE said debt came down in the third quarter and will likely fall further by the end of the year as more asset sales complete. Net debt receded to $24.6 billion at the end of September from $26 billion at the end of June, the French energy major said in a statement. It also reported quarterly earnings that met analyst estimates. Total’s shares have trailed those of US and UK peers for much of the year as debt ballooned. Investor concerns about borrowings prompted the company to reduce its quarterly share buybacks last month, and it signaled repurchases may fall further next year if oil prices continue to weaken. The stock dropped as much as 3.2% on Thursday, potentially reflecting plans by the French opposition National Rally to make buybacks more expensive. The lower house — where no party or coalition has an outright majority — has adopted an amendment proposed by the far-right group, which would raise a levy on repurchases to 33%. “This has something to do with the underperformance” of Total today, said Ahmed Ben Salem, an analyst at Oddo BHF. It’s unclear whether the amendment will remain the same in the final budget bill, he said. The shares traded down 1.7% at €53.22 as of 3:53 p.m. in Paris. There’s “big fiscal creativity” in parliament, Chief Executive Officer Patrick Pouyanne said on a conference call. “There is a huge debate making a lot of noise, but I trust that at the end of the day we land on a reasonable” path.

Profit Falls

Total posted a 2.3% decline in quarterly adjusted net income, citing lower oil and gas prices, while hydrocarbon production increased. Gearing — a measure of indebtedness — eased to 17.3% at the end of the period and is seen as low as 15% at
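The gearing figure ties back to the net debt number. A hedged sketch, assuming gearing here means net debt divided by total capital (net debt plus shareholders' equity), the definition TotalEnergies typically uses; the implied equity figure is derived, not stated in the article:

```python
# Back out TotalEnergies' implied capital base from the quoted figures.
# Assumption: gearing = net debt / (net debt + equity).

net_debt = 24.6   # $bn, end of September (from the article)
gearing = 0.173   # 17.3% at end of Q3 (from the article)

capital = net_debt / gearing       # implied net debt + equity, ~$142bn
implied_equity = capital - net_debt

assert round(capital, 1) == 142.2
assert round(implied_equity, 1) == 117.6
```

On this definition, falling to 15% gearing at constant capital would imply net debt near $21 billion, consistent with the article's expectation of further declines.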

Read More »

USA Energy Sec Says USA Is Ready to Sell More Oil, Gas to China

Energy Secretary Chris Wright said the US is prepared to sell more oil and natural gas to China if Beijing cuts back on purchases from Russia. “There’s so much space for mutually beneficial deals between the US and China,” Wright said Thursday during a Bloomberg Television interview, noting that the US is the world’s largest oil and gas exporter, while China is the biggest importer. The energy secretary plans to travel to Asia within weeks, or possibly sooner, following President Donald Trump’s trip to the continent this week. During his trip, Trump said he reached deals with Chinese President Xi Jinping and South Korean President Lee Jae Myung to buy more US oil and gas. Trump also cited a “very large scale” transaction involving Alaskan oil and gas in a post on the social media site Truth Social but didn’t provide more details. “There is lots of room from the United States to grow our role in supplying natural gas, oil, and frankly nuclear technology to South Korea,” Wright said in the interview.

Read More »

DTE inks first data center deal to grow electric load 25%

8.4 GW data center pipeline: DTE Energy has signed agreements to serve a 1.4 GW hyperscaler and has line of sight to another 7 GW of potential large loads, officials said.

$30 billion investment pipeline: DTE plans to invest $30 billion in generation, distribution and other infrastructure across the 2026-2030 timeframe.

12 GW of new generation: Capacity DTE expects to add from 2026 to 2032, including batteries, renewables and gas.

DTE Energy has signed a 1.4 GW agreement to serve a hyperscale data center and sees “transformational growth” ahead in a project pipeline that could represent up to an additional 7 GW of load. It is the utility’s first hyperscaler agreement at a time when data centers are rapidly expanding around the United States. Large loads, including AI data centers, could ultimately add 20% to U.S. utilities’ peak demand, most within the next decade, Wood Mackenzie said in a September report. DTE serves about 2.3 million customers in southeast Michigan, including Detroit. “This is an exciting milestone,” DTE President and CEO Joi Harris said in a Thursday call with analysts. “Aside from the 1.4 GW of new load, we are still in late-stage negotiations with an additional 3 GW of data center load providing potential further upside to our capital plan as we advance these negotiations. … And we have a pipeline of an additional 3-4 GW behind that.” The data center contract of 1.4 GW increases DTE’s electric load by 25%, officials said. “We also expect longer-term growth opportunities through the expansion of these initial hyperscaler projects,” Harris said. The generation investment needed to support the additional load “could very well come into the back end of our five-year plan, providing incremental capital investment.” The utility has added about $6 billion to its five-year plan and now expects to invest $30 billion in
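The "25% load growth" figure implies a baseline the article never states directly. A quick derivation from the quoted numbers:

```python
# What the "25% load growth" claim implies about DTE's existing load.
# The ~5.6 GW baseline is derived here, not stated in the article.

new_load_gw = 1.4        # signed hyperscaler agreement
growth_fraction = 0.25   # "increases DTE's electric load by 25%"

implied_existing_load_gw = new_load_gw / growth_fraction
assert round(implied_existing_load_gw, 1) == 5.6

# Pipeline headline: 1.4 GW signed + up to 7 GW of potential load = 8.4 GW
assert round(1.4 + 7, 1) == 8.4
```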

Read More »

AEP capital spending plan surges 33% to $72B in utility ‘super cycle’

$72 billion capital expenditure plan: Up 33% from AEP’s previous five-year capex plan, partly driven by 765-kV transmission projects in Texas and the PJM Interconnection region.

65 GW peak load in 2030: Up 76% from AEP’s summer peak, driven by 28 GW in data center and other large load agreements.

$2.6 billion year-to-date operating earnings: Up 13% from a year ago, partly driven by 765-kV transmission projects in Texas and the PJM Interconnection region.

7% to 9% earnings per share annual growth rate: Up from 6% to 8% previously. AEP’s stock price jumped 6% Wednesday to $122.11/share.

3.5% annual residential rate hikes: What AEP expects its customers will face over the next five years.

Surging loads

In the last 12 months, AEP’s utilities sold 6% more electricity compared to the previous year, with residential sales up 2.3% and commercial sales up 7.9%, and those sales are expected to continue growing, according to the company. About 2 GW of data centers came online in the third quarter, Trevor Mihalik, AEP vice president and CFO, said Wednesday during an earnings conference call. AEP expects its peak load will hit 65 GW by 2030, up from 37 GW, with demand surging in Indiana, Ohio, Oklahoma and Texas, according to William Fehrman, AEP chairman, president and CEO. The growth estimate includes 28 GW of customers with electric service agreements or letters of agreement, he said. About half of that 28 GW is in the Electric Reliability Council of Texas market, 40% in the PJM Interconnection and 10% in the Southwest Power Pool, according to Fehrman. About 80% of that pending demand is from hyperscalers such as Google, AWS and Meta, Mihalik said. The remaining demand growth is from industrial customers with projects such as a Nucor steel mill in West Virginia and Cheniere Energy’s liquefied
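The headline growth percentage checks out against the underlying GW figures. A short verification using only numbers from the article:

```python
# Check AEP's quoted growth percentages against the underlying figures.

current_peak, projected_peak = 37, 65   # GW: summer peak today vs 2030
growth_pct = (projected_peak / current_peak - 1) * 100
assert round(growth_pct) == 76           # matches the quoted "up 76%"

# 28 GW of contracted/lettered load, split ~50% ERCOT / 40% PJM / 10% SPP
ercot, pjm, spp = 0.50 * 28, 0.40 * 28, 0.10 * 28
assert round(ercot + pjm + spp, 1) == 28.0
```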

Read More »

A hydrogen ‘do-over’ for California

Melanie Davidson is a hydrogen policy and markets expert. Most recently she led clean fuels strategy at San Diego Gas & Electric. She is a former board member of the California Hydrogen Business Council and was a founding staff member of the Green Hydrogen Coalition. Recently, over $2 billion of federal funding for the U.S. Department of Energy’s California and Pacific Northwest Hydrogen Hubs was terminated. These Hubs were premised on the use of “renewable, electrolytic hydrogen” — meaning hydrogen generated by using renewable electricity to power water-splitting electrolyzers. The resultant hydrogen would have replaced fossil fuels for heavy-duty transportation, port operations, and power generation. The idea of a fully renewable, water-based hydrogen economy for the West was an exciting one — both in its altruism and its premise, backed by the DOE’s 2021 “Hydrogen Shot.” The idea was for cheap, abundant solar and rapidly declining electrolyzer cost curves to generate hydrogen from water with zero emissions — for $1/kg by 2030, no less. However, cuts in Hub funding, together with a 2027 sunset date for projects to qualify for the hydrogen production tax credit, are just two more blows to the many pre-existing economic challenges facing a renewable hydrogen future. At least in California, those challenges include rising (not falling) capital costs for electrolyzers and electrical equipment, high interest rates, a scarcity of water rights, and high costs of grid electricity, qualifying renewable energy credits, and land. The renewable Hubs were anchored on the idea that by leveraging otherwise curtailed solar (terawatt-hours’ worth annually), we could generate cheap, abundant, seasonally stored renewable electrolytic hydrogen at distributed locations, then convert the hydrogen back to the grid via fuel cells as needed. It’s an elegant idea, but it doesn’t pencil. The capital costs of those electrolyzers, compressors, liquefiers, hydrogen storage vessels and fuel

Read More »

USA Crude Oil Stocks Drop Almost 7MM Barrels WoW

In its latest weekly petroleum status report, the U.S. Energy Information Administration (EIA) highlighted that U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 6.9 million barrels from the week ending October 17 to the week ending October 24. This EIA report, which was released on October 29 and included data for the week ending October 24, showed that crude oil stocks, not including the SPR, stood at 416.0 million barrels on October 24, 422.8 million barrels on October 17, and 425.5 million barrels on October 25, 2024. The report highlighted that data may not add up to totals due to independent rounding. Crude oil in the SPR stood at 409.1 million barrels on October 24, 408.6 million barrels on October 17, and 385.8 million barrels on October 25, 2024, the report highlighted. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.677 billion barrels on October 24, the report revealed. Total petroleum stocks were down 15.4 million barrels week on week and up 43.6 million barrels year on year, the report showed. “At 416.0 million barrels, U.S. crude oil inventories are about six percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories decreased by 5.9 million barrels from last week and are about three percent below the five year average for this time of year. Both finished gasoline and blending components inventories decreased last week,” it added. “Distillate fuel inventories decreased by 3.4 million barrels last week and are about eight percent below the five year average for this time of year. Propane/propylene inventories increased by 2.5

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.
It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).  People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate. Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know? In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good. Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed. And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online.
Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing. For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
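The link-based ranking idea described above can be illustrated with a tiny power-iteration sketch in the spirit of PageRank. This is a simplified teaching example, not Google’s production algorithm; the four-page link graph and the 0.85 damping factor are invented for illustration:

```python
# Simplified link-based ranking: a page's score is fed by the
# scores of the pages that link to it. Graph is made up.
links = {
    "a": ["b", "c"],   # page "a" links to pages "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores

for _ in range(50):  # iterate until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Page "c" is linked to by three other pages, so it ends up on top.
best = max(rank, key=rank.get)
```

The key property, visible even in this toy graph, is that a page’s score depends not just on how many links point at it but on how highly ranked the linkers themselves are, which is what made the approach so much harder to game than raw keyword matching.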
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online.
It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rap with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. “You’re always dealing in percentages.
What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.  “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”  But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?   Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.   “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.  Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. 
“The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”  Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”  “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”  He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?  A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. 
OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it. According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than its training data, which tends to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.
Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does. Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written.
But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed! The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers. It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge. The search results we see from generative AI are best understood as a waypoint rather than a destination.
What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.  Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.  “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. 
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” says Pichai, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information. In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses. But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.
These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on. 

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”.

North Sea Project

Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.

We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.

Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither had responded to Rigzone’s request. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market.
Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW of electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217 million profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

An AI adoption riddle

A few weeks ago, I set out on what I thought would be a straightforward reporting journey.  After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending? I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, layoffs, the inability of companies to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised.  But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives? 
There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot. But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to. 
“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it.  The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons. Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.”  Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise.  So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)

Read More »

The Download: what to make of OpenAI’s Atlas browser, and how to make climate progress

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

I tried OpenAI’s new Atlas browser but I still don’t know what it’s for

—Mat Honan

OpenAI rolled out a new web browser last week called Atlas. It comes with ChatGPT built in, along with an agent, so that you can browse, get answers, and have automated tasks performed on your behalf all at the same time.
I’ve spent the past several days tinkering with Atlas. I’ve used it to do all my normal web browsing, and also tried to take advantage of the ChatGPT functions—plus I threw some weird agentic tasks its way to see how it did with those. My impression is that Atlas is… fine? But my big takeaway is that it’s pretty pointless for anyone not employed by OpenAI. In fact, Atlas seems to be little more than cynicism masquerading as software. Read the full story.

This review first appeared in The Debrief, Mat Honan’s weekly subscriber-only newsletter.
Seeking climate solutions in turbulent times

Despite recent political shifts in the US, companies are continuing to pursue exciting new climate solutions. Tomorrow we’re holding an exclusive subscriber-only Roundtable event digging into the most promising technologies of the moment, drawing from our recently released 10 Climate Tech Companies to Watch list. This conversation will give subscribers insight into where tangible climate progress is happening today, and how recent political changes are reshaping the path toward a more sustainable future. Join us at 1pm ET on Tuesday October 28—register here!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump says a TikTok deal could be reached this week
Perhaps on Thursday, when he’s due to meet Xi Jinping. (CNBC)
+ US treasury secretary Scott Bessent appeared to jump the gun when he said the deal had already been done. (The Guardian)

2 Covid vaccines helped to prolong the life of cancer patients
The findings raise hopes a universal vaccine could help patients with different cancers. (WP $)
+ Why US federal health agencies are abandoning mRNA vaccines. (MIT Technology Review)

3 How developing nations benefit from “AI decolonization”
Rules forcing Silicon Valley’s giants to process data locally have helped to spread the AI boom’s wealth. (WSJ $)
+ Meanwhile, Saudi Arabia wants to be known as the “AI exporter.” (NYT $)
+ Inside India’s scramble for AI independence. (MIT Technology Review)

4 Those rising electricity costs aren’t just down to AI
Costly electrical equipment and disaster prep are bigger factors pushing up prices. (WP $)
+ Amazon considered concealing its data centers’ water usage. (The Guardian)
+ AI is changing the grid. Could it help more than it harms? (MIT Technology Review)

5 California State wants to become America’s largest “AI-empowered” university
It’s teaming up with Amazon, OpenAI and Nvidia to prepare its students for increasingly AI-driven careers. (NYT $)
+ How do technologies change our abilities to learn skills? (The Atlantic $)
+ Why the ultra-wealthy are sending their kids to High Point University. (WSJ $)
+ The job market is tough right now, but we’ve weathered this kind of storm before. (Insider $)

6 This new startup sells AI bot interactions to manipulate social media
Even though it violates every major platform’s policies. (404 Media)

7 Even real estate isn’t safe from AI slop 🏠
House hunters are being forced to wade through AI-enhanced listings. (Wired $)

8 Why we’re so obsessed with sleepmaxxing
Yes, sleep is good for you. But does the tech that tracks it really do the job it claims to? (The Atlantic $)
+ I tried to hack my insomnia with technology. Here’s what worked. (MIT Technology Review)

9 It’s probably not worth buying an Ultra-HD TV
So feel free to ignore all that persuasive marketing jargon. (The Guardian)

10 Sneaky employees are using AI to fake their expense receipts 🧾
So expense firms are in turn deploying AI to try and detect the fakes. (FT $)

Quote of the day
“I’m skeptical of all of the hype around AI right now. This is not my first bubble.” —Jay Goldberg, a senior analyst at Seaport Global Securities, is no stranger to the hysteria that surrounds overhyped technologies, he tells Bloomberg.
One more thing

Inside Clear’s ambitions to manage your identity beyond the airport

Clear Secure is the most visible biometric identity company in the United States. Best known for its line-jumping service in airports, it’s also popping up at sports arenas and stadiums all over the country. You can also use its identity verification platform to rent tools at Home Depot, put your profile in front of recruiters on LinkedIn, and, as of this month, verify your identity as a rider on Uber. And soon enough, if Clear has its way, it may also be in your favorite retailer, bank, and even doctor’s office—or anywhere else that you currently have to pull out a wallet (or wait in line). While the company has been building toward this sweeping vision for years, it now seems its time has finally come. But as biometrics go mainstream, what—and who—bears the cost? Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Ancient manuscripts are jam packed with weird and wonderful beasts.
+ Horror writers tell us the spooky stories that send a shiver down their respective spines.
+ Here’s why living on a red dwarf isn’t quite as crazy as it sounds.
+ Kiki the sheep may not be able to walk, but she isn’t letting it get in the way of her getting around ❤️ (thanks Amy!)

Read More »

I tried OpenAI’s new Atlas browser but I still don’t know what it’s for

OpenAI rolled out a new web browser last week called Atlas. It comes with ChatGPT built in, along with an agent, so that you can browse, get direct answers, and have automated tasks performed on your behalf all at the same time.

I’ve spent the past several days tinkering with Atlas. I’ve used it to do all my normal web browsing, and also tried to take advantage of the ChatGPT functions—plus I threw some weird agentic tasks its way to see how it did with those. And my impression is that Atlas is… fine? But my big takeaway is that it’s pretty pointless for anyone not employed by OpenAI, and that Atlas is little more than cynicism masquerading as software.

If you want to know why, let’s start by looking at its agentic capabilities—which is really where it differentiates itself. When I was browsing Amazon, I asked the Atlas agent to do some shopping for me, using a pre-set prompt of its own suggestion. (“Start a cart with items I’m likely to want based on my browsing here and highlight any active promo codes. Let me review before checkout.”) It picked out a notebook that I’d recently purchased and no longer needed, some deodorant I’d recently purchased and no longer needed, and a vacuum cleaner that I’d considered but decided was too expensive and no longer needed because I bought a cheaper one.
I would guess that it took 10 minutes or so for it to do all that. I cleaned out my cart and considered myself lucky that it didn’t buy anything.

When I logged onto Facebook, which is already lousy with all sorts of AI slop, I asked it to create a status update for me. So it dug through my browser history and came back with an incredibly long status. I won’t bore you with all of it (and there was a lot), but here are the highlights from what it suggested:

“I dipped into Smartsheet and TeamSnap (because editors juggle rosters too!), flirted with Shopify and Amazon (holiday gift‑shopping? side hustle? you decide), and kept tabs on the news … . Somewhere in there I even remembered to log into Slack, schedule Zoom meetings, and read a few NYTimes and Technology Review pieces. Who says an editor’s life isn’t glamorous? 😊”
Uh. Okay. I decided against posting that. There were some other equally unillustrious examples as well, but you get the picture.

Aside from the agent, the other unique feature is having ChatGPT built right into the browser. Notice I said “unique,” not “useful.” I struggled to find any obvious utility in having it right there, versus just going to chatgpt dot com. In some cases, the built-in chatbot was worse and dumber.

For example, I asked the built-in ChatGPT to summarize an MIT Technology Review article I was reading. Yet instead of answering the question about the page I was on, it referred back to the page I had previously been on when I started the session. Which is to say it spit back some useless nonsense. Thanks, AI.

OpenAI is marketing Atlas pretty aggressively when you come to ChatGPT now, suggesting people download it. And it may in fact score a lot of downloads because of that. But without giving people more of a reason to actually switch from more entrenched browsers, like Chrome or Safari, this feels like a real empty salvo in the new browser wars.

It’s been hard for me to understand why Atlas exists. Who is this browser for, exactly? Who is its customer? And the answer I have come to there is that Atlas is for OpenAI. The real customer, the true end user of Atlas, is not the person browsing websites; it is the company collecting data about what and how that person is browsing.

This review first appeared in The Debrief, Mat Honan’s weekly subscriber-only newsletter.

Read More »

The Download: carbon removal’s future, and measuring pain using an app

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for carbon removal?

After years of growth that spawned hundreds of startups, the nascent carbon removal sector appears to be facing a reckoning. Running Tide, a promising aquaculture company, shut down its operations last summer, and a handful of other companies have shuttered, downsized, or pivoted in recent months as well. Venture investments have flagged. And the collective industry hasn’t made a whole lot more progress toward Running Tide’s ambitious plans to sequester a billion tons of carbon dioxide by this year. The hype phase is over and the sector is sliding into the turbulent business trough that follows, experts warn.
And the open question is: If the carbon removal sector is heading into a painful if inevitable clearing-out cycle, where will it go from there? Read the full story. —James Temple
This story is part of MIT Technology Review’s What’s Next series, which looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

An AI app to measure pain is here

This week I’ve also been wondering how science and technology can help answer that question—especially when it comes to pain. In the latest issue of MIT Technology Review’s print magazine, Deena Mousa describes how an AI-powered smartphone app is being used to assess how much pain a person is in. The app, and other tools like it, could help doctors and caregivers. They could be especially useful in the care of people who aren’t able to tell others how they are feeling. But they are far from perfect. And they open up all kinds of thorny questions about how we experience, communicate, and even treat pain. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta’s lawyers advised workers to remove parts of its teen mental health research
Its counsel told researchers to block or update their work to reduce legal liability. (Bloomberg $)
+ Meta recently laid off more than 100 staff tasked with monitoring risks to user privacy. (NYT $)

2 Donald Trump has pardoned the convicted Binance founder
Changpeng Zhao pleaded guilty to violating US money laundering laws in 2023. (WSJ $)
+ The move is likely to enable Binance to resume operating in the US. (CNN)
+ Trump has vowed to be more crypto-friendly than the Biden administration. (Axios)

3 Anthropic and Google Cloud have signed a major chips deal
The agreement is worth tens of billions of dollars. (FT $)

4 Microsoft doesn’t want you to talk dirty to its AI
It’ll leave that kind of thing to OpenAI, thank you very much. (CNBC)
+ Copilot now has its own version of Clippy—just don’t try to get erotic with it. (The Verge)
+ It’s pretty easy to get DeepSeek to talk dirty, however. (MIT Technology Review)

5 Big Tech is footing the bill for Trump’s White House ballroom
Stand up Amazon, Apple, Google, Meta, and Microsoft. (TechCrunch)
+ Crypto twins Tyler and Cameron Winklevoss are also among the donors. (CNN)
6 US investigators have busted a series of high-tech gambling schemes
Involving specially-designed contact lenses and x-ray tables. (NYT $)
+ The case follows insider bets on basketball and poker games rigged by the mafia. (BBC)
+ Automatic card shufflers can be compromised, too. (Wired $)

7 Deepfake harassment tools are easily accessible on social media
And simple web searches. (404 Media)
+ Bans on deepfakes take us only so far—here’s what we really need. (MIT Technology Review)
8 How algorithms can drive up prices online
Even benign algorithms can sometimes yield bad outcomes for buyers. (Quanta Magazine)
+ When AIs bargain, a less advanced agent could cost you. (MIT Technology Review)

9 How to give an LLM brain rot
Train it on short “superficial” posts from X, for a start. (Ars Technica)
+ AI trained on AI garbage spits out AI garbage. (MIT Technology Review)

10 Meet the tech workers using AI as little as possible
In a bid to keep their skills sharp. (WP $)
+ This professor thinks there are other ways to teach people how to learn. (The Atlantic $)

Quote of the day

“He was convicted. He’s not innocent.”
—Republican Senator Thom Tillis criticises Donald Trump’s decision to pardon convicted cryptocurrency mogul Changpeng Zhao, Politico reports.

One more thing

We’ve never understood how hunger works. That might be about to change.
When you’re starving, hunger is like a demon. It awakens the most ancient and primitive parts of the brain, then commandeers other neural machinery to do its bidding until it gets what it wants. Although scientists have had some success in stimulating hunger in mice, we still don’t really understand how the impulse to eat works. Now, some experts are following known parts of the neural hunger circuits into uncharted parts of the brain to try and find out. Their work could shed new light on the factors that have caused the number of overweight adults worldwide to skyrocket in recent years. And it could also help solve the mysteries around how and why a new class of weight-loss drugs seems to work so well. Read the full story.

—Adam Piore

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Middle-aged men are getting into cliff-jumping. Should you?
+ Pumpkin spice chocolate chip cookies sound like a great idea to me.
+ Christmas Island’s crabs are on the move! 🦀
+ Watch out if you’re taking the NY subway today: you might bump into these terrifying witches.

Read More »

An AI app to measure pain is here

The app, and other tools like it, could help doctors and caregivers. They could be especially useful in the care of people who aren’t able to tell others how they are feeling. But they are far from perfect. And they open up all kinds of thorny questions about how we experience, communicate, and even treat pain.
Pain can be notoriously difficult to describe, as almost everyone who has ever been asked to do so will know. At a recent medical visit, my doctor asked me to rank my pain on a scale from 1 to 10. I found it incredibly difficult to do. A 10, she said, meant “the worst pain imaginable,” which brought back unpleasant memories of having appendicitis. A short while before the problem that brought me in, I’d broken my toe in two places, which had hurt like a mother—but less than appendicitis. If appendicitis was a 10, breaking a toe was an 8, I figured. If that was the case, maybe my current pain was a 6. As a pain score, it didn’t sound as bad as I actually felt. I couldn’t help wondering if I might have given a higher score if my appendix were still intact. I wondered, too, how someone else with my medical issue might score their pain.
In truth, we all experience pain in our own unique ways. Pain is subjective, and it is influenced by our past experiences, our moods, and our expectations. The way people describe their pain can vary tremendously, too. We’ve known this for ages. In the 1940s, the anesthesiologist Henry Beecher noted that wounded soldiers were much less likely to ask for pain relief than similarly injured people in civilian hospitals. Perhaps they were putting on a brave face, or maybe they just felt lucky to be alive, given their circumstances. We have no way of knowing how much pain they were really feeling.

Given this messy picture, I can see the appeal of a simple test that can score pain and help medical professionals understand how best to treat their patients. That’s what is being offered by PainChek, the smartphone app Deena wrote about. The app works by assessing small facial movements, such as lip raises or brow pinches. A user is then required to fill out a separate checklist to identify other signs of pain the patient might be displaying. It seems to work well, and it is already being used in hospitals and care settings.

But the app is judged against subjective reports of pain. It might be useful for assessing the pain of people who can’t describe it themselves—perhaps because they have dementia—but it won’t add much to assessments from people who can already communicate their pain levels.

There are other complications. Say a test could spot that a person was experiencing pain. What can a doctor do with that information? Perhaps prescribe pain relief—but most of the pain-relieving drugs we have were designed to treat acute, short-term pain. If a person is grimacing from a chronic pain condition, the treatment options are more limited, says Stuart Derbyshire, a pain neuroscientist at the National University of Singapore. The last time I spoke to Derbyshire was back in 2010, when I covered work by researchers in London who were using brain scans to measure pain.
That was 15 years ago. But pain-measuring brain scanners have yet to become a routine part of clinical care. That scoring system was also built on subjective pain reports. Those reports are, as Derbyshire puts it, “baked into the system.” It’s not ideal, but when it comes down to it, we must rely on these wobbly, malleable, and sometimes incoherent self-descriptions of pain. It’s the best we have. Derbyshire says he doesn’t think we’ll ever have a “pain-o-meter” that can tell you what a person is truly experiencing. “Subjective report is the gold standard, and I think it always will be,” he says.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

What’s next for carbon removal?

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In the early 2020s, a little-known aquaculture company in Portland, Maine, snagged more than $50 million by pitching a plan to harness nature to fight back against climate change. The company, Running Tide, said it could sink enough kelp to the seafloor to sequester a billion tons of carbon dioxide by this year, according to one of its early customers. Instead, the business shut down its operations last summer, marking the biggest bust to date in the nascent carbon removal sector.

Its demise was the most obvious sign of growing troubles and dimming expectations for a space that has spawned hundreds of startups over the last few years. A handful of other companies have shuttered, downsized, or pivoted in recent months as well. Venture investments have flagged. And the collective industry hasn’t made a whole lot more progress toward that billion-ton benchmark.

The hype phase is over and the sector is sliding into the turbulent business trough that follows, warns Robert Höglund, cofounder of CDR.fyi, a public-benefit corporation that provides data and analysis on the carbon removal industry. “We’re past the peak of expectations,” he says. “And with that, we could see a lot of companies go out of business, which is natural for any industry.”
The open question is: If the carbon removal sector is heading into a painful if inevitable clearing-out cycle, where will it go from there?

The odd quirk of carbon removal is that it never made a lot of sense as a business proposition: It’s an atmospheric cleanup job, necessary for the collective societal good of curbing climate change. But it doesn’t produce a service or product that any individual or organization strictly needs—or is especially eager to pay for.
To date, a number of businesses have voluntarily agreed to buy tons of carbon dioxide that companies intend to eventually suck out of the air. But whether they’re motivated by sincere climate concerns or pressures from investors, employees, or customers, corporate do-goodism will only scale any industry so far. Most observers argue that whether carbon removal continues to bobble along or transforms into something big enough to make a dent in climate change will depend largely on whether governments around the world decide to pay for a whole, whole lot of it—or force polluters to. “Private-sector purchases will never get us there,” says Erin Burns, executive director of Carbon180, a nonprofit that advocates for the removal and reuse of carbon dioxide. “We need policy; it has to be policy.”

What’s the problem?

The carbon removal sector began to scale up in the early part of this decade, as increasingly grave climate studies revealed the need to dramatically cut emissions and suck down vast amounts of carbon dioxide to keep global warming in check. Specifically, nations may have to continually remove as much as 11 billion tons of carbon dioxide per year by around midcentury to have a solid chance of keeping the planet from warming past 2 °C over preindustrial levels, according to a UN climate panel report in 2022. A number of startups sprang up to begin developing the technology and building the infrastructure that would be needed, trying out a variety of approaches like sinking seaweed or building carbon-dioxide-sucking factories. And they soon attracted customers. Companies including Stripe, Google, Shopify, and Microsoft began agreeing to pre-purchase tons of carbon removal, hoping to stand up the nascent industry and help offset their own climate emissions.
Venture investments also flooded into the space, peaking in 2023 at nearly $1 billion, according to data provided by PitchBook.

From early on, players in the emerging sector sought to draw a sharp distinction between conventional carbon offset projects, which studies have shown frequently exaggerate climate benefits, and “durable” carbon removal that could be relied upon to suck down and store away the greenhouse gas for decades to centuries. There’s certainly a big difference in the price: While buying carbon offsets through projects that promise to preserve forests or plant trees might cost a few dollars per ton, a ton of carbon removal can run hundreds to thousands of dollars, depending on the approach.

That high price, however, brings big challenges. Removing 10 billion tons of carbon dioxide a year at, say, $300 a ton adds up to a global price tag of $3 trillion—a year. Which brings us back to the fundamental question: Who should or would foot the bill to develop and operate all the factories, pipelines, and wells needed to capture, move, and bury billions upon billions of tons of carbon dioxide?

The state of the market

The market is still growing, as companies voluntarily purchase tons of carbon removal to make strides toward their climate goals. In fact, sales reached an all-time high in the second quarter of this year, mostly thanks to several massive purchases by Microsoft. But industry sources fear that demand isn’t growing fast enough to support a significant share of the startups that have formed or even the projects being built, undermining the momentum required to scale the sector up to the size needed by midcentury.

To date, all those hundreds of companies that have spun up in recent years have disclosed deals to sell some 38 million tons of carbon dioxide pulled from the air, according to CDR.fyi. That’s roughly the amount the US pumps out in energy-related emissions every three days. And they’ve only delivered around 940,000 tons of carbon removal. The US emits that much carbon dioxide in less than two hours. (Not every transaction is publicly announced or revealed to CDR.fyi, so the actual figures could run a bit higher.)

Another concern is that the same handful of big players continue to account for the vast majority of the overall purchases, leaving the health and direction of the market dependent on their whims and fortunes. Most glaringly, Microsoft has agreed to buy 80% of all the carbon removal purchased to date, according to CDR.fyi. The second-biggest buyer is Frontier, a coalition of companies that includes Google, Meta, Stripe, and Shopify, which has committed to spend $1 billion.
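The $3 trillion figure cited earlier follows directly from the article's illustrative numbers (10 billion tons per year at $300 a ton); a minimal back-of-envelope check, with those assumed values hard-coded:

```python
# Sanity check of the annual cost figure, using the article's
# illustrative assumptions (not forecasts): 10 billion tons of CO2
# removed per year at $300 per ton.
tons_per_year = 10_000_000_000
dollars_per_ton = 300

annual_cost = tons_per_year * dollars_per_ton  # total dollars per year
print(f"${annual_cost / 1e12:.0f} trillion per year")  # → $3 trillion per year
```

Varying the per-ton price between the few hundred and few thousand dollars quoted for current projects moves the total between roughly $3 trillion and $30 trillion a year, which is why the question of who pays dominates the sector's outlook.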
If you strip out those two buyers, the market shrinks from 16 million tons under contract during the first half of this year to just 1.2 million, according to data provided to MIT Technology Review by CDR.fyi.

Signs of trouble

Meanwhile, the investor appetite for carbon removal is cooling. For the 12-month period ending in the second quarter of 2025, venture capital investments in the sector fell more than 13% from the same period last year, according to data provided by PitchBook. That tightening funding will make it harder and harder for companies that aren’t bringing in revenue to stay afloat.
Companies that have already shut down also include the carbon removal marketplace Noya and Alkali Earth, which was attempting to use industrial by-products to tie up carbon dioxide. Still other businesses are struggling. Climeworks, one of the first companies to build direct-air-capture (DAC) factories, announced it was laying off 10% of its staff in May, as it grapples with challenges on several fronts. The company’s plans to collaborate on the development of a major facility in the US have been at least delayed as the Trump administration has held back tens of millions of dollars in funding granted in 2023 under the Department of Energy’s Regional Direct Air Capture Hubs program. It now appears the government could terminate the funding altogether, along with perhaps tens of billions of dollars’ worth of additional grants previously awarded for a variety of other US carbon removal and climate tech projects.

“Market rumors have surfaced, and Climeworks is prepared for all scenarios,” Christoph Gebald, one of the company’s co-CEOs, said in a previous statement to MIT Technology Review. “The need for DAC is growing as the world falls short of its climate goals and we’re working to achieve the gigaton capacity that will be needed.”

But purchases from direct-air-capture projects fell nearly 16% last year and account for just 8% of all carbon removal transactions to date. Buyers are increasingly looking to categories that promise to deliver tons faster and for less money, notably including burying biochar or installing carbon capture equipment on bioenergy plants. (Read more in my recent story on that method of carbon removal, known as BECCS, here.)

CDR.fyi recently described the climate for direct air capture in grim terms: “The sector has grown rapidly, but the honeymoon is over: Investment and sales are falling, while deployments are delayed across almost every company.” “Most DAC companies,” the organization added, “will fold or be acquired.”
What’s next?

In the end, most observers believe carbon removal isn’t really going to take off unless governments bring their resources and regulations to bear. That could mean making direct purchases, subsidizing these sectors, or getting polluters to pay the costs to do so—for instance, by folding carbon removal into market-based emissions reduction mechanisms like cap-and-trade systems.

More government support does appear to be on the way. Notably, the European Commission recently proposed allowing “domestic carbon removal” within its EU Emissions Trading System after 2030, integrating the sector into one of the largest cap-and-trade programs. The system forces power plants and other polluters in member countries to increasingly cut their emissions or pay for them over time, as the cap on pollution tightens and the price on carbon rises. That could create incentives for more European companies to pay direct-air-capture or bioenergy facilities to draw down carbon dioxide as a means of helping them meet their climate obligations.

There are also indications that the International Civil Aviation Organization, a UN organization that establishes standards for the aviation industry, is considering incorporating carbon removal into its market-based mechanism for reducing the sector’s emissions. That might take several forms, including allowing airlines to purchase carbon removal to offset their use of traditional jet fuel or requiring the use of carbon dioxide obtained through direct air capture in some share of sustainable aviation fuels.
Meanwhile, Canada has committed to spend $10 million on carbon removal and is developing a protocol to allow direct air capture in its national offsets program. And Japan will begin accepting several categories of carbon removal in its emissions trading system.

Despite the Trump administration’s efforts to claw back funding for the development of carbon-sucking projects, the US does continue to subsidize storage of carbon dioxide, whether it comes from power plants, ethanol refineries, direct-air-capture plants, or other facilities. The so-called 45Q tax credit, which is worth up to $180 a ton, was among the few forms of government support for climate-tech-related sectors that survived in the 2025 budget reconciliation bill. In fact, the subsidies for putting carbon dioxide to other uses increased.

Even in the current US political climate, Burns is hopeful that local or federal legislators will continue to enact policies that support specific categories of carbon removal in the regions where they make the most sense, because the projects can provide economic growth and jobs as well as climate benefits. “I actually think there are lots of models for what carbon removal policy can look like that aren’t just things like tax incentives,” she says. “And I think that this particular political moment gives us the opportunity in a unique way to start to look at what those regionally specific and pathway specific policies look like.”

The dangers ahead

But even if more nations do provide the money or enact the laws necessary to drive the business of durable carbon removal forward, there are mounting concerns that a sector conceived as an alternative to dubious offset markets could increasingly come to replicate their problems. Various incentives are pulling in that direction. Financial pressures are building on suppliers to deliver tons of carbon removal. Corporate buyers are looking for the fastest and most affordable way of hitting their climate goals.
And the organizations that set standards and accredit carbon removal projects often earn more money as the volume of purchases rises, creating clear conflicts of interest. Some of the same carbon registries that have long signed off on carbon offset projects, including Verra and Gold Standard, have begun creating standards or issuing credits for various forms of carbon removal.

“Reliable assurance that a project’s declared ton of carbon savings equates to a real ton of emissions removed, reduced, or avoided is crucial,” Cynthia Giles, a senior EPA advisor under President Biden, and Cary Coglianese, a law professor at the University of Pennsylvania, wrote in a recent editorial in Science. “Yet extensive research from many contexts shows that auditors selected and paid by audited organizations often produce results skewed toward those entities’ interests.”

Noah McQueen, the director of science and innovation at Carbon180, has stressed that the industry must strive to counter the mounting credibility risks, noting in a recent LinkedIn post: “Growth matters, but growth without integrity isn’t growth at all.” In an interview, McQueen said that heading off the problem will require developing and enforcing standards to truly ensure that carbon removal projects deliver the climate benefits promised. McQueen added that to gain trust, the industry needs to earn buy-in from the communities in which these projects are built and avoid the environmental and health impacts that power plants and heavy industry have historically inflicted on disadvantaged communities.

Getting it right will require governments to take a larger role in the sector than just subsidizing it, argues David Ho, a professor at the University of Hawaiʻi at Mānoa who focuses on ocean-based carbon removal.
He says there should be a massive, multinational research drive to determine the most effective ways of mopping up the atmosphere with minimal environmental or social harm, likening it to a Manhattan Project (minus the whole nuclear bomb bit). “If we’re serious about doing this, then let’s make it a government effort,” he says, “so that you can try out all the things, determine what works and what doesn’t, and you don’t have to please your VCs or concentrate on developing [intellectual property] so you can sell yourself to a fossil-fuel company.” Ho adds that there’s a moral imperative for the world’s historically biggest climate polluters to build and pay for the carbon-sucking and storage infrastructure required to draw down billions of tons of greenhouse gas. That’s because the world’s poorest, hottest nations, which have contributed the least to climate change, will nevertheless face the greatest dangers from intensifying heat waves, droughts, famines, and sea-level rise. “It should be seen as waste management for the waste we’re going to dump on the Global South,” he says, “because they’re the people who will suffer the most from climate change.”

Read More »

Offshore rigs up 4 in US Gulf

Offshore rigs in the Gulf of Mexico increased by 4 this week, raising the total US offshore rig count to 21, officials at Baker Hughes reported Friday. Twelve rigs were operating in the US Gulf of Mexico for the week ended Oct. 24. Total US drilling increased slightly, with 550 rotary rigs working this week, up 2 from last week but down 35 from 585 a year ago, officials said. Activity in US inland waters decreased by 1 unit to 2, while the US land rig count dipped by 1 to 527. There were 199 rotary rigs drilling in Canada this week, up 1 from last week and down 17 from the 216 units that were working during the same period a year ago. The number of rigs drilling for oil in the US increased by 2 to 420. There were 121 US rotary rigs drilling for natural gas, unchanged from last week, and 9 rigs were unclassified, also unchanged from last week. US directional drilling increased by 2 to 53 units. Horizontal drilling declined by 1 to 485. Drilling increased in Louisiana by 3 rigs to 40, and Wyoming’s rig count was up 2 to 14. Texas and Colorado each dropped a single unit to end the week with 236 and 14 rigs working, respectively.

Read More »

New sanctions on Russia rally crude prices

Oil, fundamental analysis

Crude prices were already poised for a technically driven rebound, but across-the-board inventory draws and new sanctions placed on Russian energy entities led to a $5.00+ rally this week. The US grade started the week as low as $56.35/bbl but pushed as high as $62.60/bbl by Friday. Brent followed a similar pattern, hitting its low of $60.35/bbl on Monday and its weekly high of $66.80/bbl on Friday. Both grades settled much higher week-on-week, with gains exceeding $5.00/bbl. The WTI/Brent spread has widened to ($4.40).

Political risk premium entered oil markets again this week in the form of new US sanctions on Russia’s two largest oil companies, Rosneft and Lukoil, which represent about 50% of the country’s exports. The sanctions will preclude both companies from doing business with US banks and other financial institutions. The EU had imposed a new sanctions package on the two last week, along with some Chinese refiners. Since the start of the Russia/Ukraine war, Russia’s crude exports of about 3.0 million b/d have been the target of sanctions. Yet, to date, most sanctions have proved ineffective or have been circumvented. However, oil markets still react bullishly to such announcements. But this time around, India’s largest refiner, Reliance, is agreeing to halt the purchases of oil from Rosneft that were taking place under a long-term agreement, which will impact the physical sales of Urals.

China remains Russia’s No. 1 importer of oil. China is now also the No. 1 purchaser of Canadian bitumen, taking up to 70% of the 3.5 million b/d of oil sands production being delivered to British Columbia ports via the expanded Trans Mountain pipeline. The Energy Information Administration (EIA)’s Weekly Petroleum Status Report (still released despite the government shutdown) indicated that commercial crude oil and refined product inventories for last

Read More »

Brazil’s ANP awards 5 offshore presalt blocks in latest auction

Five companies, including Petrobras and Equinor, came away with blocks in a recent bid round offshore Brazil. Awards in Brazil’s National Agency of Petroleum, Natural Gas and Biofuels (ANP) 3rd Cycle bidding round for pre-salt blocks went to Petrobras, Equinor, Karoon Energy, and a consortium of CNOOC and Sinopec. The cycle offered seven pre-salt blocks: Esmeralda and Ametista in the Santos basin, and Citrino, Itaimbezinho, Ônix, Larimar, and Jaspe in the Campos basin. Fifteen companies were eligible to submit bids: Petrobras, 3R Petroleum, BP Energy, Chevron, CNOOC, Ecopetrol, Equinor, Karoon, Petrogal, Petronas, Prio, QatarEnergy, Shell, Sinopec, and TotalEnergies. Two of the seven blocks up for bids received no offers, Reuters reported Oct. 22, noting ANP received offers from only eight of 15 eligible companies. Petrobras acquired the Jaspe block in partnership with Equinor Brasil Energia Ltda. Petrobras will serve as operator with a 60% stake, with Equinor holding the remaining 40%. Petrobras also acquired the Citrino block with 100% interest. Equinor was awarded the Itaimbezinho block with a 100% stake. Karoon was awarded 100% interest in the Esmeralda block. A consortium of CNOOC and Sinopec was awarded the Ametista block. CNOOC will serve as operator with 70% interest.

Read More »

Glenfarne, Tokyo Gas sign LOI for Alaska LNG offtake

Glenfarne Alaska LNG LLC has signed a letter of intent (LOI) with Tokyo Gas Co. Ltd. for the offtake of 1 million tonnes/year (tpy) of LNG from the Alaska LNG project. The 20-million tpy Alaska LNG project consists of a 42-in. OD pipeline to transport natural gas from Alaska’s North Slope to meet Alaska’s domestic needs and produce 20 million tpy of LNG for export, Glenfarne said in a release Oct. 24.

Since March 2025, Glenfarne has signed preliminary offtake agreements with LNG buyers in Japan, Korea, Taiwan, and Thailand that include JERA, POSCO, CPC, and PTT, totaling 11 million tpy of capacity of the 16 million tpy Glenfarne expects to contract to reach a financial close for the project. Worley is completing the final engineering and cost validation for the project’s 807-mile pipeline. Phase one of the project includes the domestic pipeline to deliver natural gas about 765 miles from the North Slope to the Anchorage region. Phase two would add the LNG terminal and related infrastructure to enable export capability. The State

Read More »

Vantage Data Centers Pours $15B Into Wisconsin AI Campus as It Builds Global Giga-Scale Footprint

Expanding in Ohio: Financing Growth Through Green Capital

In June 2025, Vantage secured $5 billion in green loan capacity, including $2.25 billion to fully fund its New Albany, Ohio (OH1) campus and expand its existing borrowing base. The 192 MW development will comprise three 64 MW buildings, with first delivery expected in December 2025 and phased completion through 2028. The OH1 campus is designed to come online as Vantage’s larger megasites ramp up, providing early capacity and regional proximity to major cloud and AI customers in the Columbus–New Albany corridor. The site also offers logistical and workforce advantages within one of the fastest-growing data center regions in the U.S.

Beyond the U.S. – Vantage Expands Its Global Footprint

Moving North: Reinforcing Canada’s Renewable Advantage

In February 2025, Vantage announced a C$500 million investment to complete QC24, the fourth and final building at its Québec City campus, adding 32 MW of capacity by 2027. The project strengthens Vantage’s Montreal–Québec platform and reinforces its renewable-heavy power profile, leveraging abundant hydropower to serve sustainability-driven customers.

APAC Expansion: Strategic Scale in Southeast Asia

In September 2025, Vantage unveiled a $1.6 billion APAC expansion, led by existing investors GIC (Singapore’s sovereign wealth fund) and ADIA (Abu Dhabi Investment Authority). The investment includes the acquisition of Yondr’s Johor, Malaysia campus at Sedenak Tech Park. Currently delivering 72.5 MW, the Johor campus is planned to scale to 300 MW at full build-out, positioning it within one of Southeast Asia’s most active AI and cloud growth corridors. Analysts note that the location’s connectivity to Singapore’s hyperscale market and favorable development economics give Vantage a strong competitive foothold across the region.

Italy: Expanding European Presence Under National Priority Status

Vantage is also adding a second Italian campus alongside its existing Milan site, totaling 32 MW across two facilities. Phase

Read More »

How CoreWeave and Poolside Are Teaming Up in West Texas to Build the Next Generation of AI Data Centers

In the evolving landscape of artificial-intelligence infrastructure, a singular truth is emerging: access to cutting-edge silicon and massive GPU clusters is no longer enough by itself. For companies chasing the frontier of multi-trillion-parameter model training and agentic AI deployment, the bottleneck increasingly lies not just in compute, but in the seamless integration of compute + power + data center scale. The latest chapter in this story is the collaboration between CoreWeave and Poolside, culminating in the launch of Project Horizon, a 2-gigawatt AI-campus build in West Texas.

Setting the Stage: Who’s Involved, and Why It Matters

CoreWeave (NASDAQ: CRWV) has positioned itself as “The Essential Cloud for AI™” — a company founded in 2017, publicly listed in March 2025, and aggressively building out its footprint of ultra-high-performance infrastructure. One of its strategic moves: in July 2025 CoreWeave struck a definitive agreement to acquire Core Scientific (NASDAQ: CORZ) in an all-stock transaction. Through that deal, CoreWeave gains control of approximately 1.3 GW of gross power across Core Scientific’s nationwide data center footprint, plus more than 1 GW of expansion potential. That acquisition underlines a broader trend: AI-specialist clouds are no longer renting space and power; they’re working to own or tightly control it.

Poolside, founded in 2023, is a foundation-model company with an ambitious mission: building artificial general intelligence (AGI) and deploying enterprise-scale agents. According to Poolside’s blog: “When people ask what it takes to build frontier AI … the focus is usually on the model … but that’s only half the story. The other half is infrastructure. If you don’t control your infrastructure, you don’t control your destiny—and you don’t have a shot at the frontier.” Simply put: if you’re chasing multi-trillion-parameter models, you need both the compute horsepower and the power infrastructure; and ideally, tight vertical integration. Together, the

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE