Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Coverage of artificial intelligence workloads and the infrastructure buildout behind them.

Bitcoin

Insights on Bitcoin mining, hashrate economics, and energy strategy.

Datacenter

Updates on data center development, capacity, power, and cooling.

Energy

News and analysis across power markets, oil and gas, and the broader energy landscape.

Featured Articles

Greece Warns Shipowners Against Sailing Near Iran Coast

Greece, home to the world’s largest oil tanker fleet, told the nation’s vessel owners to do what they can to stay away from Iran’s coast — a task that is all but impossible for those entering the Persian Gulf to collect cargoes of Middle East crude. Shipowners were directed to sail closer to the United Arab Emirates and Oman when transiting the Strait of Hormuz, according to two advisories seen by Bloomberg. They were issued by the Greek shipping ministry to local shipowner associations on Jan. 27 and 29. The advisories said more warships were operating near the strait and warned that the European Union’s latest sanctions on Iran risked further inflaming tensions around Hormuz, the Persian Gulf and southern parts of the Red Sea. They were sent to the Hellenic Chamber of Shipping, the Union of Greek Shipowners and the Hellenic Shortsea Shipowners Association. A spokesman for Greece’s shipping ministry confirmed the notices had been sent. The global shipping community and oil traders are closely watching developments in the Middle East after the US dispatched an aircraft-carrier strike group to the region. President Donald Trump said he hoped he would not have to use it against Iran, which monitoring groups have accused of killing thousands of people during recent protests. The Strait of Hormuz is critical to the global oil supply, with roughly a quarter of the world’s seaborne crude passing through the corridor. Much of that oil is transported on Greek-owned vessels. Greece is the biggest tanker owner by tonnage, according to Clarkson Research Services, a unit of the world’s largest shipbroker.

Read More »

Startup Amutable plotting Linux security overhaul to counter hacking threats

Verifiable integrity

Computing is full of security problems, and Linux is no exception; convincing the protective free and open source software community of the wisdom of a radical new idea often turns out to be as big a challenge as the engineering itself. While Linux distros remain a niche on desktop computers, the technology’s invisible domination of online platforms and cloud container orchestration tools makes it the most important operating system in the world. That, not surprisingly, has made it a target for attacks, with cybercriminals taking advantage of vulnerabilities allowing privilege escalation, container escapes, and other exploits, as well as embedding backdoors in open source images across Linux’s complex supply chain. Judging from Amutable’s self-declared vision to bring “determinism and verifiable integrity to Linux systems,” the founders see plenty of room for improvement. “Today’s infrastructure approaches security reactively. Software agents watch for vulnerabilities and intrusions; attackers refine their evasion. These defensive approaches are costly, brittle, and ineffective,” the company said. “Amutable’s mission is to deliver verifiable integrity to Linux workloads everywhere. We look forward to working towards this goal with the broader Linux community.”
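As a loose illustration of what proactive integrity checking means in practice (a generic sketch, not Amutable’s actual mechanism, which the company has not detailed), a minimal check verifies a file’s cryptographic digest against a known-good value before trusting it, rather than scanning for intrusions after the fact:

```python
import hashlib
import hmac

def sha256_digest(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    """True only if the file still matches its known-good digest."""
    return hmac.compare_digest(sha256_digest(path), expected_hex)
```

Production systems push the same idea much deeper into the stack (signed images, measured boot), but the contrast with reactive agent-based defense is the point: trust is established up front instead of inferred after an intrusion.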

Read More »

Microsoft and Meta’s Earnings Week Put the AI Data Center Cycle in Sharp Relief

If you’re trying to understand where the hyperscalers really are in the AI buildout, beyond the glossy campus renders and “superintelligence” rhetoric, this week’s earnings calls from Microsoft and Meta offered a more grounded view. Both companies are spending at a scale the data center industry has never had to absorb at once. Both are navigating the same hard constraints: power, capacity, supply chain, silicon allocation, and time-to-build. But the market’s reaction split decisively, and that divergence tells its own story about what investors will tolerate in 2026. To wit: massive capex is acceptable when the return narrative is already visible in the P&L, and far less so when the payoff is still being described as “early innings.”

Microsoft: AI Demand Is Real. So Is the Cost

Microsoft’s fiscal Q2 2026 results reinforced the core fact that has been driving North American hyperscale development for two years: Cloud + AI growth is still accelerating, and Azure remains one of the primary runways. Microsoft said Q2 total revenue rose to $81.3 billion, while Microsoft Cloud revenue reached $51.5 billion, up 26% (24% in constant currency). Intelligent Cloud revenue hit $32.9 billion, up 29%, and Azure and other cloud services revenue grew 39%. That’s the demand signal.

The supply signal is more complicated. On the call and in follow-on reporting, Microsoft’s leadership framed the moment as a deliberate capacity build into persistent AI adoption. Yet the bill for that build is now impossible to ignore: Reuters reported Microsoft’s capital spending totaled $37.5 billion in the quarter, up nearly 66% year-over-year, with roughly two-thirds going toward computing chips. That “chips first” allocation matters for the data center ecosystem. It implies a procurement and deployment reality that many developers and colo operators have been living: the short pole is not only power and buildings; it’s GPU
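Taking the quoted figures at face value, the back-of-envelope arithmetic behind that “chips first” point works out as follows (a rough sketch; the $37.5 billion, 66%, and two-thirds inputs are the only reported numbers):

```python
capex_q = 37.5          # reported quarterly capex, $B
yoy_growth = 0.66       # "up nearly 66% year-over-year"
chip_share = 2 / 3      # "roughly two-thirds going toward computing chips"

# Implied same-quarter spend a year earlier, and implied chip allocation now
prior_year_q = capex_q / (1 + yoy_growth)   # ≈ $22.6B
chip_spend = capex_q * chip_share           # $25.0B

print(f"implied prior-year quarter: ${prior_year_q:.1f}B")
print(f"implied chip spend:         ${chip_spend:.1f}B")
```

In other words, the implied chip budget for this single quarter exceeds the company’s entire quarterly capex of a year ago, which is why GPU allocation, not just land and power, is the pacing constraint.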

Read More »

Rethinking Water in the AI Data Center Era

Finding Water by Eliminating Waste: Leakage as a Hidden Demand Driver

ION Water and Meta frame leakage not as a marginal efficiency issue, but as one of the largest and least visible sources of water demand. According to the release, more than half of the water paid for at some properties can be lost to “invisible leaks,” including running toilets, aging water heaters, and faulty fixtures that go undetected for extended periods. ION’s platform is designed to surface that hidden demand. By monitoring water consumption at the unit level, the system flags anomalies in real time and directs maintenance teams to specific fixtures, rather than entire buildings. The company says this approach can reduce leak-driven water waste by as much as 60%.

This represents an important evolution in how hyperscalers defend and contextualize their water footprints: instead of focusing solely on their own direct WUE metrics, operators are investing in demand reduction within the same watershed where their data centers operate. That shift reframes the narrative from simply managing active water consumption to actively helping stabilize stressed local water systems.

The Accounting Shift: Volumetric Water Benefits (VWB)

The release explicitly positions the project as a model for Volumetric Water Benefits (VWB) initiatives, projects intended to deliver measurable environmental gains while also producing operational and financial benefits for underserved communities. This framing aligns with a broader stewardship accounting movement promoted by organizations such as the World Resources Institute, which has developed Volumetric Water Benefit Accounting (VWBA) as a standardized method for quantifying and valuing watershed-scale benefits. Meta is explicit that the project supports its water-positive commitment tied to its Temple, Texas data center community. The company has set a 2030 goal to restore more water than it consumes across its global operations and has increasingly emphasized “water stewardship in our data center
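The release does not describe ION’s algorithm, but the unit-level idea is easy to sketch: a healthy unit’s flow should drop to roughly zero at some point overnight, so a unit that never stops drawing water is a leak candidate (a running toilet, for instance). The threshold, units, and data shape below are hypothetical:

```python
def flag_leaks(overnight_readings: dict[str, list[float]],
               min_flow_gph: float = 0.5) -> list[str]:
    """Flag units whose flow never drops near zero overnight.

    overnight_readings maps a unit ID to flow samples (gallons/hour)
    taken during a low-use window, e.g. 2am-4am. Continuous flow
    across the whole window suggests a fixture leak.
    """
    return [unit for unit, readings in overnight_readings.items()
            if readings and min(readings) >= min_flow_gph]

# Example: unit 12B never stops drawing water overnight.
readings = {
    "12A": [0.0, 0.0, 1.2, 0.0],   # normal intermittent use
    "12B": [2.1, 1.9, 2.0, 2.2],   # continuous flow -> likely leak
}
print(flag_leaks(readings))  # ['12B']
```

The operationally important part is the granularity: flagging at the unit and fixture level turns a building-wide water bill anomaly into a specific work order.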

Read More »

From Silicon to Cooling: Dell’Oro Maps the AI Data Center Buildout

For much of the past decade, data center growth could be measured in incremental gains: another efficiency point here, another capacity tranche there. That era is over. According to a cascade of recent research from Dell’Oro Group, the AI investment cycle has crossed into a new phase, one defined less by experimentation and more by industrial-scale execution. Across servers, networks, power, and cooling, Dell’Oro’s latest data points to a market being reshaped end-to-end by AI workloads, which are pulling forward capital spending, redefining bill-of-materials assumptions, and forcing architectural transitions that are rapidly becoming non-negotiable.

Capex Becomes the Signal

The clearest indicator of the shift is spending. Dell’Oro reported that worldwide data center capital expenditures rose 59 percent year-over-year in 3Q 2025, marking the eighth consecutive quarter of double-digit growth. Importantly, this is no longer a narrow, training-centric surge. “The Top 4 US cloud service providers—Amazon, Google, Meta, and Microsoft—continue to raise data center capex expectations for 2025, supported by increased investments in both AI and general-purpose infrastructure,” said Baron Fung, Senior Research Director at Dell’Oro Group. He added that Oracle is on track to double its data center capex as it expands capacity for the Stargate project. “What is notable this cycle is not just the pace of spending, but the expanding scope of investment,” Fung said. Hyperscalers are now scaling accelerated compute, general-purpose servers, and the supporting infrastructure required to deploy AI at production scale, while simultaneously applying tighter discipline around asset lifecycles and depreciation to preserve cash flow. The result is a capex environment that looks less speculative and more structural, with investment signals extending well into 2026.

Accelerators Redefine the Hardware Stack

At the component level, the AI effect is even more pronounced. Dell’Oro found that global data center server and storage component revenue jumped 40 percent

Read More »

Applied Digital CEO Wes Cummins On the Hard Part of the AI Boom: Execution

Designing for What Comes After the Current AI Cycle

Applied Digital’s design philosophy starts with a premise many developers still resist: today’s density assumptions may not hold. “We’re designing for maximum flexibility for the future—higher density power, lower density power, higher voltage delivery, and more floor space,” Cummins said. “It’s counterintuitive because densities are going up, but we don’t know what comes next.” That choice, to allocate more floor space even as rack densities climb, signals a long-view approach. Facilities are engineered to accommodate shifts in voltage, cooling topology, and customer requirements without forcing wholesale retrofits. Higher-voltage delivery, mixed cooling configurations, and adaptable data halls are baked in from the start. The goal is not to predict the future perfectly, Cummins stressed, but to avoid painting infrastructure into a corner.

Supply Chain as Competitive Advantage

If flexibility is the design thesis, supply chain control is the execution weapon. “It’s a huge advantage that we locked in our MEP supply chain 18 to 24 months ago,” Cummins said. “It’s a tight environment, and more timelines are going to get missed in 2026 because of it.” Applied Digital moved early to secure long-lead mechanical, electrical, and plumbing components, well before demand pressure fully rippled through transformers, switchgear, chillers, generators, and breakers. That foresight now underpins the company’s ability to make credible delivery commitments while competitors confront procurement bottlenecks. Cummins was blunt: many delays won’t stem from poor planning, but from simple unavailability.

From 100 MW to 700 MW Without Losing Control

The past year marked a structural pivot for Applied Digital. What began as a single, 100-megawatt “field of dreams” facility in North Dakota has become more than 700 MW under construction, with expansion still ahead. “A hundred megawatts used to be considered scale,” Cummins said. “Now we’re at 700

Read More »

Analyst Reveals ‘Market Consensus’ for Next OPEC+ Meeting

In a report sent to Rigzone by the Standard Chartered team on Wednesday, Standard Chartered Bank Energy Research Head Emily Ashford revealed the “market consensus” for the outcome of the next OPEC+ meeting, which is currently scheduled to take place on Sunday. “OPEC+ members meet virtually on February 1, with market consensus that the meeting will likely mirror January’s in both speed and outcome (rapid, with no policy change),” Ashford said in the publication. Ashford noted in the report that OPEC+’s monthly meetings “allow the group to be highly reactive and nimble in response to market conditions and sentiment”. “We have seen notable improvements in the forward curve over the past month,” Ashford said. “Backwardation at the front of the forward curve extends out through the 2026 contracts, while one month ago it was only the first three months, and the back of the curve has risen by $1 per barrel month on month,” the analyst added. “In addition, market sentiment appears to be gradually turning away from the overwhelmingly bearish ‘supply glut’ narrative that has dominated media reporting since Q4-2025,” Ashford continued. “However, we do not expect a pivot in strategy at this meeting and Q1 loadings remain paused,” the Standard Chartered analyst went on to state. The Standard Chartered Bank analyst projected in the report that “attention is more likely to fall on the OPEC+ overproducers’ updated compensation plans”. “Both Iraq and Kazakhstan have notable volumes to remove from their supply,” Ashford highlighted. “Kazakhstan’s compensation plan for January was 279,000 barrels per day, rising to 569,000 barrels per day for both February and March,” Ashford pointed out. Rigzone has contacted OPEC for comment on the Standard Chartered report. At the time of writing, OPEC has not responded to Rigzone. In a market update sent to Rigzone by the

Read More »

GeoPark to Acquire Frontera Energy Assets in Colombia

GeoPark Ltd has signed a deal to buy Frontera Energy Corp’s oil and gas exploration and production assets in Colombia for up to $400 million plus assumed debt. The acquisition consists of 17 blocks in the Llanos and Lower Magdalena basins, GeoPark said in an online statement. GeoPark expects the licenses to immediately add 148 million barrels of oil equivalent (MMboe) to its proven and probable (2P) reserves and 99 MMboe to its proven reserves. The Quifa field in the Llanos basin has “potential to add approximately 16 MMboe of incremental net 2P reserves, for which a development plan is already under discussion”, the Latin American company added. The acquisition would result in the “consolidation of GeoPark’s core Llanos operating hub, adding large-scale, long-life assets including the Quifa field and the CPE-6, Guatiquia and Cubiro blocks, creating a highly synergistic corridor with greater scale, infrastructure utilization and operating efficiency”, GeoPark said. The acquisition also provides GeoPark “greater exposure to gas and condensate through the VIM-1 and El Dificil blocks, enhancing commodity diversification at a time of rising domestic gas prices in Colombia”, it added. “Pro forma production is expected to exceed 92,000 boepd [barrels of oil equivalent per day] by 2028, with EBITDA of approximately $950 million, doubling GeoPark’s previously announced 2028 standalone outlook of 44,000-46,000 boepd and $490-520 million of EBITDA”, it said. “Increased scale and diversification are expected to enhance cash flow generation, lowering the cash breakeven by approximately $8 per barrel at current strip prices. “The stronger and more stable cash flow base is expected to materially improve GeoPark’s capacity to fund its growth plans in [Argentinian shale play] Vaca Muerta, while maintaining its disciplined capital allocation”. 
“In addition to the upstream asset portfolio, the transaction includes Frontera Energy’s integrated water management and environmental sustainability project, comprised of the SAARA

Read More »

Odds-On for USA to Strike Iran Before June

Prediction markets now put the probability of the U.S. striking Iran before June this year at 71 percent. That’s what Erik Meyersson, Chief EM Strategist at Skandinaviska Enskilda Banken AB (SEB), said in a report sent to Rigzone by the SEB team on Friday. “Brent crude futures rose above $70 per barrel yesterday (it now stands at $69.8), indicative of a substantial risk premium,” he noted. In the report, Meyersson warned that “another war including a naval intervention may bring greater risks of an event in the Strait of Hormuz”. “Given the limitations to [a] direct U.S. military strike, a likely scenario is that the U.S. seeks to start with a naval intervention to intercede Iran’s shadow fleet and sanctioned oil exports,” he said. “This also implies a higher chance of a risk event involving the Strait of Hormuz than last year’s war,” he warned. “The Polymarket estimate of the probability that Iran may attempt to close the Strait is 31 percent, compared to 24 percent on June 11th last year. The oil VIX index is at 55.4 percent, compared to 43.2 percent last year,” he added. “Even the Brent oil price is now on par with the level prevailing just before the June war, despite starting from a lower baseline,” Meyersson continued. “The Brent futures curve has seen a larger shift upwards relative to the period preceding last year’s war. This is the case for both the front-month and 12-month contract,” he went on to state. In a Rystad Energy market update sent to Rigzone just before 3 p.m. GMT on January 29, Rystad noted that oil prices had “risen sharply to $71.50 per barrel over the past few hours, their highest level since late September 2025, driven by mounting market speculation of an imminent U.S. military strike on Iran”.

Read More »

TotalEnergies Restarts Mozambique LNG Construction Activities

TotalEnergies SE said Thursday the under-construction Mozambique LNG project is back in full gear after a security-related force majeure declaration that had been in place for about five years. The relaunch comes amid a new legal challenge in France accusing the French energy giant of “complicity in war crimes” over the alleged 2021 killing by Mozambican armed forces of dozens of civilians at the project site in Cabo Delgado province’s Afungi peninsula. TotalEnergies has denied any wrongdoing. On November 7, 2025, the company and its Mozambique LNG partners lifted the force majeure declared April 26, 2021. Operator TotalEnergies said that in a meeting Thursday in Afungi, Mozambican President Daniel Chapo confirmed to TotalEnergies chair and chief executive Patrick Pouyanné “all measures taken to address the security and the continued cooperation with Rwanda”. “Construction activities have now restarted both offshore and onshore at Afungi site, with over 4,000 workers currently mobilized of which over 3,000 are Mozambican nationals”, TotalEnergies said in an online statement. “First LNG is expected in 2029 as the project progress is currently at 40 percent – almost all engineering and procurement of main equipment have been executed during the force majeure period”. Mozambique LNG is designed to develop about 65 trillion cubic feet of recoverable gas deposits off the northern coast of the East African country to deliver an initial 13 million metric tons per annum (MMtpa) of liquefied natural gas. The partners plan to expand that to up to 43 MMtpa according to the project website. Chapo was quoted in Thursday’s statement as saying, “It will have a direct and significant impact on job creation, both in construction phase and in the operational phase, stimulating the national labor market and promoting the capacity-building of Mozambican manpower”. “At the same time, it consolidates Mozambique’s positioning as a regional

Read More »

Zelenskiy Invites Russia to Energy Strikes Truce

Ukraine is ready to halt strikes on energy infrastructure if Russia agrees to abide by a US proposal for a weeklong truce, President Volodymyr Zelenskiy said. “If Russia does not strike our energy infrastructure – generation facilities or any other energy assets – we will not strike theirs,” Zelenskiy told reporters in Kyiv late Thursday. “We want to end the war and we are ready for de-escalation steps.” US President Donald Trump said earlier at the White House that he’d asked Russian leader Vladimir Putin “not to fire into Kyiv and the various towns for a week, and he agreed to do that.” So far, the Kremlin hasn’t confirmed an agreement to halt missile and drone attacks that have devastated energy infrastructure in Ukraine, causing power and heating outages during an extreme winter cold snap. Temperatures are forecast to drop below minus 20C (minus 4F) at night, adding to wartime hardships for the weary population with many buildings in Kyiv and other cities across Ukraine plunged into cold and darkness during protracted blackouts. With the frontline in eastern and southern Ukraine shifting only gradually as the war nears its fourth full year, Russia has sought to break morale among Ukrainians by stepping up attacks on civilian infrastructure. Ukraine has responded by waging a campaign to strike Russia’s oil refineries and other infrastructure, aiming to undermine its ability to fund the war with income from energy sales. Trump’s proposal for an energy truce to allow greater space for negotiations aimed at ending the war is “an opportunity rather than an agreement,” Zelenskiy said. “Whether it will work or not, and what exactly will work, I cannot say at this point.” Russia continued its attacks on Ukraine overnight, launching one ballistic missile and 111 combat drones, Ukraine’s Air Force said on Telegram. Still, the scale was far

Read More »

Brent Settles Above $70 as Iran Tensions Rise

Brent crude futures settled above $70 a barrel for the first time since July after US President Donald Trump warned Iran to make a nuclear deal or face military strikes. The global oil benchmark rose 3.4% on Thursday, marking a third straight day of gains, while US counterpart West Texas Intermediate topped $65. Oil prices climbed amid the renewed risk of a conflict that could disrupt crude exports out of Iran or send ripple effects across global markets if a critical shipping route is blocked. Trump said in a social media post on Wednesday that US ships he ordered to the region were ready to fulfill their mission “with speed and violence, if necessary.” The commodity rallied even higher after the Associated Press reported that Iran issued a warning to ships at sea that it planned to run a drill next week that would include live firing in the Strait of Hormuz, citing two Pakistani security officials and the EOS Risk Group. The report stoked fears of a potential closure of the narrow passage that separates Iran and the Arabian peninsula, through which about a fifth of the world’s oil passes. Iran itself accounts for about 3% of global supply, producing roughly 3.3 million barrels per day. Crude has rallied so far in 2026, countering expectations for a market pressured by significant oversupply. Instead, geopolitical tensions from Iran to Venezuela and a major supply disruption in Kazakhstan have helped to bolster prices. Prices eased off intraday highs Thursday after equities weakened and the dollar briefly surged before paring gains; a stronger dollar makes commodities priced in the currency less attractive. Oil has been trading more closely in tandem with the US dollar amid concerns over currency debasement. Trump’s latest threats have injected a risk premium into prices. Bullish call options have been more expensive than bearish

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.  I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.  On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. 
It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).  People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, simply offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.  Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?  In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.  Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.  And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. 
Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.  For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)  
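The link-counting idea Google built on can be sketched in a few lines. What follows is a minimal, illustrative PageRank-style power iteration on a toy graph — not Google's production algorithm; the graph and function names are invented for the example:

```python
# Minimal PageRank sketch (illustrative only): pages linked to by
# well-linked pages accumulate more score over repeated iterations.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                    # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                               # split this page's rank among its links
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: "c" collects links from three pages, so it ends up ranked highest.
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
scores = pagerank(graph)
print(max(scores, key=scores.get))
```

The damping factor models a reader who mostly follows links but occasionally jumps to a random page, which keeps scores from draining into dead ends.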
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)  “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”  That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 
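The "generate and compose" step Pichai describes is commonly built as retrieval followed by generation: fetch relevant passages from an index, then hand them to a language model as grounding context. Here is a toy sketch of that pattern, with a keyword-overlap retriever standing in for a web-scale index and the model call itself omitted; the corpus and function names are invented for illustration:

```python
# Toy retrieve-then-generate sketch (hypothetical; real systems use
# web-scale indexes and an LLM rather than word overlap and a print).

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    """Compose the grounded prompt a language model would receive."""
    sources = "\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

corpus = [
    "The Strait of Hormuz carries about a fifth of the world's oil.",
    "Kamakura is a coastal town near Tokyo known for surfing.",
    "PageRank scores pages by the links pointing at them.",
]
prompt = build_prompt("surfing near Tokyo",
                      retrieve("surfing near Tokyo", corpus))
print(prompt)
```

Grounding the model in retrieved passages is what lets answers stay current and cite sources, though, as the article notes, it does not guarantee the model reads those sources correctly.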
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video.  “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.  There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. 
It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?  I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.  “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. It is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.  “You’re always dealing in percentages. 
What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.  “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”  But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?   Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.   “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.  Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. 
“The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”  Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”  “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”  He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?  A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. 
OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.   According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says.  OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.  “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.  
Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.  Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. 
But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”  When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them.  “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed!  The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.  It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”  We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.  The search results we see from generative AI are best understood as a waypoint rather than a destination. 
What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.  Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.  “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. 
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.  In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.  But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today. 
These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on. 

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news release.

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.  We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.  Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes. 
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither has responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. 
Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW from the electricity its facilities provided in the second half of 2024 meant it would meet or even exceed its revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which is expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

Accelerating discovery with the AI for Math Initiative

Mathematics is the foundational language of the universe, providing the tools to describe everything from the laws of physics to the intricacies of biology and the logic of computer science. For centuries, its frontiers have been expanded by human ingenuity alone. At Google DeepMind, we believe AI can serve as a powerful tool to collaborate with mathematicians, augmenting creativity and accelerating discovery.

Today, we’re introducing the AI for Math Initiative, supported by Google DeepMind and Google.org. It brings together five of the world’s most prestigious research institutions to pioneer the use of AI in mathematical research. The inaugural partner institutions are: Imperial College London; the Institute for Advanced Study; the Institut des Hautes Études Scientifiques (IHES); the Simons Institute for the Theory of Computing (UC Berkeley); and the Tata Institute of Fundamental Research (TIFR).

The initiative’s partners will work towards the shared goals of identifying the next generation of mathematical problems ripe for AI-driven insights, building the infrastructure and tools to power these advances and, ultimately, accelerating the pace of discovery. Google’s support includes funding from Google.org and access to Google DeepMind’s state-of-the-art technologies, such as an enhanced reasoning mode called Gemini Deep Think, our agent for algorithm discovery, AlphaEvolve, and our formal proof completion system, AlphaProof. The initiative will create a powerful feedback loop between fundamental research and applied AI, opening the door to deeper partnerships.

A pivotal moment for AI and mathematics

The AI for Math Initiative comes at a time of remarkable progress in AI’s reasoning capabilities; our own work has seen rapid advancement in recent months. In 2024, our AlphaGeometry and AlphaProof systems achieved a silver-medal standard at the International Mathematical Olympiad (IMO). 
More recently, our latest Gemini model, equipped with Deep Think, achieved a gold-medal level performance at this year’s IMO, perfectly solving five of the six problems and scoring 35 points. And we’ve seen further progress with another of our methods, AlphaEvolve, which was applied to over 50 open problems in mathematical analysis, geometry, combinatorics and number theory and improved the previously best known solutions in 20% of them. In mathematics and algorithm discovery, it has invented a new, more efficient method for matrix multiplication — a core calculation in computing. For the specific problem of multiplying 4×4 matrices, AlphaEvolve discovered an algorithm using just 48 scalar multiplications, breaking the 50-year-old record set by Strassen’s algorithm in 1969. In computer science, it helped researchers discover new mathematical structures that show certain complex problems are even harder for computers to solve than we previously knew. This gives us a clearer and more precise understanding of computational limits, which will help guide future research.

This rapid progress is a testament to the fast-evolving capabilities of AI models. We hope this new initiative can explore how AI can accelerate discovery in mathematical research, and tackle harder problems.

We are only at the beginning of understanding everything AI can do, and how it can help us think about the deepest questions in science. By combining the profound intuition of world-leading mathematicians with the novel capabilities of AI, we believe new pathways of research can be opened, advancing human knowledge and moving toward new breakthroughs across the scientific disciplines.
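To see what saving scalar multiplications means in practice, the classic illustration is Strassen's well-known 2×2 construction, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8. The sketch below shows that textbook trick only; it is not AlphaEvolve's 4×4 algorithm, whose 48-multiplication scheme is far more intricate.

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices A and B using 7 scalar multiplications
    (Strassen's construction), rather than the naive 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

def naive_2x2(A, B):
    """Reference implementation: 8 scalar multiplications."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)  # both give [[19, 22], [43, 50]]
```

Applied recursively to large matrices, trading one multiplication for extra additions is what lowers the asymptotic cost, which is why a drop from 49 to 48 multiplications at the 4×4 level matters.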

Read More »

The AI Hype Index: Grok makes porn, and Claude Code nails your job

Everyone is panicking because AI is very bad; everyone is panicking because AI is very good. It’s just that you never know which one you’re going to get. Grok is a pornography machine. Claude Code can do anything from building websites to reading your MRI. So of course Gen Z is spooked by what this means for jobs. Unnerving new research says AI is going to have a seismic impact on the labor market this year.
If you want to get a handle on all that, don’t expect any help from the AI companies—they’re turning on each other like it’s the last act in a zombie movie. Meta’s former chief AI scientist, Yann LeCun, is spilling tea, while Big Tech’s messiest exes, Elon Musk and OpenAI, are about to go to trial. Grab your popcorn.

Read More »

DHS is using Google and Adobe AI to make videos

The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. It comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda—some of which appears to be made with AI—and as workers in tech have put pressure on their employers to denounce the agencies’ activities.  The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity.  In a section about “editing images, videos or other public affairs materials using AI,” it reveals for the first time that DHS is using Google’s Veo 3 video generator and Adobe Firefly, estimating that the agency has between 100 and 1,000 licenses for the tools. It also discloses that DHS uses Microsoft Copilot Chat for generating first drafts of documents and summarizing long reports and Poolside software for coding tasks, in addition to tools from other companies. Google, Adobe, and DHS did not immediately respond to requests for comment.
The news provides details about how agencies like Immigration and Customs Enforcement, which is part of DHS, might be creating the large amounts of content they’ve shared on X and other channels as immigration operations have expanded across US cities. They’ve posted content celebrating “Christmas after mass deportations,” referenced Bible verses and Christ’s birth, shown faces of those the agency has arrested, and shared ads aimed at recruiting agents. The agencies have also repeatedly used music without permissions from artists in their videos. Some of the content, particularly videos, has the appearance of being AI-generated, but it hasn’t been clear until now what AI models the agencies might be using. This marks the first concrete evidence such generators are being used by DHS to create content shared with the public.
It remains impossible to verify which company helped create a specific piece of content, or indeed whether it was AI-generated at all. Adobe offers options to “watermark” a video made with its tools to disclose that it is AI-generated, for example, but this disclosure does not always stay intact when the content is uploaded and shared across different sites.  The document reveals that DHS has specifically been using Flow, a tool from Google that combines its Veo 3 video generator with a suite of filmmaking tools. Users can generate clips and assemble entire videos with AI, including videos that contain sound, dialogue, and background noise, making them hyperrealistic. Adobe launched its Firefly generator in 2023, promising that it does not use copyrighted content in its training or output. Like Google’s tools, Adobe’s can generate videos, images, soundtracks, and speech. The document does not reveal further details about how the agency is using these video generation tools. Workers at large tech companies, including more than 140 current and former employees from Google and more than 30 from Adobe, have been putting pressure on their employers in recent weeks to take a stance against ICE and the shooting of Alex Pretti on January 24. Google’s leadership has not made statements in response. In October, Google and Apple removed apps on their app stores that were intended to track sightings of ICE, citing safety risks.  An additional document released on Wednesday revealed new details about how the agency is using more niche AI products, including a facial recognition app used by ICE, as first reported by 404Media in June.

Read More »

Project Genie: Experimenting with infinite, interactive worlds

In August, we previewed Genie 3, a general-purpose world model capable of generating diverse, interactive environments. Even in this early form, trusted testers were able to create an impressive range of fascinating worlds and experiences, and uncovered entirely new ways to use it. The next step is to broaden access through a dedicated, interactive prototype focused on immersive world creation. Starting today, we’re rolling out access to Project Genie for Google AI Ultra subscribers in the U.S. (18+). This experimental research prototype lets users create, explore and remix their own interactive worlds.

How we’re advancing world models

A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them. While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world. To meet this challenge and support our AGI mission, we developed Genie 3. Unlike explorable experiences in static 3D snapshots, Genie 3 generates the path ahead in real time as you move and interact with the world. It simulates physics and interactions for dynamic worlds, while its breakthrough consistency enables the simulation of any real-world scenario — from robotics and modelling animation and fiction, to exploring locations and historical settings. Building on our model research with trusted testers from across industries and domains, we are taking the next step with an experimental research prototype: Project Genie.

How Project Genie works

Project Genie is a prototype web app powered by Genie 3, Nano Banana Pro and Gemini, which allows users to experiment with the immersive experiences of our world model firsthand. The experience is centred on three core capabilities:

Read More »

The Download: inside the Vitalism movement, and why AI’s “memory” is a privacy problem

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the Vitalists: the hardcore longevity enthusiasts who believe death is “wrong”

Last April, an excited crowd gathered at a compound in Berkeley, California, for a three-day event called the Vitalist Bay Summit. It was part of a longer, two-month residency that hosted various events to explore tools—from drug regulation to cryonics—that might be deployed in the fight against death. One of the main goals, though, was to spread the word of Vitalism, a somewhat radical movement established by Nathan Cheng and his colleague Adam Gries a few years ago. Consider it longevity for the most hardcore adherents—a sweeping mission to which nothing short of total devotion will do. Although interest in longevity has certainly taken off in recent years, not everyone in the broader longevity space shares Vitalists’ commitment to actually making death obsolete. And the Vitalists feel that momentum is building, not just for the science of aging and the development of lifespan-extending therapies, but for the acceptance of their philosophy that defeating death should be humanity’s top concern. Read the full story. —Jessica Hamzelou
This is the latest in our Big Story series, the home for MIT Technology Review’s most important, ambitious reporting. You can read the rest of the series here. 
What AI “remembers” about you is privacy’s next frontier

—Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, & Ruchika Joshi, fellow at the Center for Democracy & Technology specializing in AI safety and governance

The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents. Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes. But their ability to store and retrieve increasingly intimate details about their users over time introduces alarming, and all-too-familiar, privacy vulnerabilities––many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards had been adopted to avoid those vulnerabilities. So what can developers do to fix this problem? Read the full story.

How the grid can ride out winter storms

The eastern half of the US saw a monster snowstorm over the weekend. The good news is the grid has largely been able to keep up with the freezing temperatures and increased demand. But there were some signs of strain, particularly for fossil-fuel plants. One analysis found that PJM, the nation’s largest grid operator, saw significant unplanned outages in plants that run on natural gas and coal. Historically, these facilities can struggle in extreme winter weather. Much of the country continues to face record-low temperatures, and the possibility is looming for even more snow this weekend. What lessons can we take from this storm, and how might we shore up the grid to cope with extreme weather? Read the full story. —Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Telegram has been flooded with deepfake nudes
Millions of users are creating and sharing falsified images in dedicated channels. (The Guardian)

2 China has executed 11 people linked to Myanmar scam centers
The members of the “Ming family criminal gang” caused the death of at least 14 Chinese citizens. (Bloomberg $)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

3 This viral personal AI assistant is a major privacy concern
Security researchers are sounding the alarm on Moltbot, formerly known as Clawdbot. (The Register)
+ It requires a great deal more technical know-how than most agentic bots. (TechCrunch)

4 OpenAI has a plan to keep bots off its future social network
It’s putting its faith in biometric “proof of personhood” promised by the likes of World’s eyeball-scanning orb. (Forbes)
+ We reported on how World recruited its first half a million test users back in 2022. (MIT Technology Review)

5 Here’s just some of the technologies ICE is deploying
From facial recognition to digital forensics. (WP $)
+ Agents are also using Palantir’s AI to sift through tip-offs. (Wired $)

6 Tesla is axing its Model S and Model X cars 🚗
Its Fremont factory will switch to making Optimus robots instead. (TechCrunch)
+ It’s the latest stage of the company’s pivot to AI… (FT $)
+ …as profit falls by 46%. (Ars Technica)
+ Tesla is still struggling to recover from the damage of Elon Musk’s political involvement. (WP $)

7 X is rife with weather influencers spreading misinformation
They’re whipping up hype ahead of massive storms hitting. (New Yorker $)

8 Retailers are going all-in on AI
But giants like Amazon and Walmart are taking very different approaches. (FT $)
+ Mark Zuckerberg has hinted that Meta is working on agentic commerce tools. (TechCrunch)
+ We called it—what’s next for AI in 2026. (MIT Technology Review)

9 Inside the rise of the offline hangout
No phones, no problem. (Wired $)

10 Social media is obsessed with 2016
…why, exactly? (WSJ $)
Quote of the day

“The amount of crap I get for putting out a hobby project for free is quite something.”

—Peter Steinberger, the creator of the viral AI agent Moltbot, complains about the backlash his project has received from security researchers pointing out its flaws in a post on X.

One more thing
The flawed logic of rushing out extreme climate solutions

Early in 2022, entrepreneur Luke Iseman says, he released a pair of sulfur dioxide–filled weather balloons from Mexico’s Baja California peninsula, in the hope that they’d burst miles above Earth. It was a trivial act in itself, effectively a tiny, DIY act of solar geoengineering, the controversial proposal that the world could counteract climate change by releasing particles that reflect more sunlight back into space. Entrepreneurs like Iseman invoke the stark dangers of climate change to explain why they do what they do—even if they don’t know how effective their interventions are. But experts say that urgency doesn’t create a social license to ignore the underlying dangers or leapfrog the scientific process. Read the full story. —James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The hottest thing in art right now? Vertical paintings.
+ There’s something in the water around Monterey Bay—a tail-walking dolphin!
+ Fed up of hairstylists not listening to you? Remember these handy tips the next time you go for a cut.
+ Get me a one-way ticket to Japan’s tastiest island.

Read More »

How the grid can ride out winter storms

The eastern half of the US saw a monster snowstorm over the weekend. The good news is the grid has largely been able to keep up with the freezing temperatures and increased demand. But there were some signs of strain, particularly for fossil-fuel plants. One analysis found that PJM, the nation’s largest grid operator, saw significant unplanned outages in plants that run on natural gas and coal. Historically, these facilities can struggle in extreme winter weather. Much of the country continues to face record-low temperatures, and even more snow could be looming this weekend. What lessons can we take from this storm, and how might we shore up the grid to cope with extreme weather? Living in New Jersey, I have the honor of being one of the roughly 67 million Americans covered by the PJM Interconnection.
So I was in the thick of things this weekend, when PJM saw unplanned outages of over 20 gigawatts on Sunday during the height of the storm. (That’s about 16% of the grid’s demand that afternoon.) Other plants were able to make up the difference, and thankfully, the power didn’t go out in my area. But that’s a lot of capacity offline. Typically, the grid operator doesn’t announce details about why an outage occurs until later. But analysts at Energy Innovation, a policy and research firm specializing in energy and climate, went digging. By examining publicly available grid mix data (a breakdown of what types of power plants are supplying the grid), the team came to a big conclusion: Fossil fuels failed during the storm.
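The outage share quoted above is simple arithmetic, sketched below. Note that the ~125 GW demand figure is an assumption backed out from the article’s numbers (20 GW described as about 16% of demand), not an official PJM statistic.

```python
# Back-of-the-envelope check of the outage share described above.
# demand_gw is an assumed figure implied by the article (20 GW ≈ 16% of
# that afternoon's demand); it is not an official PJM number.
outage_gw = 20.0
demand_gw = 125.0
share = outage_gw / demand_gw
print(f"{share:.0%}")  # prints 16%
```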
The analysts found that gas-fired power plants were producing about 10 gigawatts less power on Sunday than the peak demand on Saturday, even while electricity prices were high. Coal- and oil-burning plants were down too. Because these plants weren’t operating, even when high prices would make it quite lucrative, they were likely a significant part of the problem, says Michelle Solomon, a manager in the electricity program at Energy Innovation. PJM plans to share more details about the outages at an upcoming committee meeting once the cold snap passes, Dan Lockwood, a PJM spokesperson, told me via email. Fossil-fuel plants can see reliability challenges during winter: When temperatures drop, pressures in natural-gas lines fall too, which can lead to issues for fuel supply. Freezing temperatures can throw compression stations and other mechanical equipment offline and even freeze piles of coal. One of the starkest examples came in 2021, when Texas faced freezing temperatures that took many power plants offline and threw the grid into chaos. Many homes lost power for days, and at least 246 people died during that storm. Texas fared much better this time around. After 2021, the state shored up its grid, adding winter weatherization for power plants and transmission systems. Texas has also seen a huge flood of batteries come online, which has greatly helped the grid during winter demand peaks, especially in the early mornings. Texas was also simply lucky that this storm was less severe there, as one expert told Inside Climate News this week. Here on the East Coast, we’re not out of the woods yet. The snow has stopped falling, but grids are still facing high electricity demand because of freezing temperatures. (I’ve certainly been living under my heated blanket these last few days.) 
PJM could see a peak power demand of 130 gigawatts for seven straight days, a winter streak that the local grid has never experienced, according to an update to the utility’s site on Tuesday morning. The US Department of Energy issued emergency orders to several grid operators, including PJM, that allow power plants to run while basically ignoring emissions regulations. The department also issued orders allowing several grids to tell data centers and other facilities to begin using backup generators. (This is good news for reliability but bad news for clean air and the climate, since these power sources are often incredibly emissions-intensive.)

We here on the East Coast could learn a thing or two from Texas so we don’t need to resort to these polluting emergency measures to keep the lights on. More energy storage could be a major help in future winter storms, lending flexibility to the grid to help ride out the worst times, Solomon says. Getting offshore wind online could also help, since those facilities typically produce reliable power in the winter.  No one energy source will solve the massive challenge of building and maintaining a resilient grid. But as we face the continued threat of extreme storms, renewables might actually help us weather them.  This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Read More »

Greece Warns Shipowners Against Sailing Near Iran Coast

Greece, home to the world’s largest oil tanker fleet, told the nation’s vessel owners to do what they can to stay away from Iran’s coast — a task that is all but impossible for those entering the Persian Gulf to collect cargoes of Middle East crude. Shipowners were directed to sail closer to the United Arab Emirates and Oman when transiting the Strait of Hormuz, according to two advisories seen by Bloomberg. They were issued by the Greek shipping ministry to local shipowner associations on Jan. 27 and 29.

The advisories said more warships were operating near the strait and warned that the European Union’s latest sanctions on Iran risked further inflaming tensions around Hormuz, the Persian Gulf and southern parts of the Red Sea. They were sent to the Hellenic Chamber of Shipping, the Union of Greek Shipowners and the Hellenic Shortsea Shipowners Association. A spokesman for Greece’s shipping ministry confirmed the notices had been sent.

The global shipping community and oil traders are closely watching developments in the Middle East after the US dispatched an aircraft-carrier strike group to the region. President Donald Trump said he hoped he would not have to use it against Iran, which monitoring groups have accused of killing thousands of people during recent protests.

The Strait of Hormuz is critical to the global oil supply, with roughly a quarter of the world’s seaborne crude passing through the corridor. Much of that oil is transported on Greek-owned vessels. Greece is the biggest tanker owner by tonnage, according to Clarkson Research Services, a unit of the world’s largest shipbroker.

Read More »

Startup Amutable plotting Linux security overhaul to counter hacking threats

Verifiable integrity

Computing is full of security problems, and Linux is no exception to this rule. And convincing the protective free and open source software community of the wisdom of a radical new idea often turns out to be as big a challenge as the engineering itself. While Linux distros on desktop computers remain niche, the technology’s invisible domination of online platforms and cloud container orchestration tools makes it the most important operating system in the world. That, not surprisingly, has made it a target for attacks, with cybercriminals taking advantage of vulnerabilities allowing privilege escalation, container escapes, and other exploits, as well as embedding backdoors in open source images across Linux’s complex supply chain.

Judging from Amutable’s self-declared vision to bring “determinism and verifiable integrity to Linux systems,” the founders see plenty of room for improvement. “Today’s infrastructure approaches security reactively. Software agents watch for vulnerabilities and intrusions; attackers refine their evasion. These defensive approaches are costly, brittle, and ineffective,” the company said. “Amutable’s mission is to deliver verifiable integrity to Linux workloads everywhere. We look forward to working towards this goal with the broader Linux community.”

Read More »

Microsoft and Meta’s Earnings Week Put the AI Data Center Cycle in Sharp Relief

If you’re trying to understand where the hyperscalers really are in the AI buildout, beyond the glossy campus renders and “superintelligence” rhetoric, this week’s earnings calls from Microsoft and Meta offered a more grounded view. Both companies are spending at a scale the data center industry has never had to absorb at once. Both are navigating the same hard constraints: power, capacity, supply chain, silicon allocation, and time-to-build. But the market’s reaction split decisively, and that divergence tells its own story about what investors will tolerate in 2026. To wit: massive capex is acceptable when the return narrative is already visible in the P&L…and far less so when the payoff is still being described as “early innings.”

Microsoft: AI Demand Is Real. So Is the Cost

Microsoft’s fiscal Q2 2026 results reinforced the core fact that has been driving North American hyperscale development for two years: cloud + AI growth is still accelerating, and Azure remains one of the primary runways. Microsoft said Q2 total revenue rose to $81.3 billion, while Microsoft Cloud revenue reached $51.5 billion, up 26% (constant currency 24%). Intelligent Cloud revenue hit $32.9 billion, up 29%, and Azure and other cloud services revenue grew 39%. That’s the demand signal.

The supply signal is more complicated. On the call and in follow-on reporting, Microsoft’s leadership framed the moment as a deliberate capacity build into persistent AI adoption. Yet the bill for that build is now impossible to ignore: Reuters reported Microsoft’s capital spending totaled $37.5 billion in the quarter, up nearly 66% year-over-year, with roughly two-thirds going toward computing chips. That “chips first” allocation matters for the data center ecosystem. It implies a procurement and deployment reality that many developers and colo operators have been living: the short pole is not only power and buildings; it’s GPU

Read More »

Rethinking Water in the AI Data Center Era

Finding Water by Eliminating Waste: Leakage as a Hidden Demand Driver

ION Water and Meta frame leakage not as a marginal efficiency issue, but as one of the largest and least visible sources of water demand. According to the release, more than half of the water paid for at some properties can be lost to “invisible leaks,” including running toilets, aging water heaters, and faulty fixtures that go undetected for extended periods. ION’s platform is designed to surface that hidden demand. By monitoring water consumption at the unit level, the system flags anomalies in real time and directs maintenance teams to specific fixtures, rather than entire buildings. The company says this approach can reduce leak-driven water waste by as much as 60%.

This represents an important evolution in how hyperscalers defend and contextualize their water footprints: instead of focusing solely on their own direct WUE metrics, operators are investing in demand reduction within the same watershed where their data centers operate. That shift reframes the narrative from simply managing active water consumption to actively helping stabilize stressed local water systems.

The Accounting Shift: Volumetric Water Benefits (VWB)

The release explicitly positions the project as a model for Volumetric Water Benefits (VWB) initiatives: projects intended to deliver measurable environmental gains while also producing operational and financial benefits for underserved communities. This framing aligns with a broader stewardship accounting movement promoted by organizations such as the World Resources Institute, which has developed Volumetric Water Benefit Accounting (VWBA) as a standardized method for quantifying and valuing watershed-scale benefits. Meta is explicit that the project supports its water-positive commitment tied to its Temple, Texas data center community.
The company has set a 2030 goal to restore more water than it consumes across its global operations and has increasingly emphasized “water stewardship in our data center

Read More »

From Silicon to Cooling: Dell’Oro Maps the AI Data Center Buildout

For much of the past decade, data center growth could be measured in incremental gains: another efficiency point here, another capacity tranche there. That era is over. According to a cascade of recent research from Dell’Oro Group, the AI investment cycle has crossed into a new phase, one defined less by experimentation and more by industrial-scale execution. Across servers, networks, power, and cooling, Dell’Oro’s latest data points to a market being reshaped end-to-end by AI workloads, which are pulling forward capital spending, redefining bill-of-material assumptions, and forcing architectural transitions that are rapidly becoming non-negotiable.

Capex Becomes the Signal

The clearest indicator of the shift is spending. Dell’Oro reported that worldwide data center capital expenditures rose 59 percent year-over-year in 3Q 2025, marking the eighth consecutive quarter of double-digit growth. Importantly, this is no longer a narrow, training-centric surge. “The Top 4 US cloud service providers—Amazon, Google, Meta, and Microsoft—continue to raise data center capex expectations for 2025, supported by increased investments in both AI and general-purpose infrastructure,” said Baron Fung, Senior Research Director at Dell’Oro Group. He added that Oracle is on track to double its data center capex as it expands capacity for the Stargate project. “What is notable this cycle is not just the pace of spending, but the expanding scope of investment,” Fung said. Hyperscalers are now scaling accelerated compute, general-purpose servers, and the supporting infrastructure required to deploy AI at production scale, while simultaneously applying tighter discipline around asset lifecycles and depreciation to preserve cash flow. The result is a capex environment that looks less speculative and more structural, with investment signals extending well into 2026.

Accelerators Redefine the Hardware Stack

At the component level, the AI effect is even more pronounced.
Dell’Oro found that global data center server and storage component revenue jumped 40 percent

Read More »

Applied Digital CEO Wes Cummins On the Hard Part of the AI Boom: Execution

Designing for What Comes After the Current AI Cycle

Applied Digital’s design philosophy starts with a premise many developers still resist: today’s density assumptions may not hold. “We’re designing for maximum flexibility for the future—higher density power, lower density power, higher voltage delivery, and more floor space,” Cummins said. “It’s counterintuitive because densities are going up, but we don’t know what comes next.” That choice – to allocate more floor space even as rack densities climb – signals a long-view approach. Facilities are engineered to accommodate shifts in voltage, cooling topology, and customer requirements without forcing wholesale retrofits. Higher-voltage delivery, mixed cooling configurations, and adaptable data halls are baked in from the start. The goal is not to predict the future perfectly, Cummins stressed, but to avoid painting infrastructure into a corner.

Supply Chain as Competitive Advantage

If flexibility is the design thesis, supply chain control is the execution weapon. “It’s a huge advantage that we locked in our MEP supply chain 18 to 24 months ago,” Cummins said. “It’s a tight environment, and more timelines are going to get missed in 2026 because of it.” Applied Digital moved early to secure long-lead mechanical, electrical, and plumbing components, well before demand pressure fully rippled through transformers, switchgear, chillers, generators, and breakers. That foresight now underpins the company’s ability to make credible delivery commitments while competitors confront procurement bottlenecks. Cummins was blunt: many delays won’t stem from poor planning, but from simple unavailability.

From 100 MW to 700 MW Without Losing Control

The past year marked a structural pivot for Applied Digital. What began as a single, 100-megawatt “field of dreams” facility in North Dakota has become more than 700 MW under construction, with expansion still ahead. “A hundred megawatts used to be considered scale,” Cummins said.
“Now we’re at 700

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE