Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Insights on artificial intelligence infrastructure and the professionals building it.

Bitcoin

Updates on Bitcoin mining and its growing ties to power and energy markets.

Datacenter

Coverage of data center development, operations, and industry trends.

Energy

News on power generation, energy markets, and the grid.


Featured Articles

Favorable Wi-Fi 7 prices won’t be around for long, Dell’Oro Group warns

Another contributing factor is that some Wi-Fi 7 access points have only two radios, whereas Wi-Fi 6 APs generally have three to support the 2.4, 5, and 6 GHz bands, Morgan says. Finally, some vendors offer a wider range of Wi-Fi 7 equipment models than in previous generations. The lower-end models in their portfolios help reduce the average price of all Wi-Fi 7 products, Morgan’s research shows. So whether you pay a premium for Wi-Fi 7 vs. Wi-Fi 6 or 6E may depend on which models you need.

Act now, these deals won’t last

Whatever your particular case, if you are in the market for a Wi-Fi 7 upgrade, don’t dally. “In the overall wireless LAN market, not just Wi-Fi 7, we’re going to start to see prices rise,” Morgan says. Price hikes will be largely due to the uncertain availability of memory chips required for WLAN hardware – an issue that’s driving price hikes across all sorts of equipment. “Vendors have already started to raise list prices, even though it’s been in the few percentage points so far,” she said. “We expect further price hikes over the next year.”

Lead times are also volatile. Channel partners are telling Dell’Oro that lead times can vary day to day, measured in months one day and weeks the next. “There doesn’t seem to be a consistent trend across specific products or specific vendors. It seems volatile across the whole market,” Morgan says. As a result, partners are tightening the windows on how long quotes are valid, because they don’t know how or whether their own pricing will change. While there’s no hard-and-fast rule of thumb, and timing may depend on existing contracts, Morgan says the typical window is probably a matter of weeks.

Read More »

Raising the temp on liquid cooling

IBM isn’t the only one. “We’ve been doing liquid cooling since 2012 on our supercomputers,” says Scott Tease, vice president and general manager of AI and high-performance computing at Lenovo’s infrastructure solutions group. “And we’ve been improving it ever since—we’re now on the sixth generation of that technology.”

And the liquid Lenovo uses in its Neptune liquid cooling solution is warm water. Or, more precisely, hot water: 45 degrees Celsius. And when the water leaves the servers, it’s even hotter, Tease says. “I don’t have to chill that water, even if I’m in a hot climate,” he says. Even at high temperatures, the water still provides enough cooling to the chips that it has real value.

“Generally, a data center will use evaporation to chill water down,” Tease adds. “Since we don’t have to chill the water, we don’t have to use evaporation. That’s huge amounts of savings on the water. For us, it’s almost like a perfect solution. It delivers the highest performance possible, the highest density possible, the lowest power consumption. So, it’s the most sustainable solution possible.”

So how is the water cooled down? It gets piped up to the roof, Tease says, where there are giant radiators with massive amounts of surface area. The heat radiates away, and then all the water flows right back to the servers again. Though not always: the hot water can also be used to, say, heat campus or community swimming pools. “We have data centers in the Nordics who are giving the heat to the local communities’ water systems,” Tease says.

Read More »

GenAI Pushes Cloud to $119B Quarter as AI Networking Race Intensifies

Cisco Targets the AI Fabric Bottleneck

Cisco introduced its Silicon One G300, a new switching ASIC delivering 102.4 Tbps of throughput and designed specifically for large-scale AI cluster deployments. The chip will power next-generation Cisco Nexus 9000 and 8000 systems aimed at hyperscalers, neocloud providers, sovereign cloud operators, and enterprises building AI infrastructure. The company is positioning the platform around a simple premise: at AI-factory scale, the network becomes part of the compute plane.

According to Cisco, the G300 architecture enables:

33% higher network utilization
28% reduction in AI job completion time
Support for emerging 1.6T Ethernet environments
Integrated telemetry and path-based load balancing

Martin Lund, EVP of Cisco’s Common Hardware Group, emphasized the growing centrality of data movement. “As AI training and inference continues to scale, data movement is the key to efficient AI compute; the network becomes part of the compute itself,” Lund said.

The new systems also reflect another emerging trend in AI infrastructure: the spread of liquid cooling beyond servers and into the networking layer. Cisco says its fully liquid-cooled switch designs can deliver nearly 70% energy efficiency improvement compared with prior approaches, while new 800G linear pluggable optics aim to reduce optical power consumption by up to 50%.

Ethernet’s Next Big Test

Industry analysts increasingly view AI networking as one of the most consequential battlegrounds in the current infrastructure cycle. Alan Weckel, founder of 650 Group, noted that backend AI networks are rapidly moving toward 1.6T architectures, a shift that could push the Ethernet data center switch market above $100 billion annually. SemiAnalysis founder Dylan Patel was even more direct in framing the stakes. “Networking has been the fundamental constraint to scaling AI,” Patel said. “At this scale, networking directly determines how much AI compute can actually be utilized.” That reality is driving intense innovation
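As a back-of-the-envelope illustration (editorial arithmetic, not a Cisco specification), a 102.4 Tbps switching ASIC works out to 64 line-rate ports at 1.6T or 128 at 800G, assuming all switching capacity is exposed as front-panel ports:

```python
# Illustrative radix math for a 102.4 Tbps switching ASIC. Assumption:
# the full switching capacity is available as front-panel bandwidth;
# real designs may reserve capacity for other purposes.
ASIC_TBPS = 102.4

def max_ports(port_speed_tbps: float) -> int:
    """Number of ports of a given speed the ASIC could serve at line rate."""
    return int(ASIC_TBPS // port_speed_tbps)

print(max_ports(1.6))  # 64 ports of 1.6T Ethernet
print(max_ports(0.8))  # 128 ports of 800G
```

This is why the 1.6T transition matters for switch radix: each doubling of port speed halves the port count a fixed-capacity ASIC can serve.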

Read More »

From Lab to Gigawatt: CoreWeave’s ARENA and the AI Validation Imperative

The Production Readiness Gap

AI teams continue to confront a familiar challenge: moving from experimentation to predictable production performance. Models that train successfully on small clusters or sandbox environments often behave very differently when deployed at scale. Performance characteristics shift. Data pipelines strain under sustained load. Cost assumptions unravel. Synthetic benchmarks and reduced test sets rarely capture the complex interactions between compute, storage, networking, and orchestration that define real-world AI systems. The result can be an expensive “Day One” surprise: unexpected infrastructure costs, bottlenecks across distributed components, and delays that ripple across product timelines.

CoreWeave’s view is that benchmarking and production launch can no longer be treated as separate phases. Instead, validation must occur in environments that replicate the architectural, operational, and economic realities of live deployment. ARENA is designed around that premise. The platform allows customers to run full workloads on CoreWeave’s production-grade GPU infrastructure, using standardized compute stacks, network configurations, data paths, and service integrations that mirror actual deployment environments. Rather than approximating production behavior, the goal is to observe it directly.

Key capabilities include:

Running real workloads on GPU clusters that match production configurations.
Benchmarking both performance and cost under realistic operational conditions.
Diagnosing bottlenecks and scaling behavior across compute, storage, and networking layers.
Leveraging standardized observability tools and guided engineering support.

CoreWeave positions ARENA as an alternative to traditional demo or sandbox environments, one informed by its own experience operating large-scale AI infrastructure. By validating workloads under production conditions early in the lifecycle, teams gain empirical insight into performance dynamics and cost curves before committing capital and operational resources.

Why Production-Scale Validation Has Become Strategic

The demand for environments like ARENA reflects how fundamentally AI workloads have changed. Several structural shifts are driving the need for production-scale validation:

Continuous, Multi-Layered Workloads

AI systems are no longer
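The cost-curve gap described above can be sketched with simple arithmetic (all numbers and names here are hypothetical illustrations, not CoreWeave data or API): cost per unit of work is cluster cost per hour divided by throughput actually achieved, and throughput at scale often falls short of linear extrapolation from a sandbox run.

```python
# Hypothetical sketch of why sandbox benchmarks understate production cost.
# Rates and throughputs are illustrative assumptions, not vendor figures.
def cost_per_1k_requests(gpus: int, hourly_rate: float,
                         requests_per_hour: float) -> float:
    """Dollar cost per 1,000 requests served by a cluster."""
    cluster_cost_per_hour = gpus * hourly_rate
    return 1000 * cluster_cost_per_hour / requests_per_hour

# A small sandbox cluster, extrapolated linearly...
sandbox = cost_per_1k_requests(gpus=8, hourly_rate=4.0,
                               requests_per_hour=40_000)
# ...vs. a production-scale run where (say) network bottlenecks cut
# per-GPU throughput by 25% at 512 GPUs.
production = cost_per_1k_requests(gpus=512, hourly_rate=4.0,
                                  requests_per_hour=512 / 8 * 40_000 * 0.75)
print(round(sandbox, 2), round(production, 2))  # 0.8 vs. 1.07 per 1k requests
```

Under these assumed numbers, the production cost per request lands roughly a third higher than the sandbox extrapolation, which is exactly the kind of "Day One" surprise production-scale validation is meant to surface early.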

Read More »

Utah’s 4 GW AI Campus Tests the Limits of Speed-to-Power

Back in September 2025, we examined an ambitious proposal from infrastructure developer Joule Capital Partners – often branded as “Joule Power” – in partnership with Caterpillar. The concept is straightforward but consequential: acquire a vast rural tract in Millard County, Utah, and pair an AI-focused data center campus with large-scale, on-site “behind-the-meter” generation to bypass the interconnection queues, transmission constraints, and substation bottlenecks slowing projects nationwide.

The appeal is clear: speed-to-power and greater control over delivery timelines. But that speed shifts the project’s risk profile. Instead of navigating traditional utility procurement, the development begins to resemble a distributed power plant subject to industrial permitting, fuel supply logistics, air emissions scrutiny, noise controls, and groundwater governance. These are issues communities typically associate with generation facilities, not hyperscale data centers.

Our earlier coverage focused on the technical and strategic logic of pairing compute with on-site generation. Now the story has evolved: community opposition is emerging as a material variable that could influence schedule and scope. Although groundbreaking was held in November 2025, final site plans and key conditional use permits remain pending at the time of publication.

What Is Actually Being Proposed?

Public records from Millard County show Joule pursuing a zone change for approximately 4,000 acres (about 6.25 square miles), converting agricultural land near 11000 N McCornick Road to Heavy Industrial use. At a July 2025 public meeting, residents raised familiar concerns that surface when a rural landscape is targeted for hyperscale development: labor influx and housing strain, water use, traffic, dust and wildfire risk, wildlife disruption, and the broader loss of farmland and local character.

What has proven less clear is the precise scale and sequencing of the buildout. Local reporting describes an initial phase of six data center buildings, each supported by a substantial fleet of Caterpillar

Read More »

Execution, Power, and Public Trust: Rich Miller on 2026’s Data Center Reality and Why He Built Data Center Richness

DCF founder Rich Miller has spent much of his career explaining how the data center industry works. Now, with his latest venture, Data Center Richness, he’s also examining how the industry learns. That thread provided the opening for the latest episode of The DCF Show Podcast, where Miller joined current Data Center Frontier Editor in Chief Matt Vincent and Senior Editor David Chernicoff for a wide-ranging discussion that ultimately landed on a simple conclusion: after two years of unprecedented AI-driven announcements, 2026 will be the year reality asserts itself. Projects will either get built, or they won’t. Power will either materialize, or it won’t. Communities will either accept data center expansion, or they’ll stop it. In other words, the industry is entering its execution phase.

Why Data Center Richness Matters Now

Miller launched Data Center Richness as both a podcast and a Substack publication, an effort to experiment with formats and better understand how professionals now consume industry information. Podcasts have become a primary way many practitioners follow the business, while YouTube’s discovery advantages increasingly make video versions essential. At the same time, Miller remains committed to written analysis, using Substack as a venue for deeper dives and format experimentation. One example is his weekly newsletter distilling key industry developments into just a handful of essential links rather than overwhelming readers with volume. The approach reflects a broader recognition: the pace of change has accelerated so much that clarity matters more than quantity.

The topic of how people learn about data centers isn’t separate from the industry’s trajectory; it’s becoming part of it. Public perception, regulatory scrutiny, and investor expectations are now shaped by how stories are told as much as by how facilities are built. That context sets the stage for the conversation’s core theme.

Execution Defines 2026

After

Read More »


Russia’s crude exports signal narrowing buyer pool

A growing number of vessels sailing to unknown destinations and a sharp rise in Russian oil held on water – up as much as 49 million bbl since November 2025 – suggest a shrinking pool of willing buyers. Russian crude exports declined by 350,000 b/d month-on-month, reversing most of December’s 360,000 b/d increase. The bulk of the drop came from the Black Sea, while product exports rose by 260,000 b/d, largely driven by heavy product flows (+200,000 b/d).

Higher prices boosted revenues across both crude and products. Product revenues climbed by $330 million, more than offsetting a $210 million decline in crude export revenues. Separately, Russia reported a 24% year-on-year decline in 2025 oil and gas tax revenues, to about $110 billion.

Under the European Union (EU)’s revised mechanism, the price cap on Russian crude was lowered to $44.10/bbl as of Feb. 2. Urals Primorsk averaged $40.06/bbl in January. Of total crude exports, 65% were sold by Russian proxy companies, 13% by sanctioned firms, and 21% by other companies. Among the proxy companies, Redwood Global FZE LLC – Rosneft’s substitute – remained the largest crude exporter, supplying 1 million b/d to China and India last month.

Russian crude imports

EU enforcement measures are beginning to reshape trade flows. Since Jan. 21, EU buyers have been required to more rigorously verify the origin of imported products. In 2025, the EU-27 and UK sourced 12% of their middle distillate imports from refineries in India and Türkiye processing Russian crude. India’s Jamnagar refinery halted Russian crude imports in mid-December to comply, as Europe accounted for 40% of its middle distillate exports last year. As a result, EU and UK reliance on seaborne Russian-origin molecules fell to 1.6% in January, with most cargoes shipped before Jan. 21 and largely originating from Türkiye. Meanwhile, EU middle distillate imports from the US rose by
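Two quick consistency checks on the figures above (editorial arithmetic only): the product-revenue gain less the crude decline implies a net month-on-month export-revenue increase of about $120 million, and January's Urals Primorsk average sat roughly $4/bbl below the new price cap.

```python
# Consistency checks on the export-revenue and price-cap figures cited above
# (editorial arithmetic; revenue changes in millions of USD, month-on-month).
product_revenue_change = 330
crude_revenue_change = -210
net_change = product_revenue_change + crude_revenue_change
print(net_change)  # 120: products more than offset the crude decline

# Urals Primorsk averaged $40.06/bbl in January vs. the $44.10/bbl cap.
cap, urals = 44.10, 40.06
print(round(cap - urals, 2))  # 4.04/bbl below the cap
```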

Read More »

Commonwealth LNG signs 20-year LNG supply deal with Aramco Trading

Commonwealth LNG, a Caturus company, signed a long-term LNG supply agreement with Aramco Trading, a subsidiary of Saudi Aramco. Under the sale and purchase agreement, Aramco Trading will purchase 1 million tonnes/year (tpy) of LNG from the 9.5-million tpy Commonwealth LNG plant currently under development on the Gulf Coast in Cameron Parish, La.

Caturus is working to secure the project’s remaining capacity as it aims for a final investment decision (FID) on the plant. The company holds long-term offtake contracts with Glencore, JERA, Petronas, Mercuria, and EQT.

In December 2025, the company authorized full purchase orders to certain industry partners supporting development of the project. The purchase orders are being executed via Commonwealth’s engineering, procurement, and construction partner Technip Energies. They address the long-lead-time equipment needed to advance construction under Commonwealth’s modular approach, and include orders with Baker Hughes for six mixed-refrigerant compressors driven by LM9000 gas turbines; Honeywell, to supply six main cryogenic heat exchangers; and Solar Turbines, providing four Titan 350 gas turbine-generators. At the time, Caturus said FID on the project was expected in first-quarter 2026.

Read More »

Exxon Mobil Guyana prepares Errea Wittu FPSO at Uaru field

Exxon Mobil Guyana Ltd. subcontractor Jumbo Offshore, on behalf of Modec, has completed mooring pre-installation for the Errea Wittu floating production, storage, and offloading (FPSO) unit at Uaru field, Stabroek block, offshore Guyana. Jumbo Offshore performed installation engineering, procurement, mobilization, and marshaling activities to support the deepwater pre-lay mooring project. The offshore campaign was executed using the Fairplayer J-class installation vessel.

Errea Wittu is expected to produce 250,000 b/d of oil and will have a gas treatment capacity of 540 MMcfd. The unit will have a water injection capacity of 350,000 b/d, a produced water capacity of 300,000 b/d, and a storage capacity of 2 million bbl of crude oil.

Uaru field lies 200 km offshore Guyana at a water depth of 1,750 m. The fifth project on Guyana’s offshore Stabroek block, Uaru is estimated to hold more than 800 million bbl of oil. First oil is expected this year. In its fourth-quarter 2025 earnings call on Jan. 30, 2026, the company noted record full-year production from Guyana of more than 700,000 b/d across its first four developments.

Read More »

US rig count unchanged, Canada rig count dips

The active rig count in the US was unchanged from last week, with 551 rigs running for the week ended Feb. 13, according to Baker Hughes data. The number of working oil-directed rigs in the US decreased by 3 units to 409 for the week; the count is down 72 units year-over-year. Gas-directed rigs increased by 3 units to 133, up 32 units year-over-year. Nine rigs considered unclassified remained active during the week, unchanged from last week.

The number of working US land-based rigs declined by 1 to 531. Horizontal rigs decreased 2 units to 481. Directional drilling rigs increased by 2 to 57 for the week. The vertical rig count was unchanged this week at 13 rigs working. The number of rigs working offshore increased by 1 to end the week with 17 working rigs.

Louisiana saw its rig count increase by 2 units to end the week with 41 rigs. New Mexico, Pennsylvania, and Wyoming each saw rig counts increase by a single unit this week, to respective counts of 102, 20, and 17. Texas dropped 3 rigs to leave 229 running for the week. Rig counts in Oklahoma and North Dakota fell by one unit each, leaving 45 rigs running in Oklahoma and 26 in North Dakota.

Canada’s rig count fell by 6 rigs to 222. The count is down 23 units from this time a year ago. The number of gas-directed rigs decreased by 4 units to 69. The oil-directed rig count fell by 2 units to leave 153 units working.
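The breakdowns above can be cross-checked against the 551-rig total (a quick editorial sanity check, not Baker Hughes methodology): the oil/gas/unclassified split and the horizontal/directional/vertical split each sum to 551.

```python
# Cross-check the US rig-count breakdowns reported above: each way of
# slicing the weekly total should sum to the same 551 figure.
total = 551
by_target = {"oil": 409, "gas": 133, "unclassified": 9}
by_trajectory = {"horizontal": 481, "directional": 57, "vertical": 13}

assert sum(by_target.values()) == total
assert sum(by_trajectory.values()) == total

# Land vs. offshore (531 + 17 = 548) leaves 3 rigs unaccounted for,
# suggesting a small category (e.g. inland waters) not broken out above.
print(total - 531 - 17)  # 3
```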

Read More »

No action in US-Iran conflict reduces market risk premium

Oil, fundamental analysis

With little progress being made in the US-Iran talks, and with no military action by either side, traders reduced the market risk premium this week with a wait-and-see attitude. An unexpectedly large gain in crude inventory and an increase in gasoline stocks provided bearish momentum for prices to move lower. US prices still remained above the key $60/bbl level. WTI had a high of $65.85/bbl on Wednesday with a weekly low of $61.15 on Friday. Brent crude’s high was $70.70/bbl on Wednesday, while its low was $66.90 Friday. Both grades settled lower week-on-week. The WTI/Brent spread widened to ($4.90) on the earlier-week rally; look for this to tighten next week.

US-Iran talks are scheduled to continue, which lends an optimistic tone; however, a second US aircraft carrier is reported to be heading into the Middle East, a move that could add risk premium back into oil markets. US-flagged ships were told to avoid Iranian waters when traversing the Strait of Hormuz. Israeli PM Netanyahu visited the White House this week to present his country’s demands for limitations on Iran’s uranium enrichment and its backing of rebel groups like Hamas and Hezbollah.

With near-term concerns regarding supply disruption abating, the market has returned to a focus on oversupply. The International Energy Agency (IEA) in Paris has lowered its forecast for global crude demand for this year, while stating that global inventories last year grew at their strongest pace since 2020. On the other hand, OPEC+ group output for January fell by 440,000 b/d. As part of a wider trade deal, India has agreed to halt its purchases of Russian crude. In return, the US will slash tariffs on imports from India from the punitive 50% back down to the 18% level. US exploration and

Read More »

Energy Secretary Prevents Closure of Coal Plant That Provided Essential Power During Winter Storm

WASHINGTON—U.S. Secretary of Energy Chris Wright renewed an emergency order to address critical grid reliability issues facing the Midwestern region of the United States. The emergency order directs the Midcontinent Independent System Operator (MISO), in coordination with Consumers Energy, to ensure that the J.H. Campbell coal-fired power plant (Campbell Plant) in West Olive, Michigan, takes all steps necessary to remain available to operate and employs economic dispatch to minimize costs for the American people. The Campbell Plant was originally scheduled to shut down on May 31, 2025 — 15 years before the end of its scheduled design life. “The energy sources that perform when you need them most are inherently the most valuable—that’s why beautiful, clean coal was the MVP of recent winter storms,” Secretary Wright said. “Hundreds of American lives have likely been saved because of President Trump’s actions saving America’s coal plants, including this Michigan coal plant which ran daily during Winter Storm Fern. This emergency order will mitigate the risk of blackouts and maintain affordable, reliable, and secure electricity access across the region.” The Campbell Plant was integral in stabilizing the grid during the recent winter storms. The plant operated at over 650 megawatts every day before and during Winter Storm Fern, January 21-February 1, proving that allowing it to cease operations would needlessly contribute to grid fragility. Thanks to President Trump’s leadership, coal plants across the country are reversing plans to shut down. In 2025, more than 17 gigawatts of coal-powered electricity generation were saved ahead of Winter Storm Fern. Since the Department of Energy’s (DOE) original order issued on May 23, the Campbell Plant has proven critical to MISO’s operations, operating regularly during periods of high energy demand and low levels of intermittent energy production. Subsequent orders were issued on August 20, 2025 and November 18, 2025.
As outlined in DOE’s Resource

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited about the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.  I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.  On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.
It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).  People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.  Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?  In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.  Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.  And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online.
Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.  For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)  “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”  That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video.  “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.  There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online.
It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?  I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.  “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.  “You’re always dealing in percentages.
What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.  “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”  But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?   Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.   “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.  Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. 
“The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”  Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”  “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”  He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?  A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. 
OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.  According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says.  OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.  “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.
Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.  Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written.
But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”  When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them.  “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed!  The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain many things, but not when it comes to its own answers.  It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”  We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.  The search results we see from generative AI are best understood as a waypoint rather than a destination.
What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.  Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.  “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. 
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.  In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.  But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.
These are the kinds of things that start to happen when you take the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news release.

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.

We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.

Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither had responded to Rigzone’s request. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market.
Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW from its facilities in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217 million profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

Gemini 3.1 Pro: A smarter model for your most complex tasks

What’s next

Since releasing Gemini 3 Pro in November, your feedback and the pace of progress have driven these rapid improvements. We are releasing 3.1 Pro in preview today to validate these updates and continue to make further advancements in areas such as ambitious agentic workflows before we make it generally available soon.

Starting today, Gemini 3.1 Pro in the Gemini app is rolling out with higher limits for users with the Google AI Pro and Ultra plans. 3.1 Pro is also now available on NotebookLM exclusively for Pro and Ultra users. And developers and enterprises can access 3.1 Pro now in preview in the Gemini API via AI Studio, Antigravity, Vertex AI, Gemini Enterprise, Gemini CLI and Android Studio.

We can’t wait to see what you build and discover with it.

Read More »

The Download: autonomous narco submarines, and virtue signaling chatbots

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How uncrewed narco subs could transform the Colombian drug trade

For decades, handmade narco subs have been some of the cocaine trade’s most elusive and productive workhorses, ferrying multi-ton loads of illicit drugs from Colombian estuaries toward markets in North America and, increasingly, the rest of the world. Now off-the-shelf technology—Starlink terminals, plug-and-play nautical autopilots, high-resolution video cameras—may be advancing that cat-and-mouse game into a new phase.

Uncrewed subs could move more cocaine over longer distances, and they wouldn’t put human smugglers at risk of capture. And law enforcement around the world is just beginning to grapple with what this means for the future. Read the full story.

—Eduardo Echeverri López
This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.
Google DeepMind wants to know if chatbots are just virtue signaling

The news: Google DeepMind is calling for the moral behavior of large language models—such as what they do when called on to act as companions, therapists, medical advisors, and so on—to be scrutinized with the same kind of rigor as their ability to code or do math.

Why it matters: As LLMs improve, people are asking them to play more and more sensitive roles in their lives. Agents are starting to take actions on people’s behalf. LLMs may be able to influence human decision-making. And yet nobody knows how trustworthy this technology really is at such tasks. Read the full story.

—Will Douglas Heaven

The building legal case for global climate justice

The United States and the European Union grew into economic superpowers by committing climate atrocities. They have burned a wildly disproportionate share of the world’s oil and gas, planting carbon time bombs that will detonate first in the poorest, hottest parts of the globe.

Morally, there’s an ironclad case that the countries or companies responsible for this mess should provide compensation. Legally, though, the case has been far harder to make. But now those tides might be turning. Read the full story.

—James Temple

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is building an online portal to access content banned elsewhere
The freedom.gov site is Washington’s broadbrush solution to global censorship. (Reuters)
+ The Trump administration is on a mission to train a cadre of elite coders. (FT $)

2 Mark Zuckerberg overruled wellbeing experts to keep beauty filters on Instagram
Because removing them may have impinged on “free expression,” apparently. (FT $)
+ The CEO claims that increasing engagement is not Instagram’s goal. (CNBC)
+ Instead, the company’s true calling is to give its users “something useful”. (WSJ $)
+ A new investigation found Meta is failing to protect children from predators. (WP $)

3 Silicon Valley is working on a shadow power grid for US data centers
AI firms are planning to build their own private power plants across the US. (WP $)
+ They’re pushing the narrative that generative AI will save the Earth. (Wired $)
+ We need better metrics to measure data center sustainability with. (IEEE Spectrum)
+ The data center boom in the desert. (MIT Technology Review)

4 Russian forces are struggling with Starlink and Telegram crackdowns
New restrictions have left troops without a means to communicate. (Bloomberg $)

5 Bill Gates won’t speak at India’s AI summit after all
Given the growing controversy surrounding his ties to Jeffrey Epstein. (BBC)
+ The event has been accused of being disorganized and poorly managed. (Reuters)
+ AI leaders didn’t appreciate this awkward photoshoot. (Bloomberg $)

6 AI software sales are slowing down
Last year’s boom appears to be waning, vendors have warned. (WSJ $)
+ What even is the AI bubble? (MIT Technology Review)

7 eBay has acquired its clothes resale rival Depop 👚
It’s a naked play to corner younger Gen Z shoppers. (NYT $)
8 There’s a lot more going on inside cells than we originally thought
It’s seriously crowded inside there. (Quanta Magazine)

9 What it means to create a chart-topping app
Does anyone care any more? (The Verge)

10 Do we really need eight hours of sleep?
Research suggests some people really are fine operating on as little as four hours of snooze time. (New Yorker $)
Quote of the day

“Too often, those victims have been left to fight alone… That is not justice. It is failure.”

—Keir Starmer, the UK’s prime minister, outlines plans to force technology firms to remove deepfake nudes and revenge porn within 48 hours or risk being blocked in the UK, the Guardian reports.

One more thing
End of life decisions are difficult and distressing. Could AI help?

End-of-life decisions can be extremely upsetting for surrogates—the people who have to make those calls on behalf of another person. Friends or family members may disagree over what’s best for their loved one, which can lead to distressing situations.

David Wendler, a bioethicist at the US National Institutes of Health, and his colleagues have been working on an idea for something that could make things easier: an artificial intelligence-based tool that can help surrogates predict what the patients themselves would want in any given situation.

Wendler hopes to start building their tool as soon as they secure funding for it, potentially in the coming months. But rolling it out won’t be simple. Critics wonder how such a tool can ethically be trained on a person’s data, and whether life-or-death decisions should ever be entrusted to AI. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Oakland Library keeps a remarkable public log of all the weird and wonderful artefacts their librarians find tucked away in the pages of their books.
+ Orchids are beautiful, but temperamental. Here’s how to keep them alive.
+ I love that New York’s Transit Museum is holding a Pizza Rat Debunked event.
+ These British indie bands aren’t really lauded at home—but in China, they’re treated like royalty.

Read More »

How uncrewed narco subs could transform the Colombian drug trade

On a bright morning last April, a surveillance plane operated by the Colombian military spotted a 40-foot-long shark-like silhouette idling in the ocean just off Tayrona National Park. It was, unmistakably, a “narco sub,” a stealthy fiberglass vessel that sails with its hull almost entirely underwater, used by drug cartels to move cocaine north. The plane’s crew radioed it in, and eventually nearby coast guard boats got the order, routine but urgent: Intercept. In Cartagena, about 150 miles from the action, Captain Jaime González Zamudio, commander of the regional coast guard group, sat down at his desk to watch what happened next. On his computer monitor, icons representing his patrol boats raced toward the sub’s coordinates as updates crackled over his radio from the crews at sea. This was all standard; Colombia is the world’s largest producer of cocaine, and its navy has been seizing narco subs for decades. And so the captain was pretty sure what the outcome would be. His crew would catch up to the sub, just a bit of it showing above the water’s surface. They’d bring it to heel, board it, and force open the hatch to find two, three, maybe four exhausted men suffocating in a mix of diesel fumes and humidity, and a cargo compartment holding several tons of cocaine. The boats caught up to the sub. A crew boarded, forced open the hatch, and confirmed that the vessel was secure. But from that point on, things were different. First, some unexpected details came over the radio: There was no cocaine on board. Neither was there a crew, nor a helm, nor even enough room for a person to lie down. Instead, inside the hull the crew found a fuel tank, an autopilot system and control electronics, and a remotely monitored security camera. González Zamudio’s crew started sending pictures back to Cartagena: Bolted to the hull was another camera, as well as two plastic rectangles, each about the size of a cookie sheet—antennas for connecting to Starlink satellite internet.
The authorities towed the boat back to Cartagena, where military techs took a closer look. Weeks later, they came to an unsettling conclusion: This was Colombia’s first confirmed uncrewed narco sub. It could be operated by remote control, but it was also capable of some degree of autonomous travel. The techs concluded that the sub was likely a prototype built by the Clan del Golfo, a powerful criminal group that operates along the Caribbean coast. For decades, handmade narco subs have been some of the cocaine trade’s most elusive and productive workhorses, ferrying multi-ton loads of illicit drugs from Colombian estuaries toward markets in North America and, increasingly, the rest of the world. Now off-the-shelf technology—Starlink terminals, plug-and-play nautical autopilots, high-resolution video cameras—may be advancing that cat-and-mouse game into a new phase.
Uncrewed subs could move more cocaine over longer distances, and they wouldn’t put human smugglers at risk of capture. Law enforcement around the world is just beginning to grapple with what the Tayrona sub means for the future—whether it was merely an isolated experiment or the opening move in a new era of autonomous drug smuggling at sea.

Drug traffickers love the ocean. “You can move drug traffic through legal and illegal routes,” says Juan Pablo Serrano, a captain in the Colombian navy and head of the operational coordination center for Orión, a multiagency, multinational counternarcotics effort. The giant container ships at the heart of global commerce offer a favorite approach, Serrano says. Bribe a chain of dockworkers and inspectors, hide a load in one of thousands of cargo boxes, and put it on a totally legal commercial vessel headed to Europe or North America. That route is slow and expensive—involving months of transit and bribes spread across a wide network—but relatively low risk. “A ship can carry 5,000 containers. Good luck finding the right one,” he says. Far less legal, but much faster and cheaper, are small, powerful motorboats. Quick to build and cheap to crew, these “go-fasts” top out at just under 50 feet long and can move smaller loads in hours rather than days. But they’re also easy for coastal radars and patrols to spot. Submersibles—or, more accurately, “semisubmersibles”—fit somewhere in the middle. They take more money and engineering to build than an open speedboat, but they buy stealth—even if a bit of the vessel rides at the surface, the bulk stays hidden underwater. That adds another option to a portfolio that smugglers constantly rebalance across three variables: risk, time, and cost. When US and Colombian authorities tightened control over air routes and commercial shipping in the early 1990s, subs became more attractive.
The first ones were crude wooden hulls with a fiberglass shell and extra fuel tanks, cobbled together in mangrove estuaries, hidden from prying eyes. Today’s fiberglass semisubmersible designs ride mostly below the surface, relying on diesel engines that can push multi-ton loads for days at a time while presenting little more than a ripple and a hot exhaust pipe to radar and infrared sensors. Most ferry between South American coasts and handoff points in Central America and Mexico, where allied criminal organizations break up the cargo and slowly funnel it toward the US. But some now go much farther. In 2019, Spanish authorities intercepted a semisubmersible after a 27-day transatlantic voyage from Brazil. In 2024, police in the Solomon Islands found the first narco sub in the Asia-Pacific region, a semisubmersible probably originating from Colombia on its way to Australia or New Zealand. If the variables are risk, time, and cost, then the economics of a narco sub are simple. Even if they spend more time on the water than a powerboat, they’re less likely to get caught—and a relative bargain to produce. A narco sub might cost between $1 million and $2 million to build, but a kilo of cocaine costs just about $500 to make. “By the time that kilo reaches Europe, it can sell for between $44,000 and $55,000,” Serrano says. A typical semisubmersible carries up to three metric tons—cargo worth well over $160 million at European wholesale prices. As a result, narco subs are getting more common.
Seizures by authorities tripled in the last 20 years, according to Colombia’s International Center for Research and Analysis Against Maritime Drug Trafficking (CMCON), and Serrano admits that the Orión alliance has enough ships and aircraft to catch only a fraction of what sails. Until now, though, narco subs have had one major flaw: They depended on people, usually poor fishermen or low-level recruits sealed into stifling compartments for days at a time, steering by GPS and sight, hoping not to be spotted. That made the subs expensive and a risk to drug sellers if captured. Like good capitalists, the Tayrona boat’s builders seem to have been trying to cut labor costs with automation. No crew means more room for drugs or fuel and no sailors to pay—or to get arrested or flip if a mission goes wrong.
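The margin arithmetic behind those figures is stark. A back-of-envelope sketch using the numbers quoted in the article (taking the high-end wholesale price Serrano cites and a full three-ton load; the variable names are purely illustrative):

```python
# Back-of-envelope narco-sub economics, using figures quoted in the article.
# Assumed high-end values: $55,000/kg European wholesale, 3-metric-ton cargo.
build_cost = 2_000_000            # upper estimate to build one semisubmersible, USD
production_cost_per_kg = 500      # cost to produce a kilo of cocaine, USD
wholesale_price_per_kg = 55_000   # high-end European wholesale price, USD
cargo_kg = 3_000                  # three metric tons

cargo_value = cargo_kg * wholesale_price_per_kg
outlay = build_cost + cargo_kg * production_cost_per_kg

print(f"cargo value at wholesale: ${cargo_value:,}")   # $165,000,000
print(f"sub + product outlay:     ${outlay:,}")        # $3,500,000
print(f"value-to-outlay ratio:    {cargo_value / outlay:.0f}x")
```

Even if a vessel is lost or seized, the exposure is a few million dollars against a potential nine-figure payoff, which is why traffickers can treat the subs themselves as disposable.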

“If you don’t have a person or people on board, that makes the transoceanic routes much more feasible,” says Henry Shuldiner, a researcher at InSight Crime who has analyzed hundreds of narco-sub cases. It’s one thing, he notes, to persuade someone to spend a day or two going from Colombia to Panama for a big payout; it’s another to ask four people to spend three weeks sealed inside a cramped tube, sleeping, eating, and relieving themselves in the same space. “That’s a hard sell,” Shuldiner says. An uncrewed sub doesn’t have to race to a rendezvous because a crew can endure only a few days inside. It can move more slowly and stealthily. It can wait out patrols or bad weather, loiter near a meeting point, or take longer and less well-monitored routes. And if something goes wrong—if a military plane appears or navigation fails—its owners can simply scuttle the vessel from afar. Meanwhile, the basic technology to make all that work is getting more and more affordable, and the potential profit margins are rising. “The rapidly approaching universality of autonomous technology could be a nightmare for the U.S. Coast Guard,” wrote two Coast Guard officers in the US Naval Institute’s journal Proceedings in 2021. And as if to prove how good an idea drone narco subs are, the US Marine Corps and the weapons builder Leidos are testing a low-profile uncrewed vessel called the Sea Specter, which they describe as being “inspired” by narco-sub design. The possibility that drug smugglers are experimenting with autonomous subs isn’t just theoretical. Law enforcement agencies on other smuggling routes have found signs the Tayrona sub isn’t an isolated case. In 2022, Spanish police seized three small submersible drones near Cádiz, on Spain’s southern coast. Two years later, Italian authorities confiscated a remote-controlled minisubmarine they believed was intended for drug runs.
“The probability of expansion is high,” says Diego Cánovas, a port and maritime security expert in Spain. Tayrona, the biggest and most technologically advanced uncrewed narco sub found so far, is more likely a preview than an anomaly. Today, the Tayrona semisubmersible sits on a strip of grass at the ARC Bolívar naval base in Cartagena. It’s exposed to the elements; rain has streaked its paint. To one side lies an older, bulkier narco sub seized a decade ago, a blue cylinder with a clumsy profile. The Tayrona’s hull looks lower, leaner, and more refined. Up close, it is also unmistakably handmade. The hull is a dull gray-blue, the fiberglass rough in places, with scrapes and dents from the tow that brought it into port. It has no identifying marks on the exterior—nothing that would tie it to a country, a company, or a port. On the upper surface sit the two Starlink antennas, painted over in the same gray-blue to keep them from standing out against the sea. I climb up a ladder and drop through the small hatch near the stern. Inside, the air is damp and close, the walls beaded with condensation. Small puddles of fuel have collected in the bilge. The vessel has no seating, no helm or steering wheel, and not enough space to stand up straight or lie down. It’s clear it was never meant to carry people. A technical report by CMCON found that the sub would have enough fuel for a journey of some 800 nautical miles, and the central cargo bay would hold between 1 and 1.5 tons of cocaine. At the aft end, the machinery compartment is a tangle of hardware: diesel engine, batteries, pumps, and a chaotic bundle of cables feeding an electronics rack. All the core components are still there. Inside that rack, investigators identified a NAC-3 autopilot processor, a commercial unit designed to steer midsize boats by tying into standard hydraulic pumps, heading sensors, and rudder-feedback systems. They cost about $2,200 on Amazon.
“These are plug-and-play technologies,” says Wilmar Martínez, a mechatronics professor at the University of America in Bogotá, when I show him pictures of the inside of the sub. “Midcareer mechatronics students could install them.” For all its advantages, an autonomous drug-smuggling submarine wouldn’t be invincible. Even without a crew on board, there are still people in the chain. Every satellite internet terminal—Starlink or not—comes with a billing address, a payment method, and a log of where and when it pings the constellation. Colombian officers have begun to talk about negotiating formal agreements with providers, asking them to alert authorities when a transceiver’s movements match known smuggling patterns. Brazil’s government has already cut a deal with Starlink to curb criminal use of its service in the Amazon.
The basic playbook for finding a drone sub will look much like the one for crewed semisubmersibles. Aircraft and ships will use radar to pick out small anomalies and infrared cameras to look for the heat of a diesel engine or the turbulence of a wake. That said, it might not work. “If they wind up being smaller, they’re going to be darn near impossible to detect,” says Michael Knickerbocker, a former US Navy officer who advises defense tech firms. Autonomous drug subs are “a great example of how resilient cocaine traffickers are, and how they’re continuously one step ahead of authorities,” says one researcher. Even worse, navies already act on only a fraction of their intelligence leads because they don’t have enough ships and aircraft. The answer, Knickerbocker argues, is “robot on robot.” Navies and coast guards will need swarms of their own small, relatively cheap uncrewed systems—surface vessels, underwater gliders, and long-endurance aerial vehicles that can loiter, sense, and relay data back to human operators. Those experiments have already begun. The US 4th Fleet, which covers Latin America and the Caribbean, is experimenting with uncrewed platforms in counternarcotics patrols. Across the Atlantic, the European Union’s European Maritime Safety Agency operates drones for maritime surveillance. Today, though, the major screens against oceangoing vessels of all kinds are coastal radar networks. Spain operates SIVE to watch over choke points like the Strait of Gibraltar, and in the Pacific, Australia’s over-the-horizon radar network, JORN, can spot objects hundreds of miles away, far beyond the range of conventional radar. Even so, it’s not enough to just spot an uncrewed narco sub. Law enforcement also has to stop it—and that will be tricky. 
To find drone subs, international law enforcement will likely have to rely on networks of surveillance systems and, someday, swarms of their own drones. CARLOS PARRA RIOS

With a crewed vessel, Colombian doctrine says coast guard units should try to hail the boat first with lights, sirens, radio calls, and warning shots. If that fails, interceptor crews sometimes have to jump aboard and force the hatch. Officers worry that future autonomous craft could be wired to sink or even explode if someone gets too close. “If they get destroyed, we may lose the evidence,” says Víctor González Badrán, a navy captain and director of CMCON. “That means no seizure and no legal proceedings against that organization.” That’s where electronic warfare enters the picture—radio-frequency jamming, cyber tools, perhaps more exotic options. In the simplest version, jamming means flooding the receiver with noise so that commands from the operator never reach the vessel. Spoofing goes a step further, feeding fake signals so that the sub thinks it’s somewhere else or obediently follows a fake set of waypoints. Cyber tools might aim higher up the chain, trying to penetrate the software that runs the vessel or the networks it uses to talk to satellite constellations. At the cutting edge of these countermeasures are electromagnetic pulses designed to fry electronics outright, turning a million-dollar narco sub into a dead hull drifting at sea.
In reality, the tools that might catch a future Tayrona sub are unevenly distributed, politically sensitive, and often experimental. Powerful cyber or electromagnetic tricks are closely guarded secrets; using them in a drug case risks exposing capabilities that militaries would rather reserve for wars. Systems like Australia’s JORN radar are tightly held national security assets, their exact performance specs classified, and sharing raw data with countries on the front lines of the cocaine trade would inevitably mean revealing hints as to how they got it. “Just because a capability exists doesn’t mean you employ it,” Knickerbocker says.

Analysts don’t think uncrewed narco subs will reshape the global drug trade, despite the technological leap. Trafficking organizations will still hedge their bets across those three variables, hiding cocaine in shipping containers, dissolving it into liquids and paints, racing it north in fast boats. “I don’t think this is revolutionary,” Shuldiner says. “But it’s a great example of how resilient cocaine traffickers are, and how they’re continuously one step ahead of authorities.” There’s still that chance, though, that everything international law enforcement agencies know about drug smuggling is about to change. González Zamudio says he keeps getting requests from foreign navies, coast guards, and security agencies to come see the Tayrona sub. He greets their delegations, takes them out to the strip of grass on the base, and gives them tours, walking them around the vessel. It has become a kind of pilgrimage. Everyone who makes it worries that the next time a narco sub appears near a distant coastline, they’ll board it as usual, force the hatch—and find it full of cocaine and gadgets, but without a single human occupant. And no one knows what happens after that.

Eduardo Echeverri López is a journalist based in Colombia.

Read More »

The building legal case for global climate justice

The United States and the European Union grew into economic superpowers by committing climate atrocities. They have burned a wildly disproportionate share of the world’s oil and gas, planting carbon time bombs that will detonate first in the poorest, hottest parts of the globe.  Meanwhile, places like the Solomon Islands and Chad—low-lying or just plain sweltering—have emitted relatively little carbon dioxide, but by dint of their latitude and history, they rank among the countries most vulnerable to the fiercest consequences of global warming. That means increasingly devastating cyclones, heat waves, famines, and floods. Morally, there’s an ironclad case that the countries or companies responsible for this mess should provide compensation for the homes that will be destroyed, the shorelines that will disappear beneath rising seas, and the lives that will be cut short. By one estimate, the major economies owe a climate debt to the rest of the world approaching $200 trillion in reparations. Legally, though, the case has been far harder to make. Even putting aside the jurisdictional problems, early climate science couldn’t trace the provenance of airborne molecules of carbon dioxide across oceans and years. Deep-pocketed corporations with top-tier legal teams easily exploited those difficulties. 
Now those tides might be turning. More climate-related lawsuits are getting filed, particularly in the Global South. Governments, nonprofits, and citizens in the most climate-exposed nations continue to test new legal arguments in new courts, and some of those courts are showing a new willingness to put nations and their industries on the hook as a matter of human rights. In addition, the science of figuring out exactly who is to blame for specific weather disasters, and to what degree, is getting better and better.  It’s true that no court has yet held any climate emitter liable for climate-related damages. For starters, nations are generally immune from lawsuits originating in other countries. That’s why most cases have focused on major carbon producers. But they’ve leaned on a pretty powerful defense. 
While oil and gas companies extract, refine, and sell the world’s fossil fuels, most of the emissions come out of “the vehicles, power plants, and factories that burn the fuel,” as Michael Gerrard and Jessica Wentz, of Columbia Law School’s Sabin Center, note in a recent piece in Nature. In other words, companies just dig the stuff up. It’s not their fault someone else sets it on fire. So victims of extreme weather events continue to try new legal avenues and approaches, backed by ever-more-convincing science. Plaintiffs in the Philippines recently sued the oil giant Shell over its role in driving Super Typhoon Odette, a 2021 storm that killed more than 400 people and displaced nearly 800,000. The case relies partially on an attribution study that found climate change made extreme rainfall like that seen in Odette twice as likely. 

Read More »

What It Takes to Make Agentic AI Work in Retail

Thank you for joining us on the “Enterprise AI hub.” In this episode of the Infosys Knowledge Institute Podcast, Dylan Cosper speaks with Prasad Banala, Director of Software Engineering at a large US-based retail organization, about operationalizing agentic AI across the software development lifecycle. Prasad explains how his team applies AI to validate requirements, generate and analyze test cases, and accelerate issue resolution, while maintaining strict governance, human-in-the-loop review, and measurable quality outcomes. Click here to continue.

Read More »

From Integration Chaos to Digital Clarity: Nutrien Ag Solutions’ Post-Acquisition Reset

Thank you for joining us on the “Enterprise AI hub.” In this episode of the Infosys Knowledge Institute Podcast, Dylan Cosper speaks with Sriram Kalyan, Head of Applications and Data at Nutrien Ag Solutions, Australia, about turning a high-risk post-acquisition IT landscape into a scalable digital foundation. Sriram shares how the merger of two major Australian agricultural companies created duplicated systems, fragile integrations, and operational risk, compounded by the sudden loss of key platform experts and partners. He explains how leadership alignment, disciplined platform consolidation, and a clear focus on business outcomes transformed integration from an invisible liability into a strategic enabler, positioning Nutrien Ag Solutions for future growth, cloud transformation, and enterprise scale. Click here to continue.

Read More »

Favorable Wi-Fi 7 prices won’t be around for long, Dell’Oro Group warns

Another contributing factor is that some Wi-Fi 7 access points have only two radios, whereas Wi-Fi 6 APs generally have three to support 2.4, 5 and 6 GHz bands, Morgan says. Finally, some vendors offer a wider range of Wi-Fi 7 equipment models than in previous generations. The lower-end models in their portfolios help reduce the average price of all Wi-Fi 7 products, Morgan’s research shows. So, whether you pay a premium for Wi-Fi 7 vs. Wi-Fi 6 or 6E may depend on which models you need.

Act now, these deals won’t last

Whatever your particular case, if you are in the market for a Wi-Fi 7 upgrade, don’t dally. “In the overall wireless LAN market, not just Wi-Fi 7, we’re going to start to see prices rise,” Morgan says. Price hikes will be largely due to the uncertain availability of memory chips required for WLAN hardware – an issue that’s driving price hikes across all sorts of equipment. “Vendors have already started to raise list prices, even though it’s been in the few percentage points so far,” she said. “We expect further price hikes over the next year.” Lead times are also volatile. Channel partners are telling Dell’Oro that lead times can vary day-to-day, measured in months one day and weeks the next. “There doesn’t seem to be a consistent trend across specific products or specific vendors. It seems volatile across the whole market,” Morgan says. As a result, partners are tightening the windows on how long quotes are valid, because they don’t know how or whether their own pricing will change. While there’s no hard-and-fast rule of thumb, and timing may depend on existing contracts, Morgan says the typical window is probably a matter of weeks.

Read More »

Raising the temp on liquid cooling

IBM isn’t the only one. “We’ve been doing liquid cooling since 2012 on our supercomputers,” says Scott Tease, vice president and general manager of AI and high-performance computing at Lenovo’s infrastructure solutions group. “And we’ve been improving it ever since—we’re now on the sixth generation of that technology.” And the liquid Lenovo uses in its Neptune liquid cooling solution is warm water. Or, more precisely, hot water: 45 degrees Celsius. And when the water leaves the servers, it’s even hotter, Tease says. “I don’t have to chill that water, even if I’m in a hot climate,” he says. Even at high temperatures, the water still cools the chips effectively enough to deliver real value. “Generally, a data center will use evaporation to chill water down,” Tease adds. “Since we don’t have to chill the water, we don’t have to use evaporation. That’s huge amounts of savings on the water. For us, it’s almost like a perfect solution. It delivers the highest performance possible, the highest density possible, the lowest power consumption. So, it’s the most sustainable solution possible.” So, how is the water cooled down? It gets piped up to the roof, Tease says, where there are giant radiators with massive amounts of surface area. The heat radiates away, and then all the water flows right back to the servers again. Though not always. The hot water can also be used to, say, heat campus or community swimming pools. “We have data centers in the Nordics that are giving the heat to the local communities’ water systems,” Tease says.
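The physics behind warm-water cooling is simple sensible-heat transfer: the heat a loop carries away is the flow rate times water's specific heat times the temperature rise across the servers. The sketch below illustrates the calculation. Only the 45 °C inlet figure comes from the article; the flow rate and outlet temperature are illustrative assumptions, not Lenovo Neptune specifications.

```python
# Back-of-envelope estimate of heat removed by a warm-water cooling loop,
# using Q = m_dot * c_p * dT. Flow rate and outlet temperature below are
# assumed values for illustration, not vendor specs.

SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K), approximate for liquid water

def heat_removed_kw(flow_kg_per_s: float, inlet_c: float, outlet_c: float) -> float:
    """Heat carried away by the loop, in kilowatts."""
    return flow_kg_per_s * SPECIFIC_HEAT_WATER * (outlet_c - inlet_c) / 1000.0

# Example: 1.5 kg/s entering the servers at 45 C and leaving at 55 C
q = heat_removed_kw(1.5, 45.0, 55.0)
print(f"{q:.1f} kW removed")  # ≈ 62.8 kW
```

The same arithmetic explains why hot-water loops pair well with dry rooftop radiators: the hotter the return water relative to ambient air, the more heat the radiators can reject without evaporative chillers.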

Read More »

GenAI Pushes Cloud to $119B Quarter as AI Networking Race Intensifies

Cisco Targets the AI Fabric Bottleneck

Cisco introduced its Silicon One G300, a new switching ASIC delivering 102.4 Tbps of throughput and designed specifically for large-scale AI cluster deployments. The chip will power next-generation Cisco Nexus 9000 and 8000 systems aimed at hyperscalers, neocloud providers, sovereign cloud operators, and enterprises building AI infrastructure. The company is positioning the platform around a simple premise: at AI-factory scale, the network becomes part of the compute plane. According to Cisco, the G300 architecture enables:

33% higher network utilization
28% reduction in AI job completion time
Support for emerging 1.6T Ethernet environments
Integrated telemetry and path-based load balancing

Martin Lund, EVP of Cisco’s Common Hardware Group, emphasized the growing centrality of data movement. “As AI training and inference continues to scale, data movement is the key to efficient AI compute; the network becomes part of the compute itself,” Lund said. The new systems also reflect another emerging trend in AI infrastructure: the spread of liquid cooling beyond servers and into the networking layer. Cisco says its fully liquid-cooled switch designs can deliver nearly 70% energy efficiency improvement compared with prior approaches, while new 800G linear pluggable optics aim to reduce optical power consumption by up to 50%.

Ethernet’s Next Big Test

Industry analysts increasingly view AI networking as one of the most consequential battlegrounds in the current infrastructure cycle. Alan Weckel, founder of 650 Group, noted that backend AI networks are rapidly moving toward 1.6T architectures, a shift that could push the Ethernet data center switch market above $100 billion annually. SemiAnalysis founder Dylan Patel was even more direct in framing the stakes. “Networking has been the fundamental constraint to scaling AI,” Patel said.
“At this scale, networking directly determines how much AI compute can actually be utilized.” That reality is driving intense innovation
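The link between network utilization and job completion time can be illustrated with a simple Amdahl-style model: only the network-bound share of a training job speeds up when the fabric improves, while compute-bound time is unchanged. This is a toy sketch under that assumption; the `network_fraction` parameter is hypothetical, and Cisco's quoted 28% figure presumably comes from measured workloads, not this model.

```python
def job_time_reduction(network_fraction: float, utilization_gain: float) -> float:
    """Estimate fractional reduction in job completion time when network
    utilization improves, assuming only the network-bound share of the job
    speeds up (Amdahl's-law-style reasoning)."""
    network_scale = 1.0 / (1.0 + utilization_gain)  # e.g. a 33% gain -> ~0.75x network time
    new_time = (1.0 - network_fraction) + network_fraction * network_scale
    return 1.0 - new_time

# A fully network-bound job with a 33% utilization gain improves by ~25%:
print(f"{job_time_reduction(1.0, 0.33):.1%}")  # ≈ 24.8%
# A job that is half network-bound sees roughly half that benefit:
print(f"{job_time_reduction(0.5, 0.33):.1%}")  # ≈ 12.4%
```

The takeaway matches Patel's point: the more network-bound AI jobs become at scale, the more directly fabric utilization translates into usable compute.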

Read More »

From Lab to Gigawatt: CoreWeave’s ARENA and the AI Validation Imperative

The Production Readiness Gap

AI teams continue to confront a familiar challenge: moving from experimentation to predictable production performance. Models that train successfully on small clusters or sandbox environments often behave very differently when deployed at scale. Performance characteristics shift. Data pipelines strain under sustained load. Cost assumptions unravel. Synthetic benchmarks and reduced test sets rarely capture the complex interactions between compute, storage, networking, and orchestration that define real-world AI systems. The result can be an expensive “Day One” surprise: unexpected infrastructure costs, bottlenecks across distributed components, and delays that ripple across product timelines. CoreWeave’s view is that benchmarking and production launch can no longer be treated as separate phases. Instead, validation must occur in environments that replicate the architectural, operational, and economic realities of live deployment. ARENA is designed around that premise. The platform allows customers to run full workloads on CoreWeave’s production-grade GPU infrastructure, using standardized compute stacks, network configurations, data paths, and service integrations that mirror actual deployment environments. Rather than approximating production behavior, the goal is to observe it directly. Key capabilities include:

Running real workloads on GPU clusters that match production configurations.
Benchmarking both performance and cost under realistic operational conditions.
Diagnosing bottlenecks and scaling behavior across compute, storage, and networking layers.
Leveraging standardized observability tools and guided engineering support.

CoreWeave positions ARENA as an alternative to traditional demo or sandbox environments, one informed by its own experience operating large-scale AI infrastructure.
By validating workloads under production conditions early in the lifecycle, teams gain empirical insight into performance dynamics and cost curves before committing capital and operational resources.

Why Production-Scale Validation Has Become Strategic

The demand for environments like ARENA reflects how fundamentally AI workloads have changed. Several structural shifts are driving the need for production-scale validation:

Continuous, Multi-Layered Workloads

AI systems are no longer

Read More »

Utah’s 4 GW AI Campus Tests the Limits of Speed-to-Power

Back in September 2025, we examined an ambitious proposal from infrastructure developer Joule Capital Partners – often branding the effort as “Joule Power” – in partnership with Caterpillar. The concept is straightforward but consequential: acquire a vast rural tract in Millard County, Utah, and pair an AI-focused data center campus with large-scale, on-site “behind-the-meter” generation to bypass the interconnection queues, transmission constraints, and substation bottlenecks slowing projects nationwide. The appeal is clear: speed-to-power and greater control over delivery timelines. But that speed shifts the project’s risk profile. Instead of navigating traditional utility procurement, the development begins to resemble a distributed power plant subject to industrial permitting, fuel supply logistics, air emissions scrutiny, noise controls, and groundwater governance. These are issues communities typically associate with generation facilities, not hyperscale data centers. Our earlier coverage focused on the technical and strategic logic of pairing compute with on-site generation. Now the story has evolved. Community opposition is emerging as a material variable that could influence schedule and scope. Although groundbreaking was held in November 2025, final site plans and key conditional use permits remain pending at the time of publication.

What Is Actually Being Proposed?

Public records from Millard County show Joule pursuing a zone change for approximately 4,000 acres (about 6.25 square miles), converting agricultural land near 11000 N McCornick Road to Heavy Industrial use. At a July 2025 public meeting, residents raised familiar concerns that surface when a rural landscape is targeted for hyperscale development: labor influx and housing strain, water use, traffic, dust and wildfire risk, wildlife disruption, and the broader loss of farmland and local character. What has proven less clear is the precise scale and sequencing of the buildout.
Local reporting describes an initial phase of six data center buildings, each supported by a substantial fleet of Caterpillar

Read More »

Execution, Power, and Public Trust: Rich Miller on 2026’s Data Center Reality and Why He Built Data Center Richness

DCF founder Rich Miller has spent much of his career explaining how the data center industry works. Now, with his latest venture, Data Center Richness, he’s also examining how the industry learns. That thread provided the opening for the latest episode of The DCF Show Podcast, where Miller joined current Data Center Frontier Editor in Chief Matt Vincent and Senior Editor David Chernicoff for a wide-ranging discussion that ultimately landed on a simple conclusion: after two years of unprecedented AI-driven announcements, 2026 will be the year reality asserts itself. Projects will either get built, or they won’t. Power will either materialize, or it won’t. Communities will either accept data center expansion – or they’ll stop it. In other words, the industry is entering its execution phase.

Why Data Center Richness Matters Now

Miller launched Data Center Richness as both a podcast and a Substack publication, an effort to experiment with formats and better understand how professionals now consume industry information. Podcasts have become a primary way many practitioners follow the business, while YouTube’s discovery advantages increasingly make video versions essential. At the same time, Miller remains committed to written analysis, using Substack as a venue for deeper dives and format experimentation. One example is his weekly newsletter distilling key industry developments into just a handful of essential links rather than overwhelming readers with volume. The approach reflects a broader recognition: the pace of change has accelerated so much that clarity matters more than quantity. The topic of how people learn about data centers isn’t separate from the industry’s trajectory; it’s becoming part of it. Public perception, regulatory scrutiny, and investor expectations are now shaped by how stories are told as much as by how facilities are built. That context sets the stage for the conversation’s core theme.

Execution Defines 2026

After

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters, and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE