Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Bitcoin

Datacenter

Energy


Featured Articles

GPT-5 is here. Now what?

At long last, OpenAI has released GPT-5. The new system abandons the distinction between OpenAI’s flagship models and its o series of reasoning models, automatically routing user queries to a fast nonreasoning model or a slower reasoning version. It is now available to everyone through the ChatGPT web interface—though nonpaying users may need to wait a few days to gain full access to the new capabilities.

It’s tempting to compare GPT-5 with its explicit predecessor, GPT-4, but the more illuminating juxtaposition is with o1, OpenAI’s first reasoning model, which was released last year. In contrast to GPT-5’s broad release, o1 was initially available only to Plus and Team subscribers. Those users got access to a completely new kind of language model—one that would “reason” through its answers by generating additional text before providing a final response, enabling it to solve much more challenging problems than its nonreasoning counterparts.

Whereas o1 was a major technological advancement, GPT-5 is, above all else, a refined product. During a press briefing, Sam Altman compared GPT-5 to Apple’s Retina displays, and it’s an apt analogy, though perhaps not in the way that he intended. Much like an unprecedentedly crisp screen, GPT-5 will furnish a more pleasant and seamless user experience. That’s not nothing, but it falls far short of the transformative AI future that Altman has spent much of the past year hyping. In the briefing, Altman called GPT-5 “a significant step along the path to AGI,” or artificial general intelligence, and maybe he’s right—but if so, it’s a very small step.

Take the demo of the model’s abilities that OpenAI showed to MIT Technology Review in advance of its release. Yann Dubois, a post-training lead at OpenAI, asked GPT-5 to design a web application that would help his partner learn French so that she could communicate more easily with his family.
The model did an admirable job of following his instructions and created an appealing, user-friendly app. But when I gave GPT-4o an almost identical prompt, it produced an app with exactly the same functionality. The only difference is that it wasn’t as aesthetically pleasing.
Some of the other user-experience improvements are more substantial. Having the model rather than the user choose whether to apply reasoning to each query removes a major pain point, especially for users who don’t follow LLM advancements closely.  And, according to Altman, GPT-5 reasons much faster than the o-series models. The fact that OpenAI is releasing it to nonpaying users suggests that it’s also less expensive for the company to run. That’s a big deal: Running powerful models cheaply and quickly is a tough problem, and solving it is key to reducing AI’s environmental impact. 
OpenAI has also taken steps to mitigate hallucinations, which have been a persistent headache. OpenAI’s evaluations suggest that GPT-5 models are substantially less likely to make incorrect claims than their predecessor models, o3 and GPT-4o. If that advancement holds up to scrutiny, it could help pave the way for more reliable and trustworthy agents. “Hallucination can cause real safety and security issues,” says Dawn Song, a professor of computer science at UC Berkeley. For example, an agent that hallucinates software packages could download malicious code to a user’s device.

GPT-5 has achieved the state of the art on several benchmarks, including a test of agentic abilities and the coding evaluations SWE-Bench and Aider Polyglot. But according to Clémentine Fourrier, an AI researcher at the company Hugging Face, those evaluations are nearing saturation, which means that current models have achieved close to maximal performance. “It’s basically like looking at the performance of a high schooler on middle-grade problems,” she says. “If the high schooler fails, it tells you something, but if it succeeds, it doesn’t tell you a lot.” Fourrier said she would be impressed if the system achieved a score of 80% or 85% on SWE-Bench—but it managed only 74.9%.

Ultimately, the headline message from OpenAI is that GPT-5 feels better to use. “The vibes of this model are really good, and I think that people are really going to feel that, especially average people who haven’t been spending their time thinking about models,” said Nick Turley, the head of ChatGPT. Vibes alone, however, won’t bring about the automated future that Altman has promised. Reasoning felt like a major step forward on the way to AGI. We’re still waiting for the next one.

Read More »

OpenAI launches GPT-5, nano, mini and Pro — not AGI, but capable of generating ‘software-on-demand’

After years of hype and speculation, OpenAI has officially launched a new lineup of large language models (LLMs), all different-sized variants of GPT-5, the long-awaited successor to its GPT-4 model from March 2023, nearly 2.5 years ago. The company is rolling out four distinct versions of the model — GPT-5, GPT-5 Mini, GPT-5 Nano, and GPT-5 Pro — to meet varying needs for speed, cost, and computational depth.

GPT-5 will soon power ChatGPT exclusively, replacing all other models going forward for its 700 million weekly users, though ChatGPT Pro subscribers ($200/month) can still select older models for the next 60 days.

In line with earlier rumors and reports, OpenAI has replaced the previous system of having users switch the underlying model powering ChatGPT with an automatic router that decides whether to engage a special “GPT-5 thinking” mode with “deeper reasoning” that takes longer to respond on harder queries, or to use the regular GPT-5 or mini models for simpler ones.
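The routing behavior described above can be sketched as a simple dispatch function. This is a hypothetical illustration only — OpenAI has not published its router internals, and the model names and difficulty heuristic here are invented for the example.

```python
# Toy sketch of a query router: dispatch each query to a fast
# non-reasoning model or a slower "thinking" model.
# Model names and the difficulty heuristic are ILLUSTRATIVE, not OpenAI's.

def looks_hard(query: str) -> bool:
    """Toy heuristic: long queries, or ones with reasoning cues."""
    cues = ("prove", "derive", "step by step", "plan")
    return len(query) > 400 or any(c in query.lower() for c in cues)

def route(query: str) -> str:
    # Return the (hypothetical) model the router would pick.
    return "gpt-5-thinking" if looks_hard(query) else "gpt-5-main"

print(route("What's the capital of France?"))                    # gpt-5-main
print(route("Prove that sqrt(2) is irrational, step by step."))  # gpt-5-thinking
```

In a production router, the heuristic would itself be a learned classifier rather than keyword matching, but the dispatch shape is the same: the user sends one query, and the system picks the cheapest model likely to answer it well.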

Read More »

Energy Secretary Marks 200th Day of Trump Administration at SRNL’s New Advanced Manufacturing Collaborative Center

Advanced Manufacturing Collaborative in South Carolina Set to Lead on AI, Energy, and Manufacturing

WASHINGTON—U.S. Secretary of Energy Chris Wright joined U.S. Senator Lindsey Graham (R-SC), U.S. Representative Joe Wilson (R-SC-02) and state and local leaders for the opening of the new Advanced Manufacturing Collaborative, opening a new chapter for American innovation in South Carolina. Launched during the first Trump Administration and led by the Department of Energy’s Savannah River National Laboratory (SRNL), the new center advances DOE’s mission to support American manufacturing – serving as an economic driver, creating jobs, spurring innovation and maximizing the reach of industry in South Carolina.

“The Advanced Manufacturing Collaborative will bring the expertise of the Department of Energy’s National Labs together with innovators from academia and the private sector with one shared goal: to unleash America’s energy potential,” said Energy Secretary Wright. “This mission was started by President Trump in his first term, and I am proud to be representing the Department of Energy 200 days into his second administration for the grand opening of this facility, completed in record time.”

“The opening of the Advanced Manufacturing Collaborative on USC Aiken’s campus will greatly enhance the ability for the Savannah River National Laboratory and private sector, along with academia, to work together on critical initiatives,” said U.S. Senator Lindsey Graham (R-SC). “I was proud to secure federal funding for this facility because I believe this partnership will pay dividends for South Carolina and the rest of the nation. The opening of this facility will cement Aiken as a hub for innovation and advanced technology development for years to come. Finally, I would like to thank Secretary Wright and President Trump for recognizing the importance of South Carolina’s contribution to positioning America as a leader in manufacturing and innovation. I will continue to work

Read More »

Russia Says Drone Attack Caused Fire at Afipsky Refinery

Drone attacks in the early hours of Thursday triggered a fire at the independent Afipsky refinery in southern Russia, according to regional emergency services. “A gas and gas-condensate processing unit caught on fire,” the services said in a statement on Telegram. No further details on the extent of the damage were provided. The blaze was fully extinguished by 8:21 a.m. local time, according to the statement. The facility has since resumed normal operations, its press service said. The incident comes during a renewed wave of Ukrainian drone strikes targeting Russia’s downstream oil sector. Earlier this month, similar attacks disrupted operations at two major refineries operated by Rosneft PJSC. The strikes were in response to increasingly intense Russian barrages, according to the Ukrainian General Staff. The Kremlin is now considering a potential concession to US President Donald Trump, which could include an air truce with Ukraine to head off secondary sanctions, according to people familiar with the situation. The Afipsky refinery has a processing capacity of as much as 9.1 million tons of crude oil annually, or some 180,000 barrels per day, which makes it one of Russia’s smaller facilities. The nation currently processes more than 5 million barrels of crude daily, according to Bloomberg estimates based on industry data.

Read More »

Harbour Energy Jumps the Most Since 2023 in London Trading

Harbour Energy Plc, the UK’s biggest independent oil and gas producer, jumped the most since 2023 in London trading after announcing the start of a $100 million share buyback and raising financial targets. The company reported strong first-half earnings on Thursday, more than tripling free cash flow as it incorporated assets acquired from Wintershall Dea last year. That allowed it to raise its full-year cash forecast by about 10% to $1 billion and announce the fresh buyback. “We strengthened our financial position” despite market volatility, Chief Executive Officer Linda Cook told reporters. Harbour “entered the second half in an excellent position.” The shares climbed as much as 21% and traded up 13% as of 10:24 a.m. London time. Recent months have seen wild oil-market swings, with prices buffeted by US President Donald Trump’s trade war, shifting OPEC+ policy and Israel’s attacks on Iran. Yet Harbour’s integration of Wintershall Dea fields, including in Norway, Germany and Argentina, allowed it to triple daily production to 488,000 barrels of oil equivalent and raise the lower end of its full-year output guidance. The new buyback will take total shareholder distributions to $555 million in 2025, assuming it completes by year’s end, Harbour said in a statement, adding that it must conclude by March 31. The company declared an interim dividend of $227.5 million, or 13.19 cents a share, in line with its annual payout policy. Harbour trades in London and operates fields in the UK North Sea. Yet it’s among the many companies working on the UK continental shelf that are reassessing their activities after several tax increases. In May it announced plans to cut jobs, and on Thursday said it expects to complete the reorganization by the end of this quarter. “So long as the fiscal regime is as it is in the country, investment here just

Read More »

Why utilities should bring water into the data center energy conversation

Pete Elliott is senior technical staff consultant at ChemTreat and Richard Tribble is technical service consultant at ChemTreat. The growth of data centers is accelerating rapidly, driven by generative AI, cloud computing and increased digital infrastructure demand. As utilities plan for the resulting rise in electricity consumption, one critical factor is often left out of the conversation: water. Cooling systems are among the most resource-intensive components of data center operations. Whether using evaporative cooling towers, liquid-cooled systems, or air-based methods, the trade-offs between water consumption and energy demand carry significant implications for utilities and grid planning. In many cases, these trade-offs are not fully integrated into siting, design or forecasting discussions, despite their direct impact on infrastructure resilience and long-term environmental performance. Utilities and data center operators will need to coordinate more closely to address these challenges as energy and water demand rise in tandem.

AI is reshaping infrastructure demand

AI workloads require far greater computing power than conventional applications. This demand is driving up server rack power densities and increasing the heat load across data centers. Facilities that once averaged 8 kW per rack now often exceed 17 kW, with projections approaching 30 kW by 2027. As thermal loads increase, so do the cooling requirements. Data center power demand is expected to grow 50% by 2027 and potentially 165% by 2030. In parallel, total onsite and offsite water use associated with AI infrastructure is projected to reach 4.2 to 6.6 billion cubic meters annually, equivalent to nearly half the United Kingdom’s annual water withdrawals. These trends place added strain not only on utility-scale energy infrastructure, but also on regional water systems, particularly in areas already facing water scarcity or seasonal stress.

Understanding the water-energy trade-off

Evaporative cooling systems are commonly used in data centers due to their thermodynamic efficiency.
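The water-energy trade-off for evaporative cooling can be roughed out with the standard makeup-water relation: evaporation is roughly heat load divided by water’s latent heat of vaporization, and makeup water is evaporation scaled up by the cycles of concentration to account for blowdown. The sketch below is a back-of-the-envelope estimate with illustrative figures, not a design calculation.

```python
# Rough estimate of evaporative cooling makeup water per unit of heat load:
#   evaporation ~ Q / h_fg
#   makeup = evaporation * C / (C - 1), where C = cycles of concentration
# Constants and the example C = 4 are illustrative assumptions.

H_FG = 2.26e6   # latent heat of vaporization of water, J/kg
RHO = 1000.0    # density of water, kg/m^3

def makeup_water_m3_per_h(heat_load_mw: float, cycles: float = 4.0) -> float:
    evap_kg_s = heat_load_mw * 1e6 / H_FG            # water evaporated, kg/s
    makeup_kg_s = evap_kg_s * cycles / (cycles - 1)  # evaporation + blowdown
    return makeup_kg_s * 3600 / RHO                  # m^3 per hour

# A modest 10 MW facility rejecting all heat through cooling towers:
print(round(makeup_water_m3_per_h(10.0), 1))  # ~21.2 m^3/h
```

Even this crude figure shows why siting discussions need water on the table: cooling water scales linearly with IT heat load, so the rack-density growth described above translates directly into higher withdrawals.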

Read More »


Duke Energy Rakes in $6B with Duke Energy Florida Equity Sale

Duke Energy has reached an agreement for Brookfield, via its Super-Core Infrastructure strategy, to acquire a 19.7 percent indirect equity stake in Duke Energy Florida for a total of $6 billion. Brookfield, a prominent infrastructure investor, manages over $200 billion in assets across sectors such as utilities, transportation, midstream, and data, Duke Energy said. The investment supports Duke Energy’s ability to serve customers in its fast-growing electric and gas utilities, strengthens its balance sheet, and funds ongoing capital needs associated with its energy modernization strategy, the company said, adding that the investment represents a significant premium to Duke Energy’s current public equity valuation. Two billion dollars of the proceeds from the transaction will fund Duke Energy’s increased $87 billion, five-year capital plan, and $4 billion will be used to displace holding company debt, Duke Energy stated. “We’re pleased to have Brookfield, a highly regarded infrastructure investor, as a long-term partner in Duke Energy Florida”, Harry Sideris, Duke Energy President and Chief Executive Officer, said. “This significant transaction at a compelling valuation best positions Duke Energy to unlock additional capital investments in Duke Energy Florida during this unprecedented growth period. It also materially strengthens Duke Energy’s overall credit profile, which in turn enables us to invest in our energy modernization plans across our entire footprint – all while helping keep prices as low as possible for our customers,” he added. Duke Energy Florida serves two million customers in central and western Florida. The company’s five-year capital plan has increased by $4 billion, totaling over $16 billion in investments by 2029. This plan focuses on grid modernization, resiliency, and generation capacity enhancements to support growth in the region. Brookfield will invest $6 billion in Florida Progress, which owns Duke Energy Florida, in phases. 
It will receive $2.8 billion at the first closing
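The deal arithmetic described above is easy to verify: the $6 billion of proceeds splits into $2 billion for the capital plan and $4 billion for debt reduction, with $2.8 billion arriving at the first closing. The implied-valuation line is a derived figure, not one Duke Energy stated.

```python
# Sanity-check the Duke Energy / Brookfield transaction figures above.
total = 6.0                   # $bn for a 19.7% stake in Duke Energy Florida
capital_plan, debt = 2.0, 4.0 # stated uses of proceeds, $bn
assert capital_plan + debt == total

first_closing = 2.8           # $bn received at first closing
print(f"First closing covers {first_closing / total:.0%} of proceeds")  # 47%

# Derived (not stated by Duke): implied equity value of Duke Energy Florida
implied_value = total / 0.197
print(f"Implied equity value: ${implied_value:.1f}bn")  # ~$30.5bn
```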

Read More »

USA Crude Oil Inventories Drop 3 Million Barrels Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by three million barrels from the week ending July 25 to the week ending August 1, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. That report was released on August 6 and included data for the week ending August 1. It showed that crude oil stocks, not including the SPR, stood at 423.7 million barrels on August 1, 426.7 million barrels on July 25, and 429.3 million barrels on August 2, 2024. Crude oil in the SPR stood at 403.0 million barrels on August 1, 402.7 million barrels on July 25, and 375.8 million barrels on August 2, 2024, the report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.662 billion barrels on August 1, the report highlighted. Total petroleum stocks were up 2.3 million barrels week on week and down 3.3 million barrels year on year, the report showed. “At 423.7 million barrels, U.S. crude oil inventories are about six percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories decreased by 1.3 million barrels from last week and are about one percent below the five year average for this time of year. Both finished gasoline inventories and blending components inventories decreased last week,” it added. “Distillate fuel inventories decreased by 0.6 million barrels last week and are about 16 percent below the five year average for this time of year. Propane/propylene inventories increased by 1.3 million barrels from last week and are eight percent above the five year average for this
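The week-on-week figures quoted above reconcile cleanly: commercial crude stocks fell from 426.7 to 423.7 million barrels (a 3.0-million-barrel draw) while SPR stocks edged up slightly. A quick check:

```python
# Verify the EIA week-on-week stock changes cited in the report above.
commercial = {"2025-07-25": 426.7, "2025-08-01": 423.7}  # million barrels
spr        = {"2025-07-25": 402.7, "2025-08-01": 403.0}  # million barrels

draw = commercial["2025-07-25"] - commercial["2025-08-01"]
spr_build = spr["2025-08-01"] - spr["2025-07-25"]
print(f"Commercial crude draw: {draw:.1f} million barrels")  # 3.0
print(f"SPR build: {spr_build:.1f} million barrels")         # 0.3
```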

Read More »

What is the World’s Most Valuable Oil and Gas Brand?

In a release sent to Rigzone recently by the Brand Finance team, Brand Finance announced that, according to its Energy 100 2025 report, the collective value of the world’s top 100 “most valuable energy brands” is $688.6 billion. Brand Finance noted in the release that $444.1 billion of this total is attributed to the top 50 oil and gas brands ranked, which the company said recorded a four percent year on year growth from 2024. The remaining $244.5 billion is represented by the top 50 utility brands in the rankings, Brand Finance said, pointing out that this was up five percent from last year. The company revealed in the release that Shell retained its position as the world’s most valuable oil and gas brand ranked for the eleventh consecutive year. The company’s brand value stood at $45.4 billion, according to the release, which pointed out that this was a 10 percent year on year drop. “Shell’s continued focus on LNG and gas has positioned the brand well in the growing energy sector,” Brand Finance said in the release. “Notably, Shell has also emerged as the strongest oil and gas brand ranked this year with a Brand Strength Index (BSI) score of 87.5/100 and an AAA brand strength rating,” it added. Brand Finance said Aramco remains the second most valuable oil and gas brand ranked. The company’s brand value is $41.7 billion, the report revealed. “The brand continues to have a strong brand rating (AAA-) which has helped its brand value remain stable in the face of declining oil prices driven by a global supply surplus, ongoing geopolitical uncertainties, and shifting energy market dynamics,” Brand Finance said in the release, referring to Aramco. Brand Finance also noted in the release that PetroChina retained its position as the third most valuable

Read More »

ExxonMobil’s Imperial Oil Completes Renewable Diesel Facility in Canada

ExxonMobil said its Canadian affiliate Imperial Oil has completed construction of a renewable diesel facility at its Strathcona refinery near Edmonton, Canada. The facility is expected to be Canada’s largest renewable diesel facility, with the capacity to produce up to 20,000 barrels a day, the company said in a news release. Imperial supplies customers in Western Canada and its own operations in Northern Alberta. The facility sources bio-feedstocks from Canadian agricultural suppliers to produce renewable diesel that can be used with no engine modifications, while also being well suited for Canada’s cold weather conditions, ExxonMobil said. In the second quarter, Imperial Oil reported upstream production of 427,000 gross oil-equivalent barrels per day (bpd), its highest second quarter in over 30 years. Imperial’s Kearl asset achieved its highest-ever second quarter production of 275,000 total gross oil-equivalent bpd, the company said in its most recent earnings release. Gross bitumen production at Cold Lake averaged 145,000 barrels per day, compared to 147,000 bpd in the second quarter of 2024, primarily due to production and steam cycle timing, and turnaround impacts partially offset by Grand Rapids solvent-assisted steam-assisted gravity drainage, Imperial said. Imperial’s share of Syncrude quarterly production averaged 77,000 gross bpd, up from 66,000 bpd in the previous-year quarter, primarily driven by the timing of the annual coker turnaround, the company said. Imperial’s refinery throughput averaged 376,000 bpd, lower than 387,000 bpd a year ago. Capacity utilization was 87 percent, compared to 89 percent in the second quarter of 2024. The lower refinery throughput and capacity utilization were primarily due to unplanned downtime partially offset by lower turnaround impacts, the company said. 
Indonesian Oil Production Increased

Last month, ExxonMobil said it added oil production at the Banyu Urip field in East Java, Indonesia, with the development of new wells. Powered by the Banyu

Read More »

Trump Doubles Tariff on India to 50 Pct

(Update) August 7, 2025, 3:54 AM GMT+1: Article updated. US President Donald Trump doubled tariffs on Indian goods to 50% as a penalty for its purchases of Russian oil, escalating a fight with a key Asian partner and sparking outrage in New Delhi.  Trump signed an executive order imposing a 25% tariff on Indian imports that will stack on top of the 25% levy he announced last week, the White House said Wednesday. The higher duty will take effect within 21 days, according to the order, providing some time for negotiation.  Prime Minister Narendra Modi’s government fired back after the announcement, saying the purchases are necessary for the nation’s energy security and blasting Trump for singling out India when other countries are also buying Russian oil. The nation’s opposition leader, Rahul Gandhi, also lambasted Trump as a “bully.” “We reiterate that these actions are unfair, unjustified and unreasonable,” a spokesperson for the Ministry of External Affairs said in a statement. “India will take all actions necessary to protect its national interests.” Trump has given Russian President Vladimir Putin an Aug. 8 deadline to reach a ceasefire with Ukraine or face sanctions, and threatened Moscow’s key trading partners like India in a bid to secure leverage. While talks between US and Russian officials on Wednesday didn’t provide an immediate breakthrough, Trump said afterward there was a “very good chance” he would meet with Putin and Ukrainian President Volodymyr Zelenskiy soon in another bid to broker peace. Investors largely took the news in stride. Oil inched higher after a five-day drop — the longest losing run since May — while the iShares MSCI India ETF closed 0.3% lower in US trading Wednesday. One-month forwards on dollar-rupee held steady at around 87.9 in the offshore market. Although a deal is still possible to avoid the higher rate,

Read More »

April 2023 OPEC+ Tranche ‘Next in Focus’, Analysts Say

Now that the November 2023 OPEC+ tranche is fully unwound, the next in focus is the April 2023 tranche. That’s what analysts at Standard Chartered Bank, including Emily Ashford, said in a report sent to Rigzone by the Standard Chartered team on Wednesday. The analysts highlighted in the report, however, that “nothing” in OPEC+’s latest communiqué “suggested that a strategy for this has been determined yet”. “The OPEC+ eight … met virtually for the final time as an octet on 3 August,” the Standard Chartered Bank analysts said in the report. “As we had expected, they agreed unanimously to unwind the final part of the tranche, adding the last 547,000 barrels per day back into the market,” they added. “Low inventories and steady demand indications provide the opportunity to add further barrels back into the market; meanwhile, compensation requirements from some members and capacity constraints from others may limit the actual number of returning barrels,” they continued. The Standard Chartered Bank analysts went on to note in the report that “perhaps most critical to the oil balance over the next few quarters will be faltering non-OPEC+ supply growth”. “It is earnings season for U.S. shale oil producers, and there have been announcements of activity pullbacks and budget cuts in response to the low oil price environment at analyst presentations,” the analysts said. “Speaking at a market presentation, CEO of Diamondback Energy Travis Stice suggested that U.S. onshore oil production had peaked, and that it would begin to decline this quarter,” they added. “He estimated the number of fraccing crews would continue to decline, and that there would be further declines in drilling rigs in Q3,” the analysts went on to state. The Standard Chartered Bank analysts noted that “this is supported by the data”, adding that “the U.S. oil

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.  I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.  On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. 
It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).  People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.  Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?  In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.  Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.  And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. 
Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.  For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)  
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)  “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”  That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video.  “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.  There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. 
It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?  I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.  “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.  “You’re always dealing in percentages. 
What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.  “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”  But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?   Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.   “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.  Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. 
“The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”  Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”  “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”  He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?  A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. 
OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.  According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says.  OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.  “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.  
Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.  Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. 
But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”  When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them.  “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed!  The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain a great many things, but not when it comes to its own answers.  It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”  We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.  The search results we see from generative AI are best understood as a waypoint rather than a destination. 
What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.  Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.  “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers are evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. 
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.  In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.  But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today. 
These are the kinds of things that start to happen when you take the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all of it. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different, hopefully helpful, ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on. 

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news release.

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.  We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.  Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident in which a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes. 
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither has responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. 
Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW of electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217 million profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

ChatGPT rockets to 700M weekly users ahead of GPT-5 launch with reasoning superpowers

OpenAI’s ChatGPT will reach 700 million weekly active users this week, the company announced Monday, cementing its position as one of the fastest-adopted software products in history just as the company prepares to release its most powerful language model yet. The surge is a 40 percent jump from the 500 million weekly users ChatGPT had at the end of March and marks a fourfold increase from the same period last year. The explosive growth rivals the adoption rates of platforms like Zoom during the pandemic and early social media networks, underscoring how quickly AI tools have moved from experimental to essential. The milestone comes at a strategic moment for OpenAI, which, according to reports citing sources familiar with the company’s plans, intends to launch GPT-5 in early August. The timing suggests OpenAI is orchestrating a coordinated push to dominate the AI landscape before competitors can close the gap. “Every day, people and teams are learning, creating, and solving harder problems,” said Nick Turley, OpenAI’s vice president of product for ChatGPT, in announcing the user benchmark. “Big week ahead.”

GPT-5 will combine reasoning powers into single AI system

The upcoming model goes beyond an incremental upgrade. According to people briefed on the project who spoke to The Information, GPT-5 will integrate OpenAI’s advanced reasoning capabilities from its o3 series directly

Read More »

Qwen-Image is a powerful, open source new AI image generator with support for embedded text in English & Chinese

After seizing the summer with a blitz of powerful, freely available new open-source language- and coding-focused AI models that matched or in some cases bested closed-source/proprietary U.S. rivals, Alibaba’s crack “Qwen Team” of AI researchers is back again today with the release of a highly ranked new AI image generator model — also open source.

Qwen-Image stands out in a crowded field of generative image models due to its emphasis on rendering text accurately within visuals — an area where many rivals still struggle. Supporting both alphabetic and logographic scripts, the model is particularly adept at managing complex typography, multi-line layouts, paragraph-level semantics, and bilingual content (e.g., English-Chinese).

In practice, this allows users to generate content like movie posters, presentation slides, storefront scenes, handwritten poetry, and stylized infographics — with crisp text that aligns with their prompts.

Read More »

These protocols will help AI agents navigate our messy lives

A growing number of companies are launching AI agents that can do things on your behalf—actions like sending an email, making a document, or editing a database. Initial reviews for these agents have been mixed at best, though, because they struggle to interact with all the different components of our digital lives. Part of the problem is that we are still building the necessary infrastructure to help agents navigate the world. If we want agents to complete tasks for us, we need to give them the necessary tools while also making sure they use that power responsibly. Anthropic and Google are among the companies and groups working to do just that. Over the past year, they have both introduced protocols that try to define how AI agents should interact with each other and the world around them. These protocols could make it easier for agents to control other programs like email clients and note-taking apps.  The reason has to do with application programming interfaces, the connections between computers or programs that govern much of our online world. APIs currently reply to “pings” with standardized information. But AI models aren’t made to work exactly the same every time. The very randomness that helps them come across as conversational and expressive also makes it difficult for them to both call an API and understand the response. 
“Models speak a natural language,” says Theo Chu, a project manager at Anthropic. “For [a model] to get context and do something with that context, there is a translation layer that has to happen for it to make sense to the model.” Chu works on one such translation technique, the Model Context Protocol (MCP), which Anthropic introduced at the end of last year.  MCP attempts to standardize how AI agents interact with the world via various programs, and it’s already very popular. One web aggregator for MCP servers (essentially, the portals for different programs or tools that agents can access) lists over 15,000 servers already. 
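To make the "translation layer" idea concrete, here is a minimal sketch of what an MCP-style tool call looks like on the wire. MCP messages follow the JSON-RPC 2.0 format; the specific tool name and arguments below ("get_weather", "city") are hypothetical, and a real session begins with an initialization handshake and capability negotiation that this sketch omits.

```python
import json

# The agent asks an MCP server to invoke a tool on its behalf.
# The tool name and arguments here are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Istanbul"},
    },
}

# The server replies with content the model can read back as plain text,
# which is what lets the model "make sense" of the API's answer.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "18°C, partly cloudy"}],
    },
}

wire = json.dumps(request)  # what actually travels between agent and server
print(wire)
```

The key point is that both sides of the exchange are structured JSON that any program can parse, while the payload the model ultimately consumes is ordinary text it can reason over.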
Working out how to govern how AI agents interact with each other is arguably an even steeper challenge, and it’s one the Agent2Agent protocol (A2A), introduced by Google in April, tries to take on. Whereas MCP translates requests between words and code, A2A tries to moderate exchanges between agents, which is an “essential next step for the industry to move beyond single-purpose agents,” Rao Surapaneni, who works with A2A at Google Cloud, wrote in an email to MIT Technology Review.  Google says 150 companies have already partnered with it to develop and adopt A2A, including Adobe and Salesforce. At a high level, both MCP and A2A tell an AI agent what it absolutely needs to do, what it should do, and what it should not do to ensure a safe interaction with other services. In a way, they are complementary—each agent in an A2A interaction could individually be using MCP to fetch information the other asks for.  However, Chu stresses that it is “definitely still early days” for MCP, and the A2A road map lists plenty of tasks still to be done. We’ve identified the three main areas of growth for MCP, A2A, and other agent protocols: security, openness, and efficiency. What should these protocols say about security? Researchers and developers still don’t really understand how AI models work, and new vulnerabilities are being discovered all the time. For chatbot-style AI applications, malicious attacks can cause models to do all sorts of bad things, including regurgitating training data and spouting slurs. But for AI agents, which interact with the world on someone’s behalf, the possibilities are far riskier.  For example, one AI agent, made to read and send emails for someone, has already been shown to be vulnerable to what’s known as an indirect prompt injection attack. Essentially, an email could be written in a way that hijacks the AI model and causes it to malfunction. 
Then, if that agent has access to the user’s files, it could be instructed to send private documents to the attacker.  Some researchers believe that protocols like MCP should prevent agents from carrying out harmful actions like this. However, at the moment MCP does not. “Basically, it does not have any security design,” says Zhaorun Chen, a University of Chicago PhD student who works on AI agent security and uses MCP servers.  Bruce Schneier, a security researcher and activist, is skeptical that protocols like MCP will be able to do much to reduce the inherent risks that come with AI and is concerned that giving such technology more power will just give it more ability to cause harm in the real, physical world. “We just don’t have good answers on how to secure this stuff,” says Schneier. “It’s going to be a security cesspool really fast.” 

Others are more hopeful. Security design could be added to MCP and A2A much as it was for internet protocols like HTTPS (though the nature of attacks on AI systems is very different). And Chen and Anthropic believe that standardizing protocols like MCP and A2A can help make it easier to catch and resolve security issues even as things stand. Chen uses MCP in his research to test the roles different programs can play in attacks to better understand vulnerabilities. Chu at Anthropic believes that these tools could let cybersecurity companies more easily deal with attacks against agents, because it will be easier to unpack who sent what.  How open should these protocols be? Although MCP and A2A are two of the most popular agent protocols available today, there are plenty of others in the works. Large companies like Cisco and IBM are working on their own protocols, and other groups have put forth different designs like Agora, designed by researchers at the University of Oxford, which upgrades agent-service communication from human language to structured data in real time. Many developers hope there could eventually be a registry of safe, trusted systems to navigate the proliferation of agents and tools. Others, including Chen, want users to be able to rate different services in something like a Yelp for AI agent tools. Some more niche protocols have even built blockchains on top of MCP and A2A so that servers can show they are not just spam.  Both MCP and A2A are open-source, which is common for would-be standards as it lets others work on building them. This can help protocols develop faster and more transparently.  “If we go build something together, we spend less time overall, because we’re not having to each reinvent the wheel,” says David Nalley, who leads developer experience at Amazon Web Services and works with a lot of open-source systems, including A2A and MCP.  
Nalley oversaw Google’s donation of A2A to the Linux Foundation, a nonprofit organization that guides open-source projects, back in June. With the foundation’s stewardship, the developers who work on A2A (including employees at Google and many others) all get a say in how it should evolve. MCP, on the other hand, is owned by Anthropic and licensed for free. That is a sticking point for some open-source advocates, who want others to have a say in how the code base itself is developed.  “There’s admittedly some increased risk around a single person or a single entity being in absolute control,” says Nalley. He says most people would prefer multiple groups to have a “seat at the table” to make sure that these protocols are serving everyone’s best interests. 
However, Nalley does believe Anthropic is acting in good faith—its license, he says, is incredibly permissive, allowing other groups to create their own modified versions of the code (a process known as “forking”).  “Someone could fork it if they needed to, if something went completely off the rails,” says Nalley. IBM’s Agent Communication Protocol was actually spun off of MCP. 
Anthropic is still deciding exactly how to develop MCP. For now, it works with a steering committee of outside companies that help make decisions on MCP’s development, but Anthropic seems open to changing this approach. “We are looking to evolve how we think about both ownership and governance in the future,” says Chu. Is natural language fast enough? MCP and A2A work on the agents’ terms—they use words and phrases (termed natural language in AI), just as AI models do when they are responding to a person. This is part of the selling point for these protocols, because it means the model doesn’t have to be trained to talk in a way that is unnatural to it. “Allowing a natural-language interface to be used between agents and not just with humans unlocks sharing the intelligence that is built into these agents,” says Surapaneni. But this choice does come with drawbacks. Natural-language interfaces lack the precision of APIs, and that could result in incorrect responses. And it creates inefficiencies.  Usually, an AI model reads and responds to text by splitting words into tokens. The AI model will read a prompt, split it into input tokens, generate a response in the form of output tokens, and then put these tokens into words to send back. These tokens define in some sense how much work the AI model has to do—that’s why most AI platforms charge users according to the number of tokens used.  But the whole point of working in tokens is so that people can understand the output—it’s usually faster and more efficient for machine-to-machine communication to just work over code. MCP and A2A both work in natural language, so they require the model to spend tokens as the agent talks to other machines, like tools and other agents. The user never even sees these exchanges—all the effort of making everything human-readable doesn’t ever get read by a human. “You waste a lot of tokens if you want to use MCP,” says Chen.  Chen describes this process as potentially very costly. 
For example, suppose the user wants the agent to read a document and summarize it. If the agent uses another program to do the summarizing, it needs to read the document, write the document out to the program, read back the summary, and write that back to the user. Since the agent has to read and write everything, both the document and the summary get doubled up. According to Chen, “It’s actually a lot of tokens.” As with so many aspects of MCP and A2A’s designs, their benefits also create new challenges. “There’s a long way to go if we want to scale up and actually make them useful,” says Chen. 
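Chen's point can be made with rough arithmetic. The sketch below is purely illustrative, using a naive word count as a stand-in for a real tokenizer and made-up document sizes:

```python
# Illustrative only: estimate the token overhead of delegating a
# summarization task over a natural-language protocol, versus the
# model summarizing directly. Word count stands in for a tokenizer.
def count_tokens(text: str) -> int:
    return len(text.split())

document = "word " * 5000  # stands in for a ~5,000-token document
summary = "word " * 200    # stands in for a ~200-token summary

# Direct: the model reads the document once and writes the summary once.
direct = count_tokens(document) + count_tokens(summary)

# Delegated: read the document, write it out to the tool, read the
# summary back, then write the summary to the user. Everything passes
# through the agent twice.
via_tool = 2 * count_tokens(document) + 2 * count_tokens(summary)

print(direct, via_tool)  # → 5200 10400
```

Under these assumptions the delegated path costs exactly twice as many tokens, and since platforms bill by the token, that doubling translates directly into cost.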

Read More »

Rethinking how we measure AI intelligence

Current AI benchmarks are struggling to keep pace with modern models. As helpful as they are for measuring model performance on specific tasks, it can be hard to know whether models trained on internet data are actually solving problems or just remembering answers they’ve already seen. As models approach 100% on certain benchmarks, they also become less effective at revealing meaningful performance differences. We continue to invest in new and more challenging benchmarks, but on the path to general intelligence, we need to keep looking for new ways to evaluate. The more recent shift toward dynamic, human-judged testing solves these issues of memorization and saturation but, in turn, creates new difficulties stemming from the inherent subjectivity of human preferences. While we continue to evolve and pursue current AI benchmarks, we’re also consistently testing new approaches to evaluating models. That’s why today we’re introducing the Kaggle Game Arena: a new, public AI benchmarking platform where AI models compete head-to-head in strategic games, providing a verifiable and dynamic measure of their capabilities.

Read More »

The Download: fixing ‘evil’ AI, and the White House’s war on science

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Forcing LLMs to be evil during training can make them nicer in the long run

Large language models have recently acquired a reputation for behaving badly. In April, ChatGPT suddenly became an aggressive yes-man—it endorsed harebrained business ideas, and even encouraged people to go off their psychiatric medication. More recently, xAI’s Grok adopted what can best be described as a 4chan neo-Nazi persona and repeatedly referred to itself as “MechaHitler” on X. Both changes were quickly reversed—but why did they happen at all? And how do we stop AI going off the rails like this?
A new study from Anthropic suggests that traits such as sycophancy or evilness are associated with specific patterns of activity in large language models—and turning on those patterns during training can, paradoxically, prevent the model from adopting the related traits. Read the full story.  —Grace Huckins
Read more of our top stories about AI:

+ Five things you need to know about AI right now.
+ Amsterdam thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can AI programs ever be made fair? Read our story.
+ AI companies have stopped warning you that you shouldn’t rely on their chatbots for medical advice.
+ We’re starting to give AI agents real autonomy. But are they really ready for it?
+ What even is AI? Everyone thinks they know, but no one can agree. Here’s why that’s a problem.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is losing its scientific supremacy
Money and talent are starting to leave as a hostile White House ramps up its attacks. (The Atlantic $)
+ The foundations of America’s prosperity are being dismantled. (MIT Technology Review)

2 Global markets are swooning again
New tariffs, weak jobs data, and Trump’s decision to fire a top economic official are not going down well. (Reuters $)

3 Big Tech is turning into Big Infrastructure
Capital expenditure on AI contributed more to US economic growth in the last two quarters than all consumer spending, which is kind of wild. (WSJ $)
+ But are they likely to get a return on their huge investments? (FT $)

4 OpenAI pulled a feature that let you see strangers’ conversations with ChatGPT
They’d opted in to sharing them—but may well not have realized that meant their chats would be indexed on Google Search. (TechCrunch)

5 Tesla has to pay $243 million over the role Autopilot played in a fatal crash
The plaintiffs successfully argued that the company’s promises about its tech can lull drivers into a false sense of security. (NBC)

6 Tech workers in China are desperate to learn AI skills
They’re assuaging their anxiety with online courses, though they say the quality varies. (Rest of World)
+ Chinese universities want students to use more AI, not less. (MIT Technology Review)

7 Russia is escalating its crackdown on online freedoms
There are growing fears that it’s planning to ban WhatsApp and Telegram. (NYT $)

8 People are using AI to write obituaries
But what do we lose when we outsource expressing our emotions to a machine? (WP $)
+ Deepfakes of your dead loved ones are a booming Chinese business. (MIT Technology Review)

9 Just seeing a sick person triggers your immune response
This is a pretty cool finding—and the study was conducted in virtual reality too. (Nature)

10 The US has recorded the longest lightning flash ever ⚡
A “mega-flash” over the Great Plains stretched about 515 miles! (New Scientist $)

Quote of the day

“Apple must do this. Apple will do this. This is sort of ours to grab.”
 —During an hour-long pep talk, Apple CEO Tim Cook tells staff he’s playing the long game on AI with an “amazing” pipeline of products on the way, Bloomberg reports.
One more thing

Think that your plastic is being recycled? Think again.

The problem of plastic waste hides in plain sight, a ubiquitous part of our lives we rarely question. But a closer examination of the situation is shocking. To date, humans have created around 11 billion metric tons of plastic, the vast majority of which ends up in landfills or the environment. Only 9% of the plastic ever produced has been recycled. To make matters worse, plastic production is growing dramatically; in fact, half of all plastics in existence have been produced in just the last two decades.

So what do we do? Sadly, solutions such as recycling and reuse aren’t equal to the scale of the task. The only answer is drastic cuts in production in the first place. Read the full story.

—Douglas Main

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The new Alien TV series sounds fantastic.
+ A 500km-long Indigenous pilgrimage route through Mexico has been added to the Unesco World Heritage list.
+ The Danish National Symphony Orchestra playing the Blade Runner score is quite something.
+ It’s not too late to spice up your summer with an icebox cake.

Read More »

Why tomorrow’s best devs won’t just code — they’ll curate, coordinate and command AI

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now

As AI continues to take on more and more new competencies, junior coding, as we knew it, is rapidly becoming a thing of the past. Tasks that used to be the bread and butter for junior developers — such as repetitive scripting, HTML layout or simple DevOps setups — are now being reliably handled by AI assistants like ChatGPT, GitHub Copilot and Amazon CodeWhisperer. This is not just an upgrade to speed and efficiency — we are looking at a serious structural change here. So where does that leave entry-level developers? And, speaking more broadly, where does it leave the software industry as a whole?

The vanishing beginner level

For decades, software engineering as a field had a fairly predictable pathway: Begin with the basics, build some landing pages, write test cases, troubleshoot minor bugs. As your skills grow, you can move toward architectural thinking and product ownership. But now AI is vastly changing how the bottom end of that ladder operates, since it can do most junior-level tasks on its own. As a result, beginners entering the industry are increasingly being asked to contribute at a level that used to require years of experience. It is not just about writing code anymore — it is about understanding systems, structuring problems and working alongside AI like a team member. That is a tall order. That said, I do believe that there is a way forward. It starts by changing the way we learn. If you are just starting out, avoid relying on AI to get things done. It is tempting, sure, but in the long run, it is also harmful. If you skip the

Read More »

GPT-5 is here. Now what?

At long last, OpenAI has released GPT-5. The new system abandons the distinction between OpenAI’s flagship models and its o series of reasoning models, automatically routing user queries to a fast nonreasoning model or a slower reasoning version. It is now available to everyone through the ChatGPT web interface—though nonpaying users may need to wait a few days to gain full access to the new capabilities.  It’s tempting to compare GPT-5 with its explicit predecessor, GPT-4, but the more illuminating juxtaposition is with o1, OpenAI’s first reasoning model, which was released last year. In contrast to GPT-5’s broad release, o1 was initially available only to Plus and Team subscribers. Those users got access to a completely new kind of language model—one that would “reason” through its answers by generating additional text before providing a final response, enabling it to solve much more challenging problems than its nonreasoning counterparts. Whereas o1 was a major technological advancement, GPT-5 is, above all else, a refined product. During a press briefing, Sam Altman compared GPT-5 to Apple’s Retina displays, and it’s an apt analogy, though perhaps not in the way that he intended. Much like an unprecedentedly crisp screen, GPT-5 will furnish a more pleasant and seamless user experience. That’s not nothing, but it falls far short of the transformative AI future that Altman has spent much of the past year hyping. In the briefing, Altman called GPT-5 “a significant step along the path to AGI,” or artificial general intelligence, and maybe he’s right—but if so, it’s a very small step. Take the demo of the model’s abilities that OpenAI showed to MIT Technology Review in advance of its release. Yann Dubois, a post-training lead at OpenAI, asked GPT-5 to design a web application that would help his partner learn French so that she could communicate more easily with his family. 
The model did an admirable job of following his instructions and created an appealing, user-friendly app. But when I gave GPT-4o an almost identical prompt, it produced an app with exactly the same functionality. The only difference is that it wasn’t as aesthetically pleasing.
Some of the other user-experience improvements are more substantial. Having the model rather than the user choose whether to apply reasoning to each query removes a major pain point, especially for users who don’t follow LLM advancements closely.  And, according to Altman, GPT-5 reasons much faster than the o-series models. The fact that OpenAI is releasing it to nonpaying users suggests that it’s also less expensive for the company to run. That’s a big deal: Running powerful models cheaply and quickly is a tough problem, and solving it is key to reducing AI’s environmental impact. 
OpenAI has also taken steps to mitigate hallucinations, which have been a persistent headache. OpenAI’s evaluations suggest that GPT-5 models are substantially less likely to make incorrect claims than their predecessor models, o3 and GPT-4o. If that advancement holds up to scrutiny, it could help pave the way for more reliable and trustworthy agents. “Hallucination can cause real safety and security issues,” says Dawn Song, a professor of computer science at UC Berkeley. For example, an agent that hallucinates software packages could download malicious code to a user’s device. GPT-5 has achieved the state of the art on several benchmarks, including a test of agentic abilities and the coding evaluations SWE-Bench and Aider Polyglot. But according to Clémentine Fourrier, an AI researcher at the company Hugging Face, those evaluations are nearing saturation, which means that current models have achieved close to maximal performance. “It’s basically like looking at the performance of a high schooler on middle-grade problems,” she says. “If the high schooler fails, it tells you something, but if it succeeds, it doesn’t tell you a lot.” Fourrier said she would be impressed if the system achieved a score of 80% or 85% on SWE-Bench—but it managed only 74.9%. Ultimately, the headline message from OpenAI is that GPT-5 feels better to use. “The vibes of this model are really good, and I think that people are really going to feel that, especially average people who haven’t been spending their time thinking about models,” said Nick Turley, the head of ChatGPT. Vibes alone, however, won’t bring about the automated future that Altman has promised. Reasoning felt like a major step forward on the way to AGI. We’re still waiting for the next one.

Read More »

OpenAI launches GPT-5, nano, mini and Pro — not AGI, but capable of generating ‘software-on-demand’

After literally years of hype and speculation, OpenAI has officially launched a new lineup of large language models (LLMs), all different-sized variants of GPT-5, the long-awaited successor to its GPT-4 model from March 2023, nearly 2.5 years ago. The company is rolling out four distinct versions of the model — GPT-5, GPT-5 Mini, GPT-5 Nano, and GPT-5 Pro — to meet varying needs for speed, cost, and computational depth. GPT-5 will soon power ChatGPT exclusively, replacing all other models going forward for its 700 million weekly users, though ChatGPT Pro subscribers ($200/month) can still select older models for the next 60 days. As rumored and reported, OpenAI has replaced the previous system of having users switch the underlying model powering ChatGPT with an automatic router that decides whether to engage a special “GPT-5 thinking” mode with “deeper reasoning” that takes longer to respond on harder queries, or to use the regular GPT-5 or mini models for simpler queries.
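OpenAI has not published how its router works, but the general pattern it describes is easy to sketch: score each query's difficulty, then dispatch it to a fast model or a slower reasoning model. Everything below is hypothetical (the heuristic, the threshold, and the model names are illustrative stand-ins, not OpenAI's actual logic):

```python
# Conceptual sketch of query routing between a fast model and a
# "thinking" model. The difficulty heuristic here is a toy stand-in
# for whatever learned classifier a production router would use.

def estimate_difficulty(query: str) -> float:
    # Crude signals that a query may need multi-step reasoning.
    signals = ["prove", "step by step", "debug", "optimize", "derive"]
    score = sum(s in query.lower() for s in signals) / len(signals)
    # Longer prompts nudge the score up; clamp to [0, 1].
    return min(1.0, score + len(query) / 2000)

def route(query: str, threshold: float = 0.2) -> str:
    # Hypothetical model names; the real routing targets are internal.
    if estimate_difficulty(query) >= threshold:
        return "gpt-5-thinking"
    return "gpt-5-main"

print(route("What's the capital of France?"))          # gpt-5-main
print(route("Prove this bound and derive the limit"))  # gpt-5-thinking
```

The design trade-off the article highlights falls out of this structure: the router, not the user, pays attention to difficulty, so casual users get reasoning only when a query seems to warrant the extra latency and compute.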

Read More »

Energy Secretary Marks 200th Day of Trump Administration at SRNL’s New Advanced Manufacturing Collaborative Center

Advanced Manufacturing Collaborative in South Carolina Set to Lead on AI, Energy, and Manufacturing

WASHINGTON—U.S. Secretary of Energy Chris Wright joined U.S. Senator Lindsey Graham (R-SC), U.S. Representative Joe Wilson (R-SC-02) and state and local leaders for the opening of the new Advanced Manufacturing Collaborative, creating a new chapter for American innovation in South Carolina. Launched during the first Trump Administration and led by the Department of Energy’s Savannah River National Laboratory (SRNL), the new center enables DOE’s mission to support American manufacturing – serving as an economic driver, creating jobs, spurring innovation and maximizing the reach of industry in South Carolina. “The Advanced Manufacturing Collaborative will bring the expertise of the Department of Energy’s National Labs together with innovators from academia and the private sector with one shared goal: to unleash America’s energy potential,” said Energy Secretary Wright. “This mission was started by President Trump in his first term, and I am proud to be representing the Department of Energy 200 days into his second administration for the grand opening of this facility, completed in record time.” “The opening of the Advanced Manufacturing Collaborative on USC Aiken’s campus will greatly enhance the ability for the Savannah River National Laboratory and private sector, along with academia, to work together on critical initiatives,” said U.S. Senator Lindsey Graham (R-SC). “I was proud to secure federal funding for this facility because I believe this partnership will pay dividends for South Carolina and the rest of the nation. The opening of this facility will cement Aiken as a hub for innovation and advanced technology development for years to come. Finally, I would like to thank Secretary Wright and President Trump for recognizing the importance of South Carolina’s contribution to positioning America as a leader in manufacturing and innovation. I will continue to work

Read More »

Russia Says Drone Attack Caused Fire at Afipsky Refinery

Drone attacks in the early hours of Thursday triggered a fire at the independent Afipsky refinery in southern Russia, according to regional emergency services. “A gas and gas-condensate processing unit caught on fire,” the services said in a statement on Telegram. No further details on the extent of the damage were provided. The blaze was fully extinguished by 8:21 a.m. local time, according to the statement. The facility has since resumed normal operations, its press service said. The incident comes during a renewed wave of Ukrainian drone strikes targeting Russia’s downstream oil sector. Earlier this month, similar attacks disrupted operations at two major refineries operated by Rosneft PJSC. The strikes were in response to increasingly intense Russian barrages, according to the Ukrainian General Staff. The Kremlin is now considering a potential concession to US President Donald Trump, which could include an air truce with Ukraine to head off secondary sanctions, according to people familiar with the situation. The Afipsky refinery has a processing capacity of as much as 9.1 million tons of crude oil annually, or some 180,000 barrels per day, which makes it one of Russia’s smaller facilities. The nation currently processes more than 5 million barrels of crude daily, according to Bloomberg estimates based on industry data.

Read More »

Harbour Energy Jumps the Most Since 2023 in London Trading

Harbour Energy Plc, the UK’s biggest independent oil and gas producer, jumped the most since 2023 in London trading after announcing the start of a $100 million share buyback and raising financial targets. The company reported strong first-half earnings on Thursday, more than tripling free cash flow as it incorporated assets acquired from Wintershall Dea last year. That allowed it to raise its full-year cash forecast by about 10% to $1 billion and announce the fresh buyback. “We strengthened our financial position” despite market volatility, Chief Executive Officer Linda Cook told reporters. Harbour “entered the second half in an excellent position.” The shares climbed as much as 21% and traded up 13% as of 10:24 a.m. London time. Recent months have seen wild oil-market swings, with prices buffeted by US President Donald Trump’s trade war, shifting OPEC+ policy and Israel’s attacks on Iran. Yet Harbour’s integration of Wintershall Dea fields, including in Norway, Germany and Argentina, allowed it to triple daily production to 488,000 barrels of oil equivalent and raise the lower end of its full-year output guidance. The new buyback will take total shareholder distributions to $555 million in 2025, assuming it completes by year’s end, Harbour said in a statement, adding that it has to conclude by March 31. The company declared an interim dividend of $227.5 million, or 13.19 cents a share, in line with its annual payout policy. Harbour trades in London and operates fields in the UK North Sea. Yet it’s among many companies working on the British continental shelf to reassess their activities after several tax increases. In May it announced plans to cut jobs, and on Thursday said it expects to complete the reorganization by the end of this quarter. “So long as the fiscal regime is as it is in the country, investment here just

Read More »

Why utilities should bring water into the data center energy conversation

Pete Elliott is senior technical staff consultant at ChemTreat and Richard Tribble is technical service consultant at ChemTreat. The growth of data centers is accelerating rapidly, driven by generative AI, cloud computing and increased digital infrastructure demand. As utilities plan for the resulting rise in electricity consumption, one critical factor is often left out of the conversation: water. Cooling systems are among the most resource-intensive components of data center operations. Whether using evaporative cooling towers, liquid-cooled systems, or air-based methods, the trade-offs between water consumption and energy demand carry significant implications for utilities and grid planning. In many cases, these trade-offs are not fully integrated into siting, design or forecasting discussions, despite their direct impact on infrastructure resilience and long-term environmental performance. Utilities and data center operators will need to coordinate more closely to address these challenges as energy and water demand rise in tandem.

AI is reshaping infrastructure demand

AI workloads require far greater computing power than conventional applications. This demand is driving up server rack power densities and increasing the heat load across data centers. Facilities that once averaged 8 kW per rack now often exceed 17 kW, with projections approaching 30 kW by 2027. As thermal loads increase, so do the cooling requirements. Data center power demand is expected to grow 50% by 2027 and potentially 165% by 2030. In parallel, total onsite and offsite water use associated with AI infrastructure is projected to reach 4.2 to 6.6 billion cubic meters annually, equivalent to nearly half the United Kingdom’s annual water withdrawals. These trends place added strain not only on utility-scale energy infrastructure, but also on regional water systems, particularly in areas already facing water scarcity or seasonal stress.
Understanding the water-energy trade-off

Evaporative cooling systems are commonly used in data centers due to their thermodynamic efficiency.
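The physical coupling between heat load and water use can be sketched from first principles: evaporating one kilogram of water absorbs roughly 2.26 MJ, and 1 kWh is 3.6 MJ, so an ideal evaporative system consumes about 1.6 liters of water per kWh of heat rejected. The example below is a rough sketch under that idealized assumption (the rack counts are hypothetical, and real towers add drift and blowdown losses on top of evaporation):

```python
# Back-of-envelope sketch: water evaporated by an idealized evaporative
# cooling system rejecting a data hall's heat load.
LATENT_HEAT_MJ_PER_KG = 2.26  # latent heat of vaporization of water
MJ_PER_KWH = 3.6

def water_liters_per_hour(rack_kw: float, racks: int) -> float:
    # Assume essentially all rack power becomes heat that must be rejected,
    # and that all heat is removed by evaporation (no drift/blowdown terms).
    heat_kwh_per_hour = rack_kw * racks
    return heat_kwh_per_hour * MJ_PER_KWH / LATENT_HEAT_MJ_PER_KG

# A hypothetical 500-rack hall at today's ~17 kW density versus the
# projected ~30 kW density cited above.
print(round(water_liters_per_hour(17, 500)))  # 13540 L/h
print(round(water_liters_per_hour(30, 500)))  # 23894 L/h
```

Even this simplified estimate shows why density projections matter to utilities: moving from 17 kW to 30 kW racks raises water draw in direct proportion to the added heat load.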

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on one week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE