Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI
Bitcoin
Datacenter
Energy
Featured Articles

Al-Sada Reappointed as Rosneft Chair
Mohammed Bin Saleh Al-Sada has been re-elected chair of the board of directors of Russia’s state-owned PJSC Rosneft Oil Co. Ten other members of the board were elected at a shareholders’ meeting, Rosneft said in a media release. Al-Sada was elected Rosneft chairman for the first time in June 2023. He has accumulated over 40 years of experience in the energy industry. Presently, Al-Sada serves as the chairman of the board of trustees at Doha University for Science and Technology, in Qatar, Rosneft said. Rosneft added that Al-Sada also serves as a member of the board of trustees of the Abdullah Bin Hamad Al-Attiyah International Foundation for Energy and Sustainable Development and as an advisory board member for the GCC Supreme Council. He is also the vice chairman of the board of directors for Nesma Infrastructure & Technology. Between 2007 and 2011, he served as Qatari Minister of State for Energy and Industry Affairs. From 2011 to 2018, he was Qatar’s Minister of Energy and Industry and Chairman of the Board of Qatar Petroleum, now known as QatarEnergy, Rosneft noted. Board members also elected the chairmen of the three permanent committees of the board. The board of directors of Rosneft includes Andrey I. Akimov, chairman of the management board of Gazprombank; Pedro A. Aquino Jr. (independent), chief executive officer of Oil & Petroleum Holdings International Resources Ltd.; Faizal Alsuwaidi and Hamad Rashid Al-Mohannadi, representatives of Qatar Investment Authority; Viktor G. Martynov (independent), rector of Gubkin Russian State University of Oil and Gas (National Research University); Alexander D. Nekipelov (independent), director of Moscow School of Economics at the Lomonosov Moscow State University; Alexander V. Novak, Russia’s Deputy Prime Minister; Maxim S. Oreshkin, Deputy Head of the Administration of the President of the Russian Federation; Govind Kottis Satish (independent), managing director of Value Prolific Consulting Services

Naftogaz Seals $225MM Loans to Buy Gas for Winter
Naftogaz Group has secured loans totaling UAH 9.4 billion ($225.09 million) from local banks to procure winter gas for Ukraine. JSC CB PrivatBank and PJSC JSB Ukrgasbank have each committed UAH 4.7 billion, according to online statements by state-owned integrated energy company Naftogaz. The funds will be used to stock underground storage facilities for the 2025-26 heating season, it said. “Naftogaz is diversifying its sources and routes of gas supply”, Naftogaz said. “This enhances Ukraine’s energy security and resilience amid the ongoing full-scale war”. Chief executive Sergii Koretskyi said, “At the same time, we continue to work with international financial institutions and partner countries”. Last April the European Bank for Reconstruction and Development (EBRD) said it had agreed to lend EUR 270 million ($317.28 million) to Naftogaz, complemented by a EUR 139 million grant from the Norwegian government. These will be used to buy nearly one billion cubic meters (35.31 billion cubic feet) of gas, Naftogaz said separately at the time. “Naftogaz has been the recipient of two previous EBRD loans for a total of EUR 500 million, backed by EUR 275 million in guarantees from the United States, Norway, Germany, France, Canada and The Netherlands, and complemented by earlier grant finance from Norway of EUR 187 million for emergency gas purchases”, the EBRD said April 25. “The latest agreement lifts EBRD finance for Naftogaz to EUR 770 million since 2022. “Norway’s latest grant finance brings its total wartime energy sector-focused support for Ukraine through the EBRD to EUR 460 million”. On July 11 the EBRD announced EUR 400 million in new funding for Ukraine that included a EUR 160-million loan to Naftogaz company Ukrnafta for the installation of 250 megawatts of small-scale gas-fired distributed power generation capacity across Ukraine. “At the Ukraine Recovery Conference in Rome on 10-11 July, the

Saipem, Subsea7 Agree Merger
Saipem SpA signed a binding merger deal to acquire Subsea7 SA and thereafter rebrand into Saipem7, the companies said Thursday, after an initial agreement last February. Subsea7 shareholders would receive 6.688 Saipem shares for each Subsea7 share. The combined company’s share capital would be equally divided between the shareholders of Italian state-backed Saipem and Luxembourg-registered Subsea7, assuming all the latter’s shareholders participate in the transaction, a joint statement said. As the biggest shareholders of Saipem, Eni SpA and CDP Equity SpA would respectively own about 10.6 percent and 6.4 percent of Saipem7. Siem Industries SA, Subsea7’s top shareholder, would own around 11.8 percent. The parties expect to complete the merger in the latter half of 2026, subject to regulatory approvals, approval votes by the shareholders of both Saipem and Subsea7, and other customary conditions. Eni, CDP Equity and Siem Industries signed an agreement to vote for the combination. As part of the tripartite agreement, Eni and CDP Equity are entitled to designate Saipem7’s chief executive, who is planned to be Alessandro Puliti, Saipem’s chief executive and general manager. Siem Industries has been given the right to designate Saipem7’s chair, who is expected to be Subsea7 chair Kristian Siem. These designations would still be subject to approval by the combined company’s board, according to the statement. The resulting entity would inherit projects in over 60 countries and operate “a full spectrum of offshore and onshore services, from drilling, engineering and construction to life-of-field services and decommissioning, with an increased ability to optimize project scheduling for clients in oil, gas, carbon capture and renewable energy”, the statement said. Saipem7 would have more than 60 construction vessels able to perform “shallow-water to ultra-deepwater operations, utilising a full portfolio of heavy lift, high-end J-lay, S-lay and reel-lay rigid pipeline solutions, flexible pipe and umbilical

CISPE seeks to annul Broadcom’s VMware takeover
However, Forrester Research Senior Analyst Dario Maisto said, “Broadcom VMware commercial practices have been under the lenses for quite some time now. While we may agree or disagree with the European Commission’s decision to approve Broadcom’s acquisition of VMware, the fact is that a number of European organizations are suffering from unilateral price increases and arbitrary closure of services.” European organizations, he pointed out, “are too much dependent on IT vendors that act as monopolies or oligopolies in the best case scenario. Something like the sought-for Buy European Act may be a way to promote better competition in the European cloud and IT markets.” The appeal, said Maisto, “is a long term play, though. In the short term, CISPE should keep seeking a fairer cloud market in Europe. Results will come sooner or later, as it was for the Microsoft case.” ‘Time to find a new dance partner’ John Annand, digital infrastructure practice lead at Info-Tech Research Group, is of two minds on the topic. “No doubt that what Broadcom is doing is manifestly geared towards their own benefit at the expense of (soon to be in many cases former) partners and customers,” he said. “When Broadcom completed the acquisition of VMWare, they promised an end to special back-room pricing deals that gave differential discounts to preferred hardware or public cloud providers.” Basically, he said, “Broadcom changed the license deals for all their cloud provider partners, and they did so equally. However, a subset of those partners are [part of] CISPE, and all the members of that subset are on the small side. So, while the vast majority of CSPs worldwide are affected negatively by the Broadcom changes, for CISPE members, 100% of them will be negatively affected. Does this affect competition? Sure.” Annand also noted that, as of October, European clients
Anthropic unveils ‘auditing agents’ to test for AI misalignment
When models attempt to get their way or become overly accommodating to the user, it can mean trouble for enterprises. Therefore, it’s essential that, in addition to performance evaluations, organizations conduct alignment testing. However, alignment audits often present two major challenges: scalability and validation. Alignment testing is time-consuming for human researchers, and it is hard to be confident that an audit has caught everything. In a paper, Anthropic researchers said they developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers said these agents, which were created during the pre-deployment testing of Claude Opus 4, improved alignment validation tests and allowed researchers to run multiple parallel audits at scale. Anthropic also released a replication of its audit agents on GitHub. In a July 24 post on X, Anthropic said it built three AI agents to autonomously complete alignment auditing tasks and that, in testing, the agents “successfully uncovered hidden goals, built safety evaluations, and surfaced concerning behaviors.” “We introduce three agents that autonomously complete alignment auditing tasks. We also introduce three environments that formalize alignment auditing workflows as auditing games, and use them to evaluate our agents,” the researchers said in the paper. The three agents they explored were: Tool-using investigator agent for

Freed says 20,000 clinicians are using its medical AI transcription ‘scribe,’ but competition is rising fast
Even generative AI critics and detractors have to admit the technology is great for something: transcription. If you’ve joined a meeting on Zoom, Microsoft Teams, Google Meet or other video call platform of your choice at any point in the last year or so, you’ve likely noticed an increased number of AI notetakers joining the conference call as well. Indeed, not only do these platforms all have AI transcription features built in, but there are of course other standalone services like Otter AI (used by VentureBeat along with the Google Workspace suite of apps), and models such as OpenAI’s new gpt-4o-transcribe and older open-source Whisper, aiOla, and many others with specific niches and roles. One such startup is San Francisco-based Freed AI, co-founded in 2022 by former Facebook engineers Erez Druk and Andrey Bannikov, now its CEO and CTO, respectively. The idea was simple: give doctors and medical professionals a way to automatically transcribe their conversations with patients, capture accurate health-specific terminology, and extract insights and action plans from the conversations without the physician having to lift a finger.

Enphase Energy looks to third-party financing to boost business
Dive Brief: U.S. solar and battery installers are not rushing to stock up on equipment ahead of the expiration of the 25D federal tax credit for customer-owned residential solar systems at the end of the year, Enphase Energy CEO Badri Kothandaraman said Tuesday on the company’s second-quarter earnings call. But Kothandaraman said Enphase expects “pull-forward” business to materialize early in the fourth quarter, later than some analysts expected. Julien Dumoulin-Smith, an equity analyst with investment bank Jefferies, said in a Wednesday note that “we were too optimistic in our preview and expected demand pull-in from [the] potential expiry of 25D.” After the 25D credit expires, the solar industry “must evolve rapidly” toward leasing arrangements and power purchase agreements while boosting battery attachment rates and driving down installation and customer acquisition costs, Kothandaraman said. Dive Insight: In spite of a muted, single-digit increase in equipment demand during the second quarter, Enphase still sees installers who sell solar and battery systems directly to homeowners ramping up buying activity later in the year, Kothandaraman said. “Our installers are experts and they know what to do,” he said. “They can get a lot of installations done quickly.” The tax and spending law President Trump signed on July 4 ends the 30% investment tax credit for customer-owned residential solar and battery systems placed in service after Dec. 31. Third-party financed systems, also known as third-party ownership or TPO systems, are covered by a different investment tax credit that also applies to utility-scale solar, wind and other clean energy systems. TPO installers have until the middle of 2026 to make “safe harbor” equipment purchases, Kothandaraman said. Solar and storage resellers must quickly adopt third-party financing models to survive a sharp contraction in the residential distributed energy market next year, Kothandaraman said. Kothandaraman’s “personal view is

US electricity demand to grow 2.5% annually through 2035: BofA Institute
Dive Brief: U.S. electricity demand will grow at a 2.5% compound annual growth rate through 2035 — compared with a 0.5% CAGR from 2014-2024 — according to research distributed by Bank of America Institute on Tuesday. Building electrification, data centers, industrial growth and electric vehicles are among the factors contributing to growth, according to the prediction. Utilities will need to increase spending to expand and replace aging power generation, transmission and distribution assets, and “deregulation and accelerated permitting may further help get more projects off the starting line,” the analysts said. The U.S. Senate Committee on Energy and Natural Resources heard testimony on Wednesday about the need to meet the rising electricity demand. Generation interconnection timelines are too long, transmission development lags demand and “permitting is fragmented and sequential,” Jeff Tench, executive vice president, North America and Asia Pacific, for Vantage Data Centers, told lawmakers. Dive Insight: There are a range of estimates around future U.S. electricity demand, but all point to a rapid rise after decades of stagnant growth. Bank of America Institute’s 2.5% CAGR prediction includes historical annual growth of about 0.5%, with another 1% coming from building electrification, 0.5% from data centers, 0.3% from industrial growth and 0.2% from electric vehicles. “These energy needs pose a challenge: US energy infrastructure is aging quickly and in need of replacement. In
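The 2.5% figure is simply the sum of the components the analysts list, and compounding it over a decade shows how sharply the trajectory departs from the prior 0.5% trend. The short Python sketch below reproduces that arithmetic; the 2024 baseline is normalized to 1.0 purely for illustration, since the article does not give an absolute demand figure.

# Demand-growth components as reported by Bank of America Institute (percentage points per year).
components = {
    "historical trend": 0.5,
    "building electrification": 1.0,
    "data centers": 0.5,
    "industrial growth": 0.3,
    "electric vehicles": 0.2,
}
cagr = sum(components.values()) / 100            # 0.025, i.e. 2.5% per year
baseline_2024 = 1.0                              # normalized; actual TWh not given in the article
demand_2035 = baseline_2024 * (1 + cagr) ** 11   # 2024 -> 2035 is 11 compounding years
trend_2035 = baseline_2024 * (1 + 0.005) ** 11   # if the 2014-2024 trend had simply continued
print(f"Implied CAGR: {cagr:.1%}")                              # 2.5%
print(f"2035 demand vs 2024: {demand_2035:.2f}x of baseline")   # ~1.31x
print(f"At the old 0.5% trend: {trend_2035:.2f}x of baseline")  # ~1.06x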

DOE Announces Site Selection for AI Data Center and Energy Infrastructure Development on Federal Lands
The forthcoming solicitations will drive innovation in reliable energy technologies, contribute to lower energy costs, and strengthen American leadership in artificial intelligence.

WASHINGTON – The U.S. Department of Energy (DOE) today announced the next steps in the Trump administration’s plan to accelerate the development of AI infrastructure through siting on DOE lands. DOE has selected four sites—Idaho National Laboratory, Oak Ridge Reservation, Paducah Gaseous Diffusion Plant and Savannah River Site—to move forward with plans to invite private sector partners to develop cutting-edge AI data center and energy generation projects. Today’s announcement supports the Trump administration’s goals of utilizing Federal lands to lower energy costs and help power the global AI race, as outlined in President Trump’s Executive Orders on Accelerating Federal Permitting of Data Center Infrastructure, Deploying Advanced Nuclear Reactor Technologies for National Security, and Unleashing American Energy. “By leveraging DOE land assets for the deployment of AI and energy infrastructure, we are taking a bold step to accelerate the next Manhattan Project—ensuring U.S. AI and energy leadership,” said Energy Secretary Chris Wright. “These sites are uniquely positioned to host data centers as well as power generation to bolster grid reliability, strengthen our national security, and reduce energy costs.” DOE received enormous interest in response to its April request for information (RFI) that helped inform the selection of these sites. The chosen locations are well-situated for large-scale data centers, new power generation, and other necessary infrastructure. DOE looks forward to working with data center developers, energy companies, and the broader public, in consultation with states, local governments, and federally recognized tribes that these projects will serve, to further advance this important initiative. More details regarding project scope, eligibility requirements, and submission guidelines at each site will be available with the site-specific releases. These solicitations are expected to be

USA Crude Oil Inventories Drop Week on Week
U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 3.2 million barrels from the week ending July 11 to the week ending July 18, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. That report was released on July 23 and included data for the week ending July 18. It showed that crude oil stocks, not including the SPR, stood at 419.0 million barrels on July 18, 422.2 million barrels on July 11, and 436.5 million barrels on July 19, 2024. Crude oil in the SPR stood at 402.5 million barrels on July 18, 402.7 million barrels on July 11, and 374.4 million barrels on July 19, 2024, the report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.653 billion barrels on July 18, the report highlighted. Total petroleum stocks were down 5.4 million barrels week on week and down 12.7 million barrels year on year, the report showed. “At 419 million barrels, U.S. crude oil inventories are about nine percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories decreased by 1.7 million barrels from last week and are slightly above the five year average for this time of year. Both finished gasoline inventories and blending components inventories decreased last week,” it added. “Distillate fuel inventories increased by 2.9 million barrels last week and are about 19 percent below the five year average for this time of year. Propane/propylene inventories decreased by 0.5 million barrels from last week and are 10 percent above the five year average for this time of
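The week-on-week and year-on-year moves quoted in the report follow directly from the stock levels it lists; a short Python sketch, using only the figures above, reproduces the arithmetic for the commercial and SPR crude numbers.

# Crude oil stocks in million barrels, as listed in the EIA weekly petroleum status report.
commercial = {"2025-07-18": 419.0, "2025-07-11": 422.2, "2024-07-19": 436.5}  # excluding the SPR
spr        = {"2025-07-18": 402.5, "2025-07-11": 402.7, "2024-07-19": 374.4}

wow = commercial["2025-07-18"] - commercial["2025-07-11"]   # week on week
yoy = commercial["2025-07-18"] - commercial["2024-07-19"]   # year on year
print(f"Commercial crude, week on week: {wow:+.1f} million barrels")   # -3.2
print(f"Commercial crude, year on year: {yoy:+.1f} million barrels")   # -17.5
print(f"SPR, week on week: {spr['2025-07-18'] - spr['2025-07-11']:+.1f} million barrels")  # -0.2
print(f"SPR, year on year: {spr['2025-07-18'] - spr['2024-07-19']:+.1f} million barrels")  # +28.1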

SLB Scores NEP Carbon Storage Project Job
Global energy technology company Schlumberger NV (SLB) has secured a contract to develop a carbon storage site in the North Sea. The company said the contract was awarded by the Northern Endurance Partnership (NEP), an incorporated joint venture between BP PLC, Equinor ASA, and TotalEnergies SE. SLB said that NEP is developing both onshore and offshore infrastructure needed to transport the carbon dioxide (CO2) from carbon capture projects across Teesside and the Humber. These projects are collectively known as the East Coast Cluster. SLB said the captured carbon will be safely stored under the North Sea. SLB said it will utilize its Sequestri carbon storage solutions portfolio, comprising technologies designed and qualified for developing carbon storage sites, to build six storage wells. The project encompasses drilling, measurement, cementing, fluids, completions, wireline, and pumping services, the company said. “Technologies and services tailored for carbon storage will play a critical role in shifting the economics and safeguarding the integrity of carbon storage projects before and after the FID”, Katherine Rojas, senior vice president of Industrial Decarbonization at SLB, said. “We are excited to be a part of this groundbreaking CCS project in the UK, leveraging the proven carbon storage technologies in our Sequestri portfolio and our extensive expertise delivering complex CCS projects around the world”. The NEP infrastructure is expected to play a vital role in helping the UK’s highest carbon-intensive industrial areas reach net-zero emissions. Through the Endurance saline aquifer and nearby storage facilities, NEP can provide storage for as much as one billion metric tons of CO2, SLB said. This infrastructure will facilitate the transportation and permanent storage of an initial four million metric tons of CO2 annually, with operations anticipated to commence in 2028.

Trump’s big bill is ‘tough but constructive’ for renewables: NextEra
Dive Brief: NextEra Energy is well-positioned to shield its renewable energy projects from early tax credit phase-outs under the One Big Beautiful Bill Act and capture a greater share of the market as a result, John Ketchum, president, CEO and chairman of NextEra Energy, said Wednesday during a second quarter earnings call. Because NextEra is in a “constant state of construction,” the company expects to safe harbor its projects through 2029, Ketchum said. That should bring in more customers in 2028 and 2029 as competing developers’ costs begin to rise, he said. NextEra also aims to capitalize on growing demand for new gas and nuclear generation, but Ketchum said it was still too early to predict customer needs beyond 2030. Dive Insight: If the One Big Beautiful Bill was meant to limit the growth of renewable energy, executives at NextEra don’t see that happening — at least not for their company. During Wednesday’s Q&A with analysts, CFO Michael Dunne rejected assertions that the reconciliation bill created a “cliff” for renewable energy projects, arguing that it’s “just changing the rule set, and we’ll continue to build the energy infrastructure that this country needs.” Ketchum told analysts that he was confident the company could take advantage of the law’s exceptions for projects that begin construction before July 4, 2026, to lock in credits through 2029. But smaller developers may struggle to access capital and begin construction by that date, Ketchum said, resulting in less competition for NextEra Energy Resources in 2028 and 2029. “That could create potentially bigger opportunities for us in those years,” Ketchum said. He also said that the company may be able to buy attractive energy projects from other developers at a discount in the years to come as a result of the reconciliation bill. Analysts on the call were

AI means the end of internet search as we’ve known it
We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way. But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way. Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results. More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest). People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see.
Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate. Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know? In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good. Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed. And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first. But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad.
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing. For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search. “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly. It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be. But once you’ve used AI Overviews a bit, you realize they are different. Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web.
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
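For readers who want a concrete picture of what “generate and compose” means in practice, the underlying pattern is retrieval-augmented generation: fetch relevant documents for a natural-language question, then have a language model write an answer grounded in, and cited to, those sources. The Python sketch below is only a generic illustration of that flow; search_web and generate are hypothetical placeholders, not Google or OpenAI APIs, and real systems layer ranking, safety filtering, and corroboration checks on top.

# A minimal retrieval-augmented answering loop (illustrative sketch; the helpers
# below are hypothetical placeholders, not real search-engine or model APIs).

def search_web(query: str, k: int = 5) -> list[dict]:
    """Placeholder: return the top-k documents {'title', 'url', 'snippet'} for the query."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call whatever large language model the system uses."""
    raise NotImplementedError

def answer(query: str) -> str:
    docs = search_web(query)
    # Ground the model in retrieved text and ask it to cite numbered sources,
    # so the answer can be traced back to the pages it drew on.
    context = "\n\n".join(f"[{i + 1}] {d['title']} ({d['url']}): {d['snippet']}"
                          for i, d in enumerate(docs))
    prompt = (
        "Answer the question using only the numbered sources below, "
        "citing them like [1]. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)

The “say so” instruction is this sketch’s stand-in for the grounding and abstention checks production systems add on top; without something like it, the model is free to fill gaps from its own parameters, which is where hallucinated answers come from.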
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak.
“And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search
Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.
Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.
Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.
ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. “You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.” But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way.
What reason will people have to click through to the original source, if all the information they seek is right there in the search result? Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend. “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says. Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.” “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew? A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it. According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting.
Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience. Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.

“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search, Liz Reid.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation.
The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed! The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.

It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers are evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices.
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.”

This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses. But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you take the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different, hopefully helpful ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.

Subsea7 Scores Various Contracts Globally
Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Driving into the future
Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more. We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen. Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake. What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story.
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa. Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Oil Holds at Highest Levels Since October
Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither have responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

What to expect from NaaS in 2025
Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

UK battery storage industry ‘back on track’
UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW from the electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217 million profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Alibaba’s new open source Qwen3-235B-A22B-2507 beats Kimi-2 and offers low compute version
Chinese e-commerce giant Alibaba has made waves globally in the tech and business communities with its own family of “Qwen” generative AI large language models, beginning with the launch of the original Tongyi Qianwen LLM chatbot in April 2023 through the release of Qwen 3 in April 2025. Not only are its models powerful, scoring highly on third-party benchmark tests of math, science, reasoning, and writing tasks, but for the most part they’ve been released under permissive open source licensing terms, allowing organizations and enterprises to download them, customize them, run them, and generally use them for a wide variety of purposes, even commercial ones. Think of them as an alternative to DeepSeek. This week, Alibaba’s “Qwen Team,” as its AI division is known, released the latest updates to its Qwen family, and they’re already attracting attention once more from AI power users in the West for their top performance, in one case edging out even the new Kimi-2 model from rival Chinese AI startup Moonshot, released in mid-July 2025.
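For readers who want to experiment with one of these open-weight releases, below is a minimal sketch using the standard Hugging Face Transformers loading pattern. The repository ID, prompt, and generation settings are illustrative assumptions rather than anything specified in the article; check the Qwen Team's official model card for the exact repo name, license terms, and hardware requirements (a 235B-parameter checkpoint needs a multi-GPU server).

```python
# Minimal sketch: loading an open-weight Qwen release with Hugging Face Transformers.
# The repo ID below is an assumption for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-235B-A22B-Instruct-2507"  # hypothetical/illustrative repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard across available GPUs (requires `accelerate`)
)

messages = [{"role": "user", "content": "Summarize the Qwen model family in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern works for the smaller Qwen checkpoints, which is the more realistic starting point for anyone without datacenter-class hardware.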

Fighting forever chemicals and startup fatigue
In partnership with the Michigan Economic Development Corporation

What if we could permanently remove the toxic “forever chemicals” contaminating our water? That’s the driving force behind Michigan-based startup Enspired Solutions, founded by environmental toxicologist Denise Kay and chemical engineer Meng Wang. The duo left corporate consulting in the rearview mirror to take on one of the most pervasive environmental challenges: PFAS. “PFAS is referred to as a forever chemical because it is so resistant to break down,” says Kay. “It does not break down naturally in the environment, so it just circles around and around. This chemistry, which would break that cycle and break the molecule apart, could really support the health of all of us.” Basing the company in Michigan was both a strategic and a practical choice. The state has been a leader in PFAS regulation with a startup infrastructure—buoyed by the Michigan Economic Development Corporation (MEDC)—that helped turn an ambitious vision into a viable business. From intellectual property analyses to forecasting finances and fundraising guidance, the MEDC’s programs offered Kay and Wang the resources to focus on building their PFASigator: a machine the size of two large refrigerators that uses ultraviolet light and chemistry to break down PFAS in water. In other words, “it essentially eats PFAS.”
Despite the support from the MEDC, the journey has been far from smooth. “As people say, being an entrepreneur and running a startup is like a rollercoaster,” Kay says. “You have high moments, and you have very low moments when you think nothing’s ever going to move forward.” Without revenue or salaries in the early days, the co-founders had to be sustained by something greater than financial incentive.
“If problem solving and learning new talents do not provide sufficient intrinsic reward for a founder to be satisfied throughout what I guarantee will be a long duration effort, then that founder may need to reset their expectations. Because the financial rewards of entrepreneurship are small throughout the process.” Still, Kay remains optimistic about the road ahead for Enspired Solutions, for clean water innovation, and for other founders walking down a similar path. “Often, founders are coached about formulas for fundraising, formulas for startup success. Learning those formulas and expectations is important, but it’s also important to not forget that it’s your creativity and innovation and foresight that got you to the place you’re in and drove you to start a company. Ultimately, people still want to see that shine through.” This episode of Business Lab is produced in partnership with the Michigan Economic Development Corporation. Full Transcript Megan Tatum: From MIT Technology Review, I’m Megan Tatum. This is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Today’s episode is brought to you in partnership with the Michigan Economic Development Corporation.Our topic today is launching a technology startup in the US state of Michigan. Building out an innovative idea into a viable product and company requires knowledge and resources that individuals might not have. That’s why the Michigan Economic Development Corporation, or the MEDC, has launched an innovation campaign to support technology entrepreneurs.Two words for you: startup ecosystem.My guest is Dr. Denise Kay, the co-founder and CEO at Enspired Solutions, a Michigan-based startup focused on removing synthetic forever chemicals called PFAS from water.Welcome, Denise. Dr. Denise Kay: Hi, Megan. Megan: Hi. Thank you so much for joining us. To get us started, Denise, I wondered if we could talk about Enspired Solutions a bit more. How did the idea come about, and what does your company do?
Denise: Well, my co-founder, Meng, and I had careers in consulting, advising clients on the fate and toxicity of chemicals in the environment. What we did was evaluate how chemicals moved through soil, water, and air, and what toxic impact they might have on humans and wildlife. That put us in a really unique position to see early on the environmental and health ramifications of the manmade chemical PFAS in our environment. When we learned of a very novel and elegant chemistry that could effectively destroy PFAS, we could foresee the value in making this chemistry available for commercial use and the potential for a significant positive impact on maintaining healthy water resources for all of us.Like you mentioned, PFAS is referred to as a forever chemical because it is so resistant to break down. It does not break down naturally in the environment, so it just circles around and around. This chemistry, which would break that cycle and break the molecule apart, could really support the health of all of us.Ultimately, Meng and I quit our jobs, and we founded Enspired Solutions. Our objective was to design, manufacture, and sell commercial-scale equipment that destroys PFAS in water based on this laboratory bench-scale chemistry that had been discovered, the goal being that this toxic contaminant does not continue to circulate in our natural resources.At this point, we have won an award from the EPA and Department of Defense, and proven our technology in over 200 different water samples ranging from groundwater, surface water, landfill leachate, industrial wastewater, [and] municipal wastewater. It’s really everywhere. What we’re seeing traction in right now is customer applications managing semiconductor waste. Groundwater and surface water around airports tend to be high in PFAS. Centralized waste disposal facilities that collect and manage PFAS-contaminated liquids. And also, even transitioning firetrucks to PFAS-free firefighting foams. Megan: Fantastic. That’s a huge breadth of applications, incredible stuff. Denise: Yeah. Megan: You launched about four years ago now. I wondered what factors made Michigan the right place to build and grow the company? Denise: That is something we put a lot of thought into, because I live in Michigan, and Meng lives in Illinois, so when it was just the two of us, there was even that, “Okay, what is going to be our headquarters?” We looked at a number of factors. Some of the things we considered were rentable incubator space. By incubator, I mean startup incubators or innovation centers. The startup support network, a pool of future employees, and what position the state agencies were taking regarding PFAS.While thinking about all those things and investigating our communities, in Michigan, we found a space to rent where we could do chemistry experiments in an incubator environment. Somewhere where we were surrounded by other entrepreneurs, which we knew was something we had to learn how to do. We were great chemists, but we knew that surrounding ourselves with those skills that could be a gap for us was going to be helpful.Also, we know that Michigan has moved much faster than other states in identifying PFAS sources in the environment and regulating its presence. This combination was something we knew would be the right place for starting our business and having success. Megan: It was a perfect setting for those two reasons. What were the first stages of your journey working with the Michigan Economic Development Corporation, the MEDC?
Denise: Well, both my co-founder, Meng, and I are first-time entrepreneurs. MEDC was one of the first resources I reached out to, starting from a Google search. They were an information resource we turned to initially, and then again and again for learning some fundamental skills. And receiving one-on-one expert mentorship for things like business contracts, understanding intellectual property landscapes, tracking and forecasting our business finances, and even how to approach fundraising. Megan: Wow. It sounds like they were an invaluable resource in those early days. How did early-stage research and development progress from that point? What were the key MEDC services and programs you used to get started?
Denise: Well, our business is based on cutting-edge science, truly cutting-edge science. Understanding the intellectual property landscape, which is a term used to describe intellectual property, patents, trademarks, trade secrets that are related to the science we were founding our business on, it was very important. So that we knew we were starting on a path, that we wouldn’t hit a wall three years from now. The MEDC performed an IP landscape survey for us. They searched the breadth of patents, and patent applications, and trademarks, and those things, and provided that for Meng and me to review and consider our position before really, really digging in and spending a lot of emotional time and money on the business. The MEDC also helped us early on create a model in Excel for tracking business financing and forecasting, forecasting our future financial needs, so that we could be proactive instead of reactive to financial limitations. We knew it wasn’t going to be inexpensive to design and build a piece of equipment that’s the size of two very large refrigerators that had never been built before. That type of financial-forward modeling helped us figure out when we would need to start fundraising and taking in investments. As we progressed along that, the MEDC also provided support of an attorney who reviewed contract language to make sure that we really understood various agreements that we were signing. Megan: Right. You mentioned that you and your co-founder were first-time entrepreneurs, as you put it. Tech acumen and business acumen are very different sets of skills. I wondered, what was the process like, developing this innovative technology while also building out a viable business plan? Denise: Well, Meng is a brilliant individual. She is a chemical engineer who also has an MBA. Meng had fantastic training to help understand the basis of how businesses function, in addition to understanding both the engineering and the chemistry behind what we were trying to do.I am an environmental toxicologist by training. I’ve had a longer career than Meng in that field. Over time, I have grown new offices and established new offices for different consulting firms I’ve worked for. I had the experience with people, space, culture, and running a business from that side. Meng has the financial MBA knowledge basis for a business. We’re both excellent chemists and engineers, and those types of things.We had much of the necessary knowledge, at least to take the first steps forward. The challenge became the hard limit of 24 hours in a day and no revenue to hire any support. That’s when the startup support networks like the MEDC became invaluable. It was simply impossible to do everything that needed to be done, especially while we were learning what we were doing. The MEDC and other programs provided support to take some of that load off us, but also helped us to learn to implement the new skills in an efficient manner, less stumbling.
Megan: So many things to juggle, isn’t there, in starting a company. I wondered, in that vein, could you share some successes and highlights from your journey so far? Any partnerships or projects that you’re excited about that you could share with us? Denise: As people say, being an entrepreneur and running a startup is like a rollercoaster. You have high moments and you have very low moments when you think nothing’s ever going to move forward. I’d love to talk about some of the highlights. Our machine, which we call the PFASigator. First of all, coming up with that name has a fun story behind it. The machine is, like I said, about the size of two large refrigerators. It’s very large, and it breaks down PFAS in water. The machine takes in water that has PFAS in it, we add a couple of liquid chemicals, then a very intense ultraviolet light shines on that water, which catalyzes a chemical reaction called reductive defluorination. When all of this is happening and the PFAS molecules are being broken apart to nontoxic compounds, to an outsider, it all still just looks like water with a light shining on it. But the machine is big, and it essentially eats PFAS. Meng and I were bantering, and her young, six-year-old son was in the background at the time. We were throwing names around. Thomas called out, “The PFASigator!” We were like, “Ooh, there’s something there.”
Megan: It’s a great name. Denise: It matches what we do, and it’s a memorable name. We’ve really had fun with that throughout. That was an early highlight, and we’ve stuck with that name. The next highlight I’d say was standing next to our first fully functioning PFASigator. It was big. It was all stainless steel. Meng and I had never been part of building a physical, large object like that. Just standing there, and the picture we have of us, it was exhilarating. That was a magnificent feeling. Selling our first machine was a day that everyone in the company, I think we were about eight at that point, received a bottle of champagne. Megan: Fantastic. Denise: For a startup to go from zero to one, they call it, you’ve sold nothing to you’ve sold something. That’s a real strong milestone and was a celebration for us. I’d say most recently, Enspired has been awarded a very exciting project in Michigan. It is in the contracting phase, so I can’t reveal too many details. But it is with a progressive municipality that will have our PFASigator permanently installed, destroying PFAS. That kind of movement from zero to one, and then a significant contract that will raise the visibility of the effectiveness of our approach and machine, has really buoyed our energy and is pushing us forward. It’s amazing to know we are now having an impact on the sustainability of water resources. That’s what we started the company for. Megan: Awesome. You have some incredible milestones there. But it’s a hard journey, as you’ve said as well, being an entrepreneur. I wondered, finally, what advice would you offer to burgeoning entrepreneurs given your own experience? Denise: I would advise that if problem solving and learning new talents do not provide sufficient intrinsic reward for a founder to be satisfied throughout what I guarantee will be a long duration effort, then that founder may need to reset their expectations, because the financial rewards of entrepreneurship are small throughout the process.Meng and I put [in] some of our personal funds and took no salary, and worked harder than we ever had in our lives for at least a year and a half before we were able to take a small salary. The financial rewards are small throughout the process of being a startup. The rewards are delayed, and in many cases, for many startups, the financial rewards never materialize.It’s a tough journey, and you have to love being on that journey, and be intrinsically rewarded for that for the sake of the journey itself, or you’ll be a very unhappy founder.Megan: It needs to be something you’re as passionate about as I can tell you are about the work you’re doing at Enspired Solutions. Denise: There’s probably one other thing I’d like to add to that. Megan: Of course. Denise: Often, founders are coached about formulas for fundraising, formulas for startup success. Learning those formulas and expectations is important, but it’s also important to not forget that it’s your creativity and innovation and foresight that got you to the place you’re in and drove you to start a company. Ultimately, people still want to see that shine through.” Megan: That’s fantastic advice. Thank you so much, Denise. That was Dr. Denise Kay, the co-founder and CEO at Enspired Solutions, whom I spoke with from an unexpectedly sunny Brighton, England.That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. 
We were founded in 1899 at the Massachusetts Institute of Technology. You can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thanks for listening. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Gemini 2.5 Flash-Lite is now ready for scaled production use
Today, we’re releasing the stable version of Gemini 2.5 Flash-Lite, our fastest and lowest-cost model in the Gemini 2.5 model family ($0.10 per 1M input tokens, $0.40 per 1M output tokens). We built 2.5 Flash-Lite to push the frontier of intelligence per dollar, with native reasoning capabilities that can be optionally toggled on for more demanding use cases. Building on the momentum of 2.5 Pro and 2.5 Flash, this model rounds out our set of 2.5 models that are ready for scaled production use.

Our most cost-efficient and fastest 2.5 model yet
Gemini 2.5 Flash-Lite strikes a balance between performance and cost, without compromising on quality, particularly for latency-sensitive tasks like translation and classification.

Here’s what makes it stand out:

Best in-class speed: Gemini 2.5 Flash-Lite has lower latency than both 2.0 Flash-Lite and 2.0 Flash on a broad sample of prompts.
Cost-efficiency: It’s our lowest-cost 2.5 model yet, priced at $0.10 / 1M input tokens and $0.40 / 1M output tokens, allowing you to handle large volumes of requests affordably. We have also reduced audio input pricing by 40% from the preview launch.
Smart and small: It demonstrates all-around higher quality than 2.0 Flash-Lite across a wide range of benchmarks, including coding, math, science, reasoning, and multimodal understanding.
Fully featured: When you build with 2.5 Flash-Lite, you get access to a 1 million-token context window, controllable thinking budgets, and support for native tools like Grounding with Google Search, Code Execution, and URL Context.

Gemini 2.5 Flash-Lite in action

Since the launch of 2.5 Flash-Lite, we have already seen some incredibly successful deployments. Here are some of our favorites:

Satlyt is building a decentralized space computing platform that will transform how satellite data is processed and utilized for real-time summarization of in-orbit telemetry, autonomous task management, and satellite-to-satellite communication parsing. 2.5 Flash-Lite’s speed has enabled a 45% reduction in latency for critical onboard diagnostics and a 30% decrease in power consumption compared to their baseline models.
HeyGen uses AI to create avatars for video content and leverages Gemini 2.5 Flash-Lite to automate video planning, analyze and optimize content, and translate videos into over 180 languages. This allows them to provide global, personalized experiences for their users.
DocsHound turns product demos into documentation by using Gemini 2.5 Flash-Lite to process long videos and extract thousands of screenshots with low latency. This transforms footage into comprehensive documentation and training data for AI agents much faster than traditional methods.
Evertune helps brands understand how they are represented across AI models. Gemini 2.5 Flash-Lite is a game-changer for them, dramatically speeding up analysis and report generation. Its fast performance allows them to quickly scan and synthesize large volumes of model output to provide clients with dynamic, timely insights.

You can start using 2.5 Flash-Lite by specifying “gemini-2.5-flash-lite” in your code. If you are using the preview version, you can switch to “gemini-2.5-flash-lite”, which points to the same underlying model. We plan to remove the preview alias of Flash-Lite on August 25th.

Ready to start building? Try the stable version of Gemini 2.5 Flash-Lite now in Google AI Studio and Vertex AI.
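As a concrete illustration, here is a minimal sketch of calling the stable model from Python with the google-genai SDK. The model string "gemini-2.5-flash-lite" comes from the announcement above; the prompt, the environment-variable setup, and the thinking-budget parameter are assumptions to verify against the current SDK documentation.

```python
# Minimal sketch, not an official example: calling Gemini 2.5 Flash-Lite via the
# google-genai Python SDK. Assumes an API key is available in the environment
# (e.g. GEMINI_API_KEY); parameter names should be checked against current docs.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",  # stable model ID from the announcement
    contents="Classify this ticket as billing, technical, or other: 'My invoice total looks wrong.'",
    config=types.GenerateContentConfig(
        # Thinking can be toggled on for harder tasks; 0 keeps the fast, low-cost path.
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```

If you are building on Vertex AI instead of the Gemini API, the same model string should apply; only the client construction and project configuration differ.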
The Download: how to melt rocks, and what you need to know about AI
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This startup wants to use beams of energy to drill geothermal wells

Geothermal startup Quaise certainly has an unconventional approach when it comes to destroying rocks: it uses a new form of drilling technology to melt holes through them. The company hopes it’s the key to unlocking geothermal energy and making it feasible anywhere. Quaise’s technology could theoretically be used to tap into the Earth’s heat from anywhere on the globe. But some experts caution that reinventing drilling won’t be as simple, or as fast, as Quaise’s leadership hopes. Read the full story.

—Casey Crownhart
Five things you need to know about AI right now
—Will Douglas Heaven, senior editor for AI

Last month I gave a talk at SXSW London called “Five things you need to know about AI”—my personal picks for the five most important ideas in AI right now. I aimed the talk at a general audience, and it serves as a quick tour of how I’m thinking about AI in 2025. There’s some fun stuff in there. I even make jokes! You can now watch the video of my talk, but if you want to see the five I chose right now, here is a quick look at them. This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Why it’s so hard to make welfare AI fair

There are plenty of stories about AI that’s caused harm when deployed in sensitive situations, and in many of those cases, the systems were developed without much concern for what it meant to be fair or how to implement fairness. But the city of Amsterdam spent a lot of time and money to try to create ethical AI—in fact, it followed every recommendation in the responsible AI playbook. Yet when it deployed the system in the real world, it still couldn’t remove biases. So why did Amsterdam fail? And more importantly: Can this ever be done right?

Join our editor Amanda Silverman, investigative reporter Eileen Guo, and Gabriel Geiger, an investigative reporter from Lighthouse Reports, for a subscriber-only Roundtables conversation at 1pm ET on Wednesday July 30 to explore whether algorithms can ever be fair. Register here!
The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 America’s grand data center ambitions aren’t being realized
A major partnership between SoftBank and OpenAI hasn’t got off to a flying start. (WSJ $)
+ The setback hasn’t stopped OpenAI opening its first DC office. (Semafor)

2 OpenAI is partnering with the UK government
In a bid to increase its public services’ productivity and to drive economic growth. (BBC)
+ It all sounds pretty vague. (Engadget)

3 The battle for AI math supremacy is heating up
Google and OpenAI went head to head in a math competition—but only one played by the rules. (Axios)
+ The International Math Olympiad poses a unique challenge to AI models. (Ars Technica)
+ What’s next for AI and math. (MIT Technology Review)

4 Mark Zuckerberg’s secretive Hawaiian compound is getting bigger
The multi-billionaire is sinking millions of dollars into the project. (Wired $)

5 India’s back offices are meeting global demand for AI expertise
New ‘capability centers’ could help to improve the country’s technological prospects. (FT $)
+ The founder of Infosys believes the future of AI will be more democratic. (Rest of World)
+ Inside India’s scramble for AI independence. (MIT Technology Review)

6 A crime-tracking app will share videos with the NYPD
Public safety agencies will have access to footage shared on Citizen. (The Verge)
+ AI was supposed to make police bodycams better. What happened? (MIT Technology Review)

7 China has a problem with competition: there’s too much of it
Its government is making strides to crack down on price wars within sectors. (NYT $)
+ China’s Xiaomi is making waves across the world. (Economist $)

8 The metaverse is a tobacco marketer’s playground 🚬
Fed up of legal constraints, they’re already operating in unregulated spaces. (The Guardian)
+ Welcome to the oldest part of the metaverse. (MIT Technology Review)

9 How AI is shaking up physics
Models are suggesting outlandish ideas that actually work. (Quanta Magazine)

10 Tesla has opened a diner that resembles a spaceship
It’s technically a drive-thru that happens to sell Tesla merch. (TechCrunch)

Quote of the day

“If you can pick off the individuals for $100 million each and they’re good, it’s actually a bargain.”
—Entrepreneur Laszlo Bock tells Insider why he thinks the eye-watering sums Meta is reportedly offering top AI engineers are money well spent.

One more thing

The world’s first industrial-scale plant for green steel promises a cleaner future

As of 2023, nearly 2 billion metric tons of steel were being produced annually, enough to cover Manhattan in a layer more than 13 feet thick. Making this metal produces a huge amount of carbon dioxide. Overall, steelmaking accounts for around 8% of the world’s carbon emissions—one of the largest industrial emitters and far more than such sources as aviation. A handful of groups and companies are now making serious progress toward low- or zero-emission steel. Among them, the Swedish company Stegra stands out. The startup is currently building the first industrial-scale plant in the world to make green steel. But can it deliver on its promises? Read the full story.

—Douglas Main

This startup wants to use beams of energy to drill geothermal wells
A beam of energy hit the slab of rock, which quickly began to glow. Pieces cracked off, sparks ricocheted, and dust whirled around under a blast of air. From inside a modified trailer, I peeked through the window as a millimeter-wave drilling rig attached to an unassuming box truck melted a hole into a piece of basalt in less than two minutes. After the test was over, I stepped out of the trailer into the Houston heat. I could see a ring of black, glassy material stamped into the slab fragments, evidence of where the rock had melted. This rock-melting drilling technology from the geothermal startup Quaise is certainly unconventional. The company hopes it’s the key to unlocking geothermal energy and making it feasible anywhere. Geothermal power tends to work best in those parts of the world that have the right geology and heat close to the surface. Iceland and the western US, for example, are hot spots for this always-available renewable energy source because they have all the necessary ingredients. But by digging deep enough, companies could theoretically tap into the Earth’s heat from anywhere on the globe.
That’s a difficult task, though. In some places, accessing temperatures high enough to efficiently generate electricity would require drilling miles and miles beneath the surface. Often, that would mean going through very hard rock, like granite. Quaise’s proposed solution is a new mode of drilling that eschews the traditional technique of scraping into rock with a hard drill bit. Instead, the company plans to use a gyrotron, a device that emits high-frequency electromagnetic radiation. Today, the fusion power industry uses gyrotrons to heat plasma to 100 million °C, but Quaise plans to use them to blast, melt, and vaporize rock. This could, in theory, make drilling faster and more economical, allowing for geothermal energy to be accessed anywhere.
Since Quaise’s founding in 2018, the company has demonstrated that its systems work in the controlled conditions of the laboratory, and it has started trials in a semi-controlled environment, including the backyard of its Houston headquarters. Now these efforts are leaving the lab, and the team is taking gyrotron drilling technology to a quarry to test it in real-world conditions. Some experts caution that reinventing drilling won’t be as simple, or as fast, as Quaise’s leadership hopes. The startup is also attempting to raise a large funding round this year, at a time when economic uncertainty is slowing investment and the US climate technology industry is in a difficult spot politically because of policies like tariffs and a slowdown in government support. Quaise’s big idea aims to accelerate an old source of renewable energy. This make-or-break moment might determine how far that idea can go. Blasting through Rough calculations from the geothermal industry suggest that enough energy is stored inside the Earth to meet our energy demands for tens or even hundreds of thousands of years, says Matthew Houde, cofounder and chief of staff at Quaise. After that, other sources like fusion should be available, “assuming we continue going on that long, so to speak,” he quips. “We want to be able to scale this style of geothermal beyond the locations where we’re able to readily access those temperatures today with conventional drilling,” Houde says. The key, he adds, is simply going deep enough: “If we can scale those depths to 10 to 20 kilometers, then we can enable super-hot geothermal to be worldwide accessible.” Though that’s technically possible, there are few examples of humans drilling close to this depth. One research project that began in 1970 in the former Soviet Union reached just over 12 kilometers, but it took nearly 20 years and was incredibly expensive. Quaise hopes to speed up drilling and cut its cost, Houde says. The company’s goal is to drill through rock at a rate of between three and five meters per hour of steady operation. One key factor slowing down many operations that drill through hard rocks like granite is nonproductive time. For example, equipment frequently needs to be brought all the way back up to the surface for repairs or to replace drill bits. Quaise’s key to potentially changing that is its gyrotron. The device emits millimeter waves, beams of energy with wavelengths that fall between microwaves and infrared waves. It’s a bit like a laser, but the beam is not visible to the human eye.
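To put those figures in perspective, a back-of-envelope calculation using only the numbers quoted above (a 10-to-20-kilometer target depth and a steady rate of three to five meters per hour) gives the implied time spent actually cutting rock:

$$
t = \frac{\text{depth}}{\text{rate}}, \qquad
\frac{10\,000\ \text{m}}{5\ \text{m/h}} = 2\,000\ \text{h} \approx 83\ \text{days}, \qquad
\frac{20\,000\ \text{m}}{3\ \text{m/h}} \approx 6\,667\ \text{h} \approx 278\ \text{days}.
$$

That is roughly three to nine months of continuous boring per well, which helps explain why nonproductive time (trips back to the surface for repairs and bit changes) looms so large in overall drilling speed.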
Quaise’s goal is to heat up the target rock, effectively drilling it away. The gyrotron beams waves at a target rock via a waveguide, a hollow metal tube that directs the energy to the right spot. (One of the company’s main technological challenges is to avoid accidentally making plasma, an ionized, superheated state of matter, as it can waste energy and damage key equipment like the waveguide.) Here’s how it works in practice: When Quaise’s rig is drilling a hole, the tip of the waveguide is positioned a foot or so away from the rock it’s targeting. The gyrotron lets out a burst of millimeter waves for about a minute. They travel down the waveguide and hit the target rock, which heats up and then cracks, melts, or even vaporizes. Then the beam stops, and the drill bit at the end of the waveguide is lowered to the surface of the rock, rotating and scraping off broken shards and melted bits of rock as it descends. A steady blast of air carries the debris up to the surface, and the process repeats. The energy in the millimeter waves does the hard work, and the scraping and compressed air help clear the fractured or melted material away.

This system is what I saw in action at the company’s Houston headquarters. The drilling rig in the yard is a small setup, something like what a construction company might use to drill micro piles for a foundation or what researchers would use to take geological samples. In total, the gyrotron has a power of 100 kilowatts. A cooling system helps the superconducting magnet in the gyrotron reach the necessary temperature (about -200 °C), and a filtration system catches the debris that sloughs off samples. Soon after my visit, this backyard setup was packed up and shipped to central Texas to be used for further field testing in a rock quarry. The company announced in July that it had used that rig to drill a 100-meter-deep hole at that field test site.

Quaise isn’t the first to develop nonmechanical drilling, says Roland Horne, head of the geothermal program at Stanford University. “Burning holes in rocks is impressive. However, that’s not the whole of what’s involved in drilling,” he says. The operation will need to be able to survive the high temperatures and pressures at the bottom of wells as they’re drilled, he says. So far, the company has found success drilling holes into columns of rock inside metal casings, as well as at the quarry in its field trials. But there’s a long road between drilling into predictable material in a relatively predictable environment and creating a miles-deep geothermal well.

Rocky roads

In April, Quaise fully integrated its second 100-kilowatt gyrotron onto an oil and gas rig owned by the company’s investor and technology partner Nabors. This rig is the sort that would typically be used for training or engineering development, and it’s set up along with a row of other rigs at the Nabors headquarters, just across town from the Quaise lab. At 182 feet high, the top is visible above the office building from the parking lot.
When I visited in April, the company was still completing initial tests, using special thermal paper and firing short blasts to test the setup. In May the company tested this integrated rig, drilling a hole four inches in diameter and 30 feet deep. Another test in June reached a depth of 40 feet. These holes were drilled into columns of basalt that had been lowered into the ground as a test material. While the company tests its 100-kilowatt systems at the rig and the quarry, the next step is an even larger system, which features a gyrotron that’s 10 times more powerful. This one-megawatt system will drill larger holes, over eight inches across, and represents the commercial-scale version of the company’s technology. Drilling tests are set to begin with this larger drill in 2026.
The one-megawatt system actually needs a little over three megawatts of power overall, including the energy needed to run support equipment like cooling systems and the compressor that blows air into the hole, carrying the rock dust back up to the surface. That power demand is similar to what an oil and gas rig requires today. Quaise is in the process of setting up a pilot plant in Oregon, basically on the side of a volcano, says Trenton Cladouhos, the company’s vice president of geothermal resource development. This project will use conventional drilling, and its main purpose is to show that Quaise can build and run a geothermal plant, Cladouhos says. The company is building an exploration well this year and plans to begin drilling production wells (those that can eventually be used to generate electricity) in 2026. That pilot project will reach about 20 megawatts of power with the first few wells, operating on rock that’s around 350 °C. The company plans to have it operational as early as 2028. Quaise’s strategy with the Oregon project is to show that it can use super-hot rocks to produce geothermal power efficiently, says CEO Carlos Araque. After it fires up the plant and begins producing electricity, the company can go back in and deepen the holes with millimeter-wave drilling in the future, he adds.

[Caption: A drilling test shows Quaise’s millimeter-wave technology drilling into a piece of granite.]

Araque says the company already has some customers lined up for the energy it’ll produce, though he declined to name them, saying only that one was a big tech company, and there’s a utility involved as well. But the startup will need more capital to finish this project and complete its testing with the larger, one-megawatt gyrotron. And uncertainty is floating around in climate tech, given the Trump administration’s tariffs and rollback of financial support for the sector (though geothermal has been relatively unscathed).
Quaise still has some technical barriers to overcome before it begins building commercial power plants. One potential hurdle: drilling in different directions. Right now, millimeter-wave drilling can go in a straight line, straight down. Developing a geothermal plant like the one at the Oregon site will likely require what’s called directional drilling, the ability to drill in directions other than vertical. And the company will likely face challenges as it transitions from lab testing to field trials.

One key challenge for geothermal technology companies attempting to operate at this depth will be keeping wells functional long enough to keep a power plant operating, says Jefferson Tester, a professor at Cornell University and an expert in geothermal energy. Quaise’s technology is very aspirational, Tester says, and it can be difficult for new ideas in geothermal to compete economically. “It’s eventually all about cost,” he says. And companies with ambitious ideas run the risk that their investors will run out of patience before they can develop their technology enough to make it onto the grid.

“There’s a lot more to learn—I mean, we’re reinventing drilling,” says Steve Jeske, a project manager at Quaise. “It seems like it shouldn’t work, but it does.”

Five things you need to know about AI right now
Last month I gave a talk at SXSW London called “Five things you need to know about AI”—my personal picks for the five most important ideas in AI right now. I aimed the talk at a general audience, and it serves as a quick tour of how I’m thinking about AI in 2025. I’m sharing it here in case you’re interested. I think the talk has something for everyone. There’s some fun stuff in there. I even make jokes! The video is now available (thank you, SXSW London). Below is a quick look at my top five. Let me know if you would have picked different ones!

1. Generative AI is now so good it’s scary. Maybe you think that’s obvious. But I am constantly having to check my assumptions about how fast this technology is progressing—and it’s my job to keep up. A few months ago, my colleague—and your regular Algorithm writer—James O’Donnell shared 10 music tracks with the MIT Technology Review editorial team and challenged us to pick which ones had been produced using generative AI and which had been made by people. Pretty much everybody did worse than chance. What’s happening with music is happening across media, from code to robotics to protein synthesis to video. Just look at what people are doing with new video-generation tools like Google DeepMind’s Veo 3. And this technology is being put into everything. My point here? Whether you think AI is the best thing to happen to us or the worst, do not underestimate it. It’s good, and it’s getting better.
2. Hallucination is a feature, not a bug. Let’s not forget the fails. When AI makes up stuff, we call it hallucination. Think of customer service bots offering nonexistent refunds, lawyers submitting briefs filled with nonexistent cases, or RFK Jr.’s government department publishing a report that cites nonexistent academic papers. You’ll hear a lot of talk that makes hallucination sound like it’s a problem we need to fix. The more accurate way to think about hallucination is that this is exactly what generative AI does—what it’s meant to do—all the time. Generative models are trained to make things up. What’s remarkable is not that they make up nonsense, but that the nonsense they make up so often matches reality. Why does this matter? First, we need to be aware of what this technology can and can’t do. But also: Don’t hold out for a future version that doesn’t hallucinate.

3. AI is power hungry and getting hungrier. You’ve probably heard that AI is power hungry. But a lot of that reputation comes from the amount of electricity it takes to train these giant models, though giant models only get trained every so often. What’s changed is that these models are now being used by hundreds of millions of people every day. And while using a model takes far less energy than training one, the energy costs ramp up massively with those kinds of user numbers. ChatGPT, for example, has 400 million weekly users. That makes it the fifth-most-visited website in the world, just after Instagram and ahead of X. Other chatbots are catching up. So it’s no surprise that tech companies are racing to build new data centers in the desert and revamp power grids. The truth is we’ve been in the dark about exactly how much energy it takes to fuel this boom because none of the major companies building this technology have shared much information about it. That’s starting to change, however. Several of my colleagues spent months working with researchers to crunch the numbers for some open source versions of this tech. (Do check out what they found.)
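To make that scaling argument concrete, here is a back-of-the-envelope sketch in Python. The 400 million weekly users figure is from the piece; the queries-per-user and energy-per-query values are purely hypothetical placeholders, precisely because, as noted above, the companies haven’t published real numbers.

```python
# Back-of-the-envelope: how modest per-query energy adds up at a huge user base.
# 400 million weekly users is from the piece; the other inputs are hypothetical.

weekly_users = 400_000_000          # from the article
queries_per_user_per_week = 20      # hypothetical
energy_per_query_wh = 0.3           # hypothetical watt-hours per query

weekly_energy_mwh = weekly_users * queries_per_user_per_week * energy_per_query_wh / 1_000_000
print(f"~{weekly_energy_mwh:,.0f} MWh per week")              # ~2,400 MWh/week under these assumptions
print(f"~{weekly_energy_mwh * 52 / 1000:,.1f} GWh per year")  # ~125 GWh/year under these assumptions
```

Even with small per-query figures, the totals land in the gigawatt-hour range once hundreds of millions of users are involved, which is the point the talk was making.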
4. Nobody knows exactly how large language models work. Sure, we know how to build them. We know how to make them work really well—see no. 1 on this list. But how they do what they do is still an unsolved mystery. It’s like these things have arrived from outer space and scientists are poking and prodding them from the outside to figure out what they really are. It’s incredible to think that never before has a mass-market technology used by billions of people been so little understood. Why does that matter? Well, until we understand them better we won’t know exactly what they can and can’t do. We won’t know how to control their behavior. We won’t fully understand hallucinations.

5. AGI doesn’t mean anything. Not long ago, talk of AGI was fringe, and mainstream researchers were embarrassed to bring it up. But as AI has got better and far more lucrative, serious people are happy to insist they’re about to create it. Whatever it is. AGI—or artificial general intelligence—has come to mean something like: AI that can match the performance of humans on a wide range of cognitive tasks. But what does that mean? How do we measure performance? Which humans? How wide a range of tasks? And performance on cognitive tasks is just another way of saying intelligence—so the definition is circular anyway. Essentially, when people refer to AGI they now tend to just mean AI, but better than what we have today. There’s this absolute faith in the progress of AI. It’s gotten better in the past, so it will continue to get better. But there is zero evidence that this will actually play out.

So where does that leave us? We are building machines that are getting very good at mimicking some of the things people do, but the technology still has serious flaws. And we’re only just figuring out how it actually works. Here’s how I think about AI: We have built machines with humanlike behavior, but we haven’t shrugged off the habit of imagining a humanlike mind behind them. This leads to exaggerated assumptions about what AI can do and plays into the wider culture wars between techno-optimists and techno-skeptics. It’s right to be amazed by this technology. It’s also right to be skeptical of many of the things said about it. It’s still very early days, and it’s all up for grabs. This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Saipem, Subsea7 Agree Merger
Saipem SpA signed a binding merger deal to acquire Subsea7 SA and thereafter rebrand as Saipem7, the companies said Thursday, following an initial agreement last February. Subsea7 shareholders would receive 6.688 Saipem shares for each Subsea7 unit. The combined company’s share capital would be equally divided between the shareholders of Italian state-backed Saipem and Luxembourg-registered Subsea7, assuming all the latter’s shareholders participate in the transaction, a joint statement said. As the biggest shareholders of Saipem, Eni SpA and CDP Equity SpA would respectively own about 10.6 percent and 6.4 percent of Saipem7. Siem Industries SA, Subsea7’s top shareholder, would own around 11.8 percent. The parties expect to complete the merger in the latter half of 2026, subject to regulatory approvals, yes votes by the shareholders of both Saipem and Subsea7 and other customary conditions. Eni, CDP Equity and Siem Industries signed an agreement to vote for the combination. As part of the tripartite agreement, Eni and CDP Equity are entitled to designate Saipem7’s chief executive, who is expected to be Alessandro Puliti, Saipem’s chief executive and general manager. Siem Industries has been given the right to designate Saipem7’s chair, who is expected to be Subsea7 chair Kristian Siem. These designations would still be subject to approval by the combined company’s board, according to the statement. The resulting entity would inherit projects in over 60 countries and operate “a full spectrum of offshore and onshore services, from drilling, engineering and construction to life-of-field services and decommissioning, with an increased ability to optimize project scheduling for clients in oil, gas, carbon capture and renewable energy”, the statement said. Saipem7 would have more than 60 construction vessels able to perform “shallow-water to ultra-deepwater operations, utilising a full portfolio of heavy lift, high-end J-lay, S-lay and reel-lay rigid pipeline solutions, flexible pipe and umbilical
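As a quick illustration of how the announced exchange ratio works, here is a small Python sketch. The 6.688 ratio is from the joint statement; the size of the shareholder’s holding is a made-up example.

```python
# Illustrative only: converts a hypothetical Subsea7 holding into Saipem shares
# using the announced exchange ratio. The holding size below is made up.

EXCHANGE_RATIO = 6.688  # Saipem shares offered per Subsea7 share, per the announcement

def saipem7_shares(subsea7_shares: int) -> float:
    """Return the number of Saipem shares received for a given Subsea7 holding."""
    return subsea7_shares * EXCHANGE_RATIO

if __name__ == "__main__":
    holding = 1_000  # hypothetical Subsea7 position
    print(f"{holding} Subsea7 shares -> {saipem7_shares(holding):,.0f} Saipem7 shares")
```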

CISPE seeks to annul Broadcom’s VMware takeover
However, Forrester Research Senior Analyst Dario Maisto said, “Broadcom VMware commercial practices have been under the lenses for quite some time now. While we may agree or disagree with the European Commission’s decision to approve Broadcom’s acquisition of VMware, the fact is that a number of European organizations are suffering from unilateral price increases and arbitrary closure of services.” European organizations, he pointed out, “are too much dependent on IT vendors that act as monopolies or oligopolies in the best case scenario. Something like the sought-for Buy European Act may be a way to promote better competition in the European cloud and IT markets.” The appeal, said Maisto, “is a long term play, though. In the short term, CISPE should keep seeking a fairer cloud market in Europe. Results will come sooner or later, as it was for the Microsoft case.”

‘Time to find a new dance partner’

John Annand, digital infrastructure practice lead at Info-Tech Research Group, is of two minds on the topic. “No doubt that what Broadcom is doing is manifestly geared towards their own benefit at the expense of (soon to be in many cases former) partners and customers,” he said. “When Broadcom completed the acquisition of VMWare, they promised an end to special back-room pricing deals that gave differential discounts to preferred hardware or public cloud providers.” Basically, he said, “Broadcom changed the license deals for all their cloud provider partners, and they did so equally. However, a subset of those partners are [part of] CISPE, and all the members of that subset are on the small side. So, while the vast majority of CSPs worldwide are affected negatively by the Broadcom changes, for CISPE members, 100% of them will be negatively affected. Does this affect competition? Sure.” Annand also noted that, as of October, European clients

Anthropic unveils ‘auditing agents’ to test for AI misalignment
When models attempt to get their way or become overly accommodating to the user, it can mean trouble for enterprises. Therefore, it’s essential that, in addition to performance evaluations, organizations conduct alignment testing. However, alignment audits often present two major challenges: scalability and validation. Alignment testing is time-consuming for human researchers, and it’s hard to be sure an audit has caught everything. In a paper, Anthropic researchers said they developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers said these agents, which were created during the pre-deployment testing of Claude Opus 4, improved alignment validation tests and allowed researchers to run multiple parallel audits at scale. Anthropic also released a replication of its audit agents on GitHub. “New Anthropic research: Building and evaluating alignment auditing agents. We developed three AI agents to autonomously complete alignment auditing tasks. In testing, our agents successfully uncovered hidden goals, built safety evaluations, and surfaced concerning behaviors,” Anthropic (@AnthropicAI) posted on July 24, 2025. “We introduce three agents that autonomously complete alignment auditing tasks. We also introduce three environments that formalize alignment auditing workflows as auditing games, and use them to evaluate our agents,” the researchers said in the paper. The three agents they explored were: Tool-using investigator agent for

Freed says 20,000 clinicians are using its medical AI transcription ‘scribe,’ but competition is rising fast
Even generative AI critics and detractors have to admit the technology is great for something: transcription. If you’ve joined a meeting on Zoom, Microsoft Teams, Google Meet or another video call platform of your choice at any point in the last year or so, you’ve likely noticed an increased number of AI notetakers joining the conference call as well. Indeed, not only do these platforms all have AI transcription features built in, but there are of course other standalone services like Otter AI (used by VentureBeat along with the Google Workspace suite of apps), and models such as OpenAI’s new gpt-4o-transcribe and the older open-source Whisper, aiOla, and many others with specific niches and roles. One such startup is San Francisco-based Freed AI, co-founded in 2022 by former Facebook engineers Erez Druk and Andrey Bannikov, now its CEO and CTO, respectively. The idea was simple: give doctors and medical professionals a way to automatically transcribe their conversations with patients, capture accurate health-specific terminology, and extract insights and action plans from the conversations without the physician having to lift a finger.
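For readers curious what the underlying speech-to-text step looks like in code, here is a minimal sketch using the open-source Whisper model mentioned above (via the openai-whisper Python package). The audio file name is a placeholder, and a real medical scribe like Freed layers specialty terminology handling, summarization, and structured note generation on top of this raw transcription step.

```python
# Minimal sketch: raw speech-to-text with the open-source Whisper model.
# This covers only the transcription step; scribe products add medical
# terminology handling and structured note generation on top of it.
# Requires: pip install openai-whisper (plus ffmpeg installed on the system).

import whisper

model = whisper.load_model("base")              # small, general-purpose checkpoint
result = model.transcribe("patient_visit.mp3")  # placeholder file name
print(result["text"])                           # plain-text transcript
```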
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on a week of news.