Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI


Bitcoin


Datacenter


Energy



Featured Articles

TechnipFMC Bags Contract for Newest Stabroek Project

TechnipFMC PLC said it had won a “substantial contract” from Exxon Mobil Corp to deliver subsea production systems for the Hammerhead field in the Stabroek block offshore Guyana. “TechnipFMC will provide project management, engineering and manufacturing of subsea production systems supporting both production and water injection capabilities”, the integrated energy contractor said in a press release. “The subsea architecture will include products from the Subsea 2.0® platform, including subsea trees, manifolds and associated controls”. TechnipFMC defines a substantial contract as one valued between $250 million and $500 million. “Hammerhead is our seventh greenfield project award from ExxonMobil Guyana since the first development was sanctioned in 2017”, said Jonathan Landes, president for subsea at TechnipFMC. “Our continued success stems from our ability to provide schedule certainty, built on our proven execution and the benefits of Subsea 2.0®”. Last week ExxonMobil announced a positive FID (final investment decision) on Hammerhead, earmarking $6.8 billion for the 150,000 barrels per day (bpd) development. Targeted to start production in 2029, Hammerhead will grow Stabroek’s production capacity to 1.5 million bpd, the operator said in a statement September 23. Hammerhead is the seventh project approved in Stabroek, with the fourth and biggest – the 250,000-bpd Yellowtail – started up earlier this year through the floating production, storage and offloading (FPSO) vessel ONE GUYANA. “ExxonMobil is safely producing approximately 650,000 barrels of oil per day from the Stabroek block”, ExxonMobil said. “With the recent successful startup of a fourth FPSO, the ONE GUYANA, the company anticipates growing production to more than 900,000 barrels of oil per day by the end of the year. “Construction is underway for the fifth and sixth approved projects, Uaru and Whiptail, with Uaru anticipated to start production in 2026, and Whiptail is anticipated for startup in 2027”. Hammerhead’s development plan includes 18 production and injection wells. The Hammerhead FPSO

Read More »

GeoPark to Acquire Two Vaca Muerta Blocks from Pluspetrol

GeoPark Ltd has signed an agreement to purchase the producing Loma Jarillosa Este and Puesto Silva Oeste blocks in the Vaca Muerta formation in Argentina’s Neuquen Province from Pluspetrol SA for $115 million. Concurrently, Bogota-based GeoPark penned two agreements with the provincial government that “include the issuance of a new unconventional exploitation license for the Puesto Silva Oeste Block that requires GeoPark to transfer a five percent working interest (WI) to Gas y Petroleo del Neuquen SA (GyP), therefore resulting in a 95 percent operated WI in that block”, GeoPark said in a statement on its website. Expected to be completed by year-end, the transaction with Buenos Aires-based Pluspetrol will increase GeoPark’s pro forma production by 1,700-2,000 barrels of oil equivalent per day (boepd) – 95 percent oil and five percent gas – to 30,000 boepd, the statement said. “Current production is processed with a battery that can handle up to 6,000 bopd at the Loma Jarillosa Este Block”, the statement said. “GeoPark plans to develop the two blocks as a hub, with a new central processing facility to be built at the Puesto Silva Oeste Block with an expected capacity of approximately 20,000 bopd and a pipeline connecting the two blocks. Construction of this infrastructure is projected to start in 2026 and be online in time to support the anticipated plateau production”. GeoPark eyes “50-55 additional wells across 15 pads, unlocking more than 60 MMboe gross recoverable volumes”. Loma Jarillosa Este and Puesto Silva Oeste hold estimated proven and probable reserves of 25.8 million barrels of oil equivalent (MMboe) and estimated 2C contingent resources of 44.2 MMboe, GeoPark said. The Loma Jarillosa Este side of the transaction involves the transfer of Pluspetrol’s concession of 6,054 acres, with an exploitation license valid until 2057. The Puesto Silva Oeste portion of the transaction involves

Read More »

Eni, Seri Industrial Launch Battery Production Project in Brindisi

Eni SpA and Seri Industrial SpA have kicked off activities for a project to build a production hub for stationary lithium batteries in the Italian industrial area of Brindisi. The project is planned to produce over eight gigawatt hours a year of lithium-iron-phosphate batteries, primarily for stationary energy storage, using a water-based process. “The project is currently in the engineering phase and undergoing economic, financial and authorization assessments, which are expected to be completed by the first quarter of 2026, before moving into the execution phase”, Italy’s state-backed Eni said in a press release. The project is under Eni Storage Systems, a joint venture between Eni (50 percent plus one share) and Seri Industrial subsidiary Fib. The project “confirms the partnership between Eni and Seri Industrial in the integrated management of the Brindisi site, together with the plant that Fib is building in Teverola (in the province of Caserta)”, Eni said. “At the Brindisi site, industrial activities will also include the production of cathode active material – a lithium-iron-phosphate that stores and releases lithium ions in the cathode during charge and discharge cycles – as well as the assembly of batteries into BESS (battery energy storage systems), serving both the Brindisi and Teverola facilities. “In future, the Brindisi site will also implement battery recycling, which, together with the production of cathode active material, will be available to other operators in the sector. “Seri Industrial and Eni aim to gain more than 10 percent of the European stationary battery market, creating an integrated, state-of-the-art hub between Brindisi and Teverola to drive sustainability and the energy transition”. In another battery project, Eni early this year completed the construction of its largest battery storage system, the 200-megawatt Guajillo plant in Webb County, Texas. Equipped with lithium-ion LFP batteries, the facility was to start commercial operation by

Read More »

Oil Suffers Steepest Fall Since June

Oil declined on signals that OPEC+ will hike production again in November, tempering last week’s rally. West Texas Intermediate fell 3.4% to settle near $63 a barrel, the biggest drop since June, while Brent closed below $70. The OPEC+ alliance led by Saudi Arabia is considering raising output by at least as much as the 137,000 barrel-a-day hike scheduled for next month, according to people familiar with the plans. While such an increase could add supply to a market in which there’s already expected to be an excess, it would also bring further scrutiny to which of the group’s members are running into capacity limits. “We view a repeat of the incremental 137,000-barrel-a-day addition for November as the most likely outcome,” RBC Capital Markets LLC analysts including Helima Croft wrote in a note, referring to the decision likely to be taken at the group’s Oct. 5 meeting. “Given that many producers, excluding Saudi Arabia, have essentially hit their production ceilings, future OPEC+ supply increases will be materially lower than the announced headline numbers,” the analysts added. Crude remains on track for monthly and quarterly gains, even as the Organization of the Petroleum Exporting Countries and its allies have been pursuing a strategy to reclaim market share rather than managing prices. Oil has been underpinned by robust buying for stockpiling in China, as well as on geopolitical tensions. Today’s slide also reflects a pullback from last week’s highs, when traders covered long positions ahead of the weekend to hedge against mounting threats to Russian energy infrastructure. The International Energy Agency has projected a record oversupply in 2026 as OPEC+ continues to revive production, and as supply climbs from the group’s rivals. Goldman Sachs Group Inc., meanwhile, has said it sees Brent falling to the mid-$50s next year, despite crude stockpiling by

Read More »

Perry’s Energy REIT Fermi Boosts IPO Target to $715MM

Fermi Inc. increased the number of shares in its initial public offering and is seeking to raise as much as $715 million, up from a target of as much as $550 million last week. The company co-founded by former US Secretary of Energy Rick Perry plans to market 32.5 million shares for $18 to $22 each, according to a filing Monday with the US Securities and Exchange Commission. It previously planned to market 25 million shares in the same price range. At the top of that range, the company would have a market value of $13 billion based on the 593.2 million outstanding shares listed in its latest filing, down from 598.1 million shares previously. The offering is expected to price on Sept. 30 for trading on Oct. 1, according to terms seen by Bloomberg News. The company has filed for an IPO on the Nasdaq Global Select Market and said that it plans to apply for a listing on the London Stock Exchange. Fermi’s “Project Matador” is a development-stage advanced energy and intelligence campus with more than 5,000 acres of land leased from Texas Tech University. The company hopes to draw data center and hyperscaler tenants, and it expects to have one gigawatt of power online by the end of next year. Perry, who was governor of Texas prior to serving in President Donald Trump’s cabinet during his first term, is a director at the company. Fermi was founded in January. The infrastructure and real estate firm aims to deliver up to 11 gigawatts of power to on-site computing centers by 2038 through a mix of natural gas, solar energy and nuclear power, according to filings. The Amarillo, Texas-based company reported a net loss of $6.37 million from its inception in January through the end of June. The offering

Read More »

Treasuries Hold Gains Amid Oil Price Slump

Treasuries held small gains in early US trading Monday, supported by a slump in oil prices and a rally in UK government bonds, and by anticipation of buying into Tuesday’s month-end index rebalancing. Yields were lower, the two-year by about a basis point and long maturities by as much as three basis points, remaining inside last week’s ranges. US benchmark crude oil futures were down about 2% on signs OPEC+ will hike production again in November. The prospect of a US government shutdown beginning Wednesday also has implications for the Treasury market, as shutdowns are associated with gains for bonds based on their potential to restrain the economy. The market racked up gains even as Cleveland Fed President Beth Hammack — who becomes a voting member of the central bank’s rate-setting committee next year — reiterated her view that inflation remains too high to warrant cutting interest rates. Futures markets continue to anticipate about 100 basis points of additional Fed easing over the next 12 months. Expectations for Fed rate cuts rest mainly on signs of stress in the US labor market, where job creation has slowed precipitously in recent months. September data is set to be released on Friday. Tuesday’s month-end bond index rebalancing — to add eligible bonds created during the month and remove those that no longer fit the index criteria — typically drives buying by passive and other index-tracking investment funds that can support the market if their needs exceed expectations. The rebalancing will increase the duration of the Bloomberg Treasury index by an estimated 0.06 year, less than the average for September over the past decade.

Read More »


ADNOC Seals Rio Grande LNG Stake Acquisition

Abu Dhabi National Oil Co PJSC (ADNOC), through its global investment arm XRG PJSC, has completed the purchase of an 11.7 percent stake in phase I of NextDecade Corp’s Rio Grande LNG project in Brownsville, Texas. “The transaction was undertaken through an investment vehicle of GIP [BlackRock Inc’s Global Infrastructure Partners], with XRG acquiring a portion of GIP’s existing stake”, XRG said in a statement on its website. “The transaction, initially announced in May 2024, is the company’s first gas infrastructure investment in the U.S., and reflects XRG’s long-term investment plans in the country”, XRG said. “As LNG demand is projected to grow by 60 percent by 2050, the investment in Rio Grande LNG advances XRG’s strategy to build a leading global gas and LNG business to meet structural demand from industry, AI and broader economic growth”, it said. XRG aims to build a top-five integrated gas and liquefied natural gas (LNG) business with a capacity of 20-25 million metric tons per annum (MMtpa) by 2035, as announced by the company June 3. ADNOC launched XRG late last year to drive the United Arab Emirates’ expansion in the natural gas, low-carbon energy and chemical markets. Rio Grande LNG’s first phase, which comprises the first three of five federally approved liquefaction trains, is under construction. ADNOC has also committed to buying 1.9 MMtpa from the fourth train. Earlier this month NextDecade and its co-venturers announced a positive FID (final investment decision) on train IV and issued a notice for contractor Bechtel Energy Inc to proceed to construction. According to a Department of Energy (DOE) order dated August 13, 2020, amending Rio Grande LNG’s export authorization, the facility’s five trains each have a nominal capacity of 5.4 MMtpa. However, the Houston, Texas-based developer has said phase I would be capable of up to about

Read More »

Equinor Starts Phase 2 of Subsea Compression Project

Equinor ASA said it has begun phase 2 of its Asgard subsea compression project in the Asgard and Mikkel licenses in the Norwegian Sea along with its partners. The project will help maintain production from the field by increasing the pressure in the pipelines between the wells and the Asgard B platform, Equinor said in a news release. Including phase 1, the recovery rate from the Mikkel and Midgard fields is expected to increase to 90 percent, or an additional 306 million barrels of oil equivalent, because of the compressor plant, the company said. The plan for development and operation (PDO) for Asgard subsea compression was approved by the authorities in 2012, and the first phase came on stream in 2015. The field came on stream with Asgard A in 1999 and Asgard B in 2000, according to the release. In the PDO, Equinor stated that there would be a need for increased pressure in the long term to compensate for the pressure drop in the reservoirs. The first compressor module in phase 2 was replaced in 2023, and the second and final module has now been installed at a depth of 886 feet (270 meters), the company said. Equinor said that the project is the world’s first facility for gas compression on the seabed and “the result of extensive technological development”. “In this project, Equinor, together with partners and suppliers, has further developed and qualified the next generation of compressor modules,” Trond Bokn, Equinor’s senior vice president for project development, said. “The technology allows us to recover more gas from producing fields. Good resource utilization is important to maintain high and stable production from the Norwegian continental shelf”. “The compressor system has produced stably for ten years with almost 100 percent uptime. The system has so

Read More »

North America Adds Rigs for 4 Straight Weeks

North America added eight rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was released on September 26. The U.S. added seven rigs and Canada added one rig week on week, taking the total North America rig count up to 739, comprising 549 rigs from the U.S. and 190 rigs from Canada, the count outlined. Of the total U.S. rig count of 549, 532 rigs are categorized as land rigs, 14 as offshore rigs, and three as inland water rigs. The total U.S. rig count is made up of 424 oil rigs, 117 gas rigs, and eight miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 478 horizontal rigs, 57 directional rigs, and 14 vertical rigs. Week on week, the U.S. offshore and inland water rig counts each added one rig and the country’s land rig count increased by five, Baker Hughes highlighted. The U.S. oil rig count increased by six and its miscellaneous rig count rose by two, while its gas rig count dropped by one week on week, the count showed. The U.S. horizontal rig count increased by five and its vertical rig count rose by three, week on week, and the country’s directional rig count dropped by one during the same period, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, Texas added four rigs and Louisiana, New Mexico, and Ohio each added one rig. A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the Eagle Ford basin added three rigs, the Utica basin added one rig, and the Permian basin dropped one rig. Canada’s total rig count of 190 is made up of 129

Read More »

La Caisse to Acquire Australian Renewable Energy Firm Edify Energy

Global investment group La Caisse said it has entered into a binding agreement to acquire renewable energy and battery storage company Edify Energy and support its next phase of growth. Once completed, the transaction, together with further equity capital funding, is expected to represent an investment of around $0.72 billion (CAD 1 billion). The investment will fund the acquisition of Edify and provide equity to finance two ready-to-build integrated solar and battery energy storage system (BESS) hybrid projects totaling 900 megawatts (MW) / 3,600 megawatt-hours (MWh) for its offtake partners, Rio Tinto and the Commonwealth of Australia, La Caisse said in a news release. The transaction will also support Edify’s pipeline of hybrid and battery storage projects of more than 11 gigawatts (GW), according to the release. “The agreement with La Caisse is a pivotal moment for Edify, providing balance sheet strength to seriously ramp up Edify’s speed of execution of firm dispatchable green generators,” John Cole, Edify founder and executive chairman, said. “I am so very proud of the business that Edify has become and the exceptional team that creates and delivers the solutions to meet and exceed the needs of our customers and the grid. In La Caisse, we have found the perfect owner to supercharge the business and take Edify’s enviable market position to the next level,” Cole added.  “Our acquisition of Edify highlights the track record and expertise of its team and a commitment to Australia’s clean energy future,” Emmanuel Jaclot, executive vice president and head of infrastructure and sustainability at La Caisse, said. “With our long-term capital and sustainability expertise, Edify will accelerate the delivery of large-scale renewable and storage projects that strengthen the grid and advance decarbonization. This reflects our strong conviction in the critical role these technologies will continue to play in the

Read More »

Russian Refineries Are Sitting Ducks

Russian refineries are sitting ducks in this modern drone war. That’s what Skandinaviska Enskilda Banken AB (SEB) Chief Commodities Analyst Bjarne Schieldrop said in a recent report sent to Rigzone by the SEB team, adding that the oil market “is starting to focus increasingly on the successful Ukrainian attacks on Russian oil infrastructure, especially the attacks on Russian refineries”. “Refineries are highly complex and much harder to repair than simple crude oil facilities like export pipelines, ports, and hubs,” Schieldrop said in the report. “It can take months and months to repair complex refineries. It is thus mainly Russian oil products which will be hurt by this. First oil product exports will go down, thereafter Russia will have to ration oil product consumption domestically,” he added. “Russian crude exports may not be hurt as much. Its crude exports could actually go up as its capacity to process crude goes down,” Schieldrop continued. In a separate SEB report, which was referred to in Schieldrop’s report, SEB Chief Emerging Markets Strategist Erik Meyersson stated that Ukraine has amped up the pressure on Putin with a number of drone strikes on Russia’s energy infrastructure. “Sustained attacks will delay repairs, lower oil products exports, with growing passthrough to retail prices but, absent larger munitions used, causing lasting damage will be challenging,” Meyersson said in that report. “Russia is likely to pursue its own corresponding attacks in coming weeks and despite higher interception rates on drones, Ukraine’s energy infrastructure remains vulnerable to especially Russian ballistic missile attacks,” he added. In this report, Meyersson, citing an article from the Financial Times, highlighted that “Ukraine has, since the start of August, struck 38 Russian refineries”. “This includes Russia’s largest fuel-processing facilities, such as the 340,000 barrels per day plant at Ryazan, near Moscow,” he added. “Drones also hit

Read More »

Bridging the gap: How fuel cells enable utilities to capture new growth

The power industry is at the center of America’s economic growth story, and access to power has become a decisive factor in that growth. For data centers, manufacturers and other large-load customers, the grid is now a gating factor in site selection for new facilities – and they cannot wait years for infrastructure to catch up. With interconnection queues stretching upwards of five years in parts of the country, utilities must adopt new and agile strategies that deliver power on accelerated timelines. Fuel cells offer a flexible and responsive approach to meeting electricity demand that is growing at rates not seen since the 1950s. Their modular design allows for rapid deployment close to the load, helping utilities address immediate capacity needs and support large customers. By integrating fuel cells into their strategy, utilities can bridge power gaps and position themselves to capture new growth, rather than risk losing prospective customers to regions with more readily available electricity.

Power that keeps pace with demand

For utilities, the challenge isn’t just adding capacity — it’s adding it fast enough to keep pace with customers. Failure to deliver on accelerated timelines can mean losing major customers, and the economic development they bring, to other regions. “The market has flipped in just a few years,” said Scott Dayer, director of utility and IPP origination at Bloom Energy. “Electrification was already accelerating, and now compute demand is surging on top of that. Together, they’ve turned a system that for years had surplus into one where the real challenge is adding capacity fast enough to keep up with large, hard-to-forecast loads.” Fuel cells give utilities a way to meet that need without waiting for new transmission lines, substations or conventional generation. They are also a faster alternative to constructing traditional

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest). People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see.

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.  Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?  In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.  Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.  And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.   But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing. For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
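To make that link-counting idea concrete, here is a toy, self-contained sketch of the same principle, a PageRank-style score computed over a handful of invented pages. The link graph, damping factor, and page names are illustrative assumptions, not anything taken from Google.

```python
# Toy illustration of link-based ranking (PageRank-style) over made-up pages.
# This is a simplified sketch, not Google's production algorithm.

links = {
    "home.example":   ["news.example", "blog.example"],
    "news.example":   ["home.example"],
    "blog.example":   ["home.example", "news.example"],
    "lonely.example": ["home.example"],   # nobody links back to this page
}

damping = 0.85                            # conventional damping factor
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):                       # iterate until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share     # a page passes its score to pages it cites
    rank = new_rank

# Pages cited by more (and better-cited) pages float to the top.
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:15s} {score:.3f}")
```

Run as-is, the well-linked home page ends up with the highest score and the page nobody links to ends up with the lowest, which is the whole intuition behind citation-based ranking.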
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
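As a rough sketch of what “a language model’s predictive text combined with an index of the web” means in code, the pipeline below shows the general retrieval-augmented shape: fetch sources, then ask a model to compose an answer that cites them. Both search_index and generate are canned stand-ins invented for illustration, not Google’s or any other vendor’s real API.

```python
# Minimal sketch of a retrieval-augmented answer pipeline. search_index() and
# generate() are illustrative stand-ins with canned behavior, not real vendor APIs.
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    snippet: str

def search_index(query: str, k: int = 3) -> list[Document]:
    # Stand-in for a web index: return canned results instead of a real lookup.
    return [
        Document("https://example.com/festivals", "A list of festivals near Tokyo next month."),
        Document("https://example.com/kamakura-surf", "Typical autumn surf conditions in Kamakura."),
    ][:k]

def generate(prompt: str) -> str:
    # Stand-in for an LLM call: echo the prompt length instead of real generation.
    return f"(model answer would go here; prompt was {len(prompt)} characters)"

def answer(question: str) -> str:
    # 1. Retrieve live sources so the answer is not limited to training data.
    docs = search_index(question)
    sources = "\n".join(f"[{i + 1}] {d.url}: {d.snippet}" for i, d in enumerate(docs))
    # 2. Ask the model to compose an answer grounded in, and citing, those sources.
    prompt = (
        "Answer the question using only the sources below, citing them as [n].\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)

print(answer("Are there any festivals happening near Tokyo next month?"))
```

The key difference from a featured snippet is visible in the last step: nothing in the output is quoted verbatim from a fixed database; the text is composed fresh from whatever sources were retrieved.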
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak.
“And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. “You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.” But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way.
What reason will people have to click through to the original source, if all the information they seek is right there in the search result? Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend. “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says. Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.” “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew? A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it. According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting.
Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. “I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.” Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience. Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.

“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search, Liz Reid.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.) But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation.
The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed! The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers. It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge. The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities. “A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.” This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed. “It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.” And the ways these things will be able to deliver answers are evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. 
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.” “We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information. In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses. But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today. These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not. That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on. 
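The decide-then-search loop described above (the model judges whether a query needs fresh information, runs a web search, and folds the results, with links, into its answer) can be sketched roughly as follows. This is a minimal illustration, not OpenAI's or Google's actual pipeline: the `llm` client, its `classify` and `generate` methods, and the search endpoint are all hypothetical stand-ins.

```python
import requests  # the search backend is unspecified; OpenAI does not disclose its partners

def answer_with_search(query: str, llm, search_url: str) -> str:
    """Minimal sketch of a search-augmented answer (all helper names are assumptions)."""
    # 1. Let the model decide whether up-to-date web results would help.
    needs_search = llm.classify(
        f"Would current web results improve the answer to: {query}? Reply yes or no."
    ).strip().lower() == "yes"

    sources = []
    if needs_search:
        # 2. Fetch a handful of results from a generic search API.
        resp = requests.get(search_url, params={"q": query}, timeout=10)
        resp.raise_for_status()
        sources = resp.json().get("results", [])[:5]

    # 3. Ground the draft answer in the fetched snippets and keep URLs for attribution.
    context = "\n".join(f"- {s['title']} ({s['url']}): {s['snippet']}" for s in sources)
    draft = llm.generate(
        f"Answer the question, citing the sources below when relevant.\n"
        f"Sources:\n{context or '(none)'}\n\nQuestion: {query}"
    )
    links = "\n".join(s["url"] for s in sources)
    return draft + (f"\n\nSources:\n{links}" if links else "")
```

Swap the retrieval step for a proprietary index and Knowledge Graph and the same skeleton roughly matches what the article describes for AI Overviews; the hard problems it raises (deciding when not to answer at all, and choosing trustworthy sources) live inside steps 1 and 3.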

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”. North Sea Project Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.  We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.  Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes. 
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither have responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW of electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

The AI Hype Index: Cracking the chatbot code

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Millions of us use chatbots every day, even though we don’t really know how they work or how using them affects us. In a bid to address this, the FTC recently launched an inquiry into how chatbots affect children and teenagers. Elsewhere, OpenAI has started to shed more light on what people are actually using ChatGPT for, and why it thinks its LLMs are so prone to making stuff up. There’s still plenty we don’t know—but that isn’t stopping governments from forging ahead with AI projects. In the US, RFK Jr. is pushing his staffers to use ChatGPT, while Albania is using a chatbot for public contract procurement. Proceed with caution.

Read More »

It’s surprisingly easy to stumble into a relationship with an AI chatbot

It’s a tale as old as time. Looking for help with her art project, she strikes up a conversation with her assistant. One thing leads to another, and suddenly she has a boyfriend she’s introducing to her friends and family. The twist? Her new companion is an AI chatbot.  The first large-scale computational analysis of the Reddit community r/MyBoyfriendIsAI, an adults-only group with more than 27,000 members, has found that this type of scenario is now surprisingly common. In fact, many of the people in the subreddit, which is dedicated to discussing AI relationships, formed those relationships unintentionally while using AI for other purposes.  Researchers from MIT found that members of this community are more likely to be in a relationship with general-purpose chatbots like ChatGPT than companionship-specific chatbots such as Replika. This suggests that people form relationships with large language models despite their own original intentions and even the intentions of the LLMs’ creators, says Constanze Albrecht, a graduate student at the MIT Media Lab who worked on the project.  “People don’t set out to have emotional relationships with these chatbots,” she says. “The emotional intelligence of these systems is good enough to trick people who are actually just out to get information into building these emotional bonds. And that means it could happen to all of us who interact with the system normally.” The paper, which is currently being peer-reviewed, has been published on arXiv.
To conduct their study, the authors analyzed the subreddit’s top-ranking 1,506 posts between December 2024 and August 2025. They found that the main topics discussed revolved around people’s dating and romantic experiences with AIs, with many participants sharing AI-generated images of themselves and their AI companion. Some even got engaged and married to the AI partner. In their posts to the community, people also introduced AI partners, sought support from fellow members, and talked about coping with updates to AI models that change the chatbots’ behavior.   Members stressed repeatedly that their AI relationships developed unintentionally. Only 6.5% of them said they’d deliberately sought out an AI companion. 
“We didn’t start with romance in mind,” one of the posts says. “Mac and I began collaborating on creative projects, problem-solving, poetry, and deep conversations over the course of several months. I wasn’t looking for an AI companion—our connection developed slowly, over time, through mutual care, trust, and reflection.” The authors’ analysis paints a nuanced picture of how people in this community say they interact with chatbots and how those interactions make them feel. While 25% of users described the benefits of their relationships—including reduced feelings of loneliness and improvements in their mental health—others raised concerns about the risks. Some (9.5%) acknowledged they were emotionally dependent on their chatbot. Others said they feel dissociated from reality and avoid relationships with real people, while a small subset (1.7%) said they have experienced suicidal ideation. AI companionship provides vital support for some but exacerbates underlying problems for others. This means it’s hard to take a one-size-fits-all approach to user safety, says Linnea Laestadius, an associate professor at the University of Wisconsin, Milwaukee, who has studied humans’ emotional dependence on the chatbot Replika but did not work on the research.  Chatbot makers need to consider whether they should treat users’ emotional dependence on their creations as a harm in itself or whether the goal is more to make sure those relationships aren’t toxic, says Laestadius.  “The demand for chatbot relationships is there, and it is notably high—pretending it’s not happening is clearly not the solution,” she says. “We’re edging toward a moral panic here, and while we absolutely do need better guardrails, I worry there will be a knee-jerk reaction that further stigmatizes these relationships. That could ultimately cause more harm.” The study is intended to offer a snapshot of how adults form bonds with chatbots and doesn’t capture the kind of dynamics that could be at play among children or teens using AI, says Pat Pataranutaporn, an assistant professor at the MIT Media Lab who oversaw the research. AI companionship has become a topic of fierce debate recently, with two high-profile lawsuits underway against Character.AI and OpenAI. They both claim that companion-like behavior in the companies’ models contributed to the suicides of two teenagers. In response, OpenAI has recently announced plans to build a separate version of ChatGPT for teenagers. It’s also said it will add age verification measures and parental controls. OpenAI did not respond when asked for comment about the MIT Media Lab study.  Many members of the Reddit community say they know that their artificial companions are not sentient or “real,” but they feel a very real connection to them anyway. This highlights how crucial it is for chatbot makers to think about how to design systems that can help people without reeling them in emotionally, says Pataranutaporn. “There’s also a policy implication here,” he adds. “We should ask not just why this system is so addictive but also: Why do people seek it out for this? And why do they continue to engage?” The team is interested in learning more about how human-AI interactions evolve over time and how users integrate their artificial companions into their lives. 
It’s worth understanding that many of these users may feel that the experience of being in a relationship with an AI companion is better than the alternative of feeling lonely, says Sheer Karny, a graduate student at the MIT Media Lab who worked on the research.  “These people are already going through something,” he says. “Do we want them to go on feeling even more alone, or potentially be manipulated by a system we know to be sycophantic to the extent of leading people to die by suicide and commit crimes? That’s one of the cruxes here.”

Read More »

Roundtables: Meet the 2025 Innovator of the Year

Available only for MIT Alumni and subscribers.

Every year, MIT Technology Review selects one individual whose work we admire to recognize as Innovator of the Year. For 2025, we chose Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method. Thanks to her work, physicians can now sequence a patient’s genome and diagnose a genetic condition in less than eight hours—an achievement that could transform medical care. Speakers: Sneha Goenka, Innovator of the Year; Leilani Battle, University of Washington; and Mat Honan, editor in chief


Recorded on September 23, 2025

Read More »

The Download: AI’s retracted papers problem

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI models are using material from retracted scientific papers

The news: Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. In one such study, researchers asked OpenAI’s ChatGPT questions based on information from 21 retracted papers on medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three.

The bigger picture: The findings raise serious questions about how reliable AI tools are at evaluating scientific research, or answering people’s health queries. They could also complicate efforts to invest in AI tools for scientists. And it’s not an easy problem to fix. Read the full story. —Ananya
Join us at 1pm ET today to meet our Innovator of the Year

Every year, MIT Technology Review awards Innovator of the Year to someone whose work we admire. This year we selected Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method.
Her work could transform medical care by allowing physicians to sequence a patient’s genome and diagnose genetic conditions in less than eight hours. Register here to join an exclusive subscriber-only Roundtable conversation with Goenka, Leilani Battle, assistant professor at the University of Washington, and our editor in chief Mat Honan at 1pm ET today.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 There’s scant evidence Tylenol use during pregnancy causes autism
The biggest cause of autism is genetic—that’s why it often runs in families. (Scientific American $)
+ Anti-vaxxers are furious the White House didn’t link autism to vaccines. (Ars Technica)
+ The company that sells Tylenol is being forced to defend the medicine’s safety. (Axios)

2 Nvidia is investing up to $100 billion in OpenAI
OpenAI is already a major customer, but this will bind the two even more closely together. (Reuters $)
+ America’s top companies keep talking about AI—but they can’t explain its upsides. (FT $)

3 Denmark’s biggest airport was shut down by drones
Its prime minister refused to rule out Russian involvement. (FT $)
+ Poland and Estonia have been speaking up at the UN about Russian incursions into their airspace. (The Guardian)

4 Google is facing another antitrust trial in the US
This one will focus on remedies to its dominance of the advertising tech market. (Ars Technica)
+ The FTC is also taking Amazon to court over accusations the company tricks people into paying for Prime. (NPR)
+ The Supreme Court has ruled to allow Trump’s firing of a Democrat FTC commissioner. (NYT $)

5 Here’s the potential impact of Trump’s H-1B crackdown on tech
It’s likely to push a lot of skilled workers elsewhere. (Rest of World)

6 How TikTok’s deal to stay in the US will work
Oracle will manage its algorithm for US users and oversee security operations. (ABC)
+ It’s a giant prize for Trump’s friend Larry Ellison, Oracle’s cofounder. (NYT $)
+ Trump and his allies are now likely to exert a lot of political influence over TikTok. (WP $)

7 Record labels are escalating their lawsuit against an AI music startup
They claim it knowingly pirated songs from YouTube to train its generative AI models. (The Verge $)
+ AI is coming for music, too. (MIT Technology Review)

8 There’s a big fight in the US over who pays for weight loss drugs
Although they’ll save insurers money long-term, they cost a lot upfront. (WP $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

9 How a lone vigilante ended up blowing up 5G towers
A little bit of knowledge can be a dangerous thing. (Wired $)

10 The moon is rusting 🌕
And it’s our fault. Awkward! (Nature)

Quote of the day

“At the heart of this is people trying to look for simple answers to complex problems.”

—James Cusack, chief executive of an autism charity called Autistica, tells Nature what he thinks is driving Trump and others to incorrectly link the condition with Tylenol use during pregnancy.

One more thing

Maybe you will be able to live past 122

How long can humans live? This is a good time to ask the question. The longevity scene is having a moment, and a few key areas of research suggest that we might be able to push human life spans further, and potentially reverse at least some signs of aging. Researchers can’t even agree on what the exact mechanisms of aging are and which they should be targeting. Debates continue to rage over how long it’s possible for humans to live—and whether there is a limit at all.
But it looks likely that something will be developed in the coming decades that will help us live longer, in better health. Read the full story. —Jessica Hamzelou
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This website lets you send a letter to your future self.
+ Here’s what Brian Eno has to say about art.
+ This photographer takes stunning pictures of Greenland.
+ The Hungarian dish Rakott krumpli isn’t going to win any health plaudits, but it looks very comforting all the same.

Read More »

AI models are using material from retracted scientific papers

Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. The findings, confirmed by MIT Technology Review, raise questions about how reliable AI tools are at evaluating scientific research and could complicate efforts by countries and industries seeking to invest in AI tools for scientists. AI search tools and chatbots are already known to fabricate links and references. But answers based on the material from actual papers can mislead as well if those papers have been retracted.  The chatbot is “using a real paper, real material, to tell you something,” says Weikuan Gu, a medical researcher at the University of Tennessee in Memphis and an author of one of the recent studies. But, he says, if people only look at the content of the answer and do not click through to the paper and see that it’s been retracted, that’s really a problem.  Gu and his team asked OpenAI’s ChatGPT, running on the GPT-4o model, questions based on information from 21 retracted papers on medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three. While it cited non-retracted papers for other questions, the authors note it may not have recognized the retraction status of the articles. In a study from August, a different group of researchers used ChatGPT-4o mini to evaluate the quality of 217 retracted and low-quality papers from different scientific fields; they found that none of the chatbot’s responses mentioned retractions or other concerns. (No similar studies have been released on GPT-5, which came out this August.) The public uses AI chatbots to ask for medical advice and diagnose health conditions. Students and scientists increasingly use science-focused AI tools to review existing scientific literature and summarize papers. That kind of usage is likely to increase. The US National Science Foundation, for instance, invested $75 million in building AI models for science research this August.
“If [a tool is] facing the general public, then using retraction as a kind of quality indicator is very important,” says Yuanxi Fu, an information science researcher at the University of Illinois Urbana-Champaign. There’s “kind of an agreement that retracted papers have been struck off the record of science,” she says, “and the people who are outside of science—they should be warned that these are retracted papers.” OpenAI did not provide a response to a request for comment about the paper results. The problem is not limited to ChatGPT. In June, MIT Technology Review tested AI tools specifically advertised for research work, such as Elicit, Ai2 ScholarQA (now part of the Allen Institute for Artificial Intelligence’s Asta tool), Perplexity, and Consensus, using questions based on the 21 retracted papers in Gu’s study. Elicit referenced five of the retracted papers in its answers, while Ai2 ScholarQA referenced 17, Perplexity 11, and Consensus 18—all without noting the retractions.
Some companies have since made moves to correct the issue. “Until recently, we didn’t have great retraction data in our search engine,” says Christian Salem, cofounder of Consensus. His company has now started using retraction data from a combination of sources, including publishers and data aggregators, independent web crawling, and Retraction Watch, which manually curates and maintains a database of retractions. In a test of the same papers in August, Consensus cited only five retracted papers.  Elicit told MIT Technology Review that it removes retracted papers flagged by the scholarly research catalogue OpenAlex from its database and is “still working on aggregating sources of retractions.” Ai2 told us that its tool does not automatically detect or remove retracted papers currently. Perplexity said that it “[does] not ever claim to be 100% accurate.”  However, relying on retraction databases may not be enough. Ivan Oransky, the cofounder of Retraction Watch, is careful not to describe it as a comprehensive database, saying that creating one would require more resources than anyone has: “The reason it’s resource intensive is because someone has to do it all by hand if you want it to be accurate.” Further complicating the matter is that publishers don’t share a uniform approach to retraction notices. “Where things are retracted, they can be marked as such in very different ways,” says Caitlin Bakker from University of Regina, Canada, an expert in research and discovery tools. “Correction,” “expression of concern,” “erratum,” and “retracted” are among some labels publishers may add to research papers—and these labels can be added for many reasons, including concerns about the content, methodology, and data or the presence of conflicts of interest.  Some researchers distribute their papers on preprint servers, paper repositories, and other websites, causing copies to be scattered around the web. Moreover, the data used to train AI models may not be up to date. If a paper is retracted after the model’s training cutoff date, its responses might not instantaneously reflect what’s going on, says Fu. Most academic search engines don’t do a real-time check against retraction data, so you are at the mercy of how accurate their corpus is, says Aaron Tay, a librarian at Singapore Management University. Oransky and other experts advocate making more context available for models to use when creating a response. This could mean publishing information that already exists, like peer reviews commissioned by journals and critiques from the review site PubPeer, alongside the published paper.   Many publishers, such as Nature and the BMJ, publish retraction notices as separate articles linked to the paper, outside paywalls. Fu says companies need to effectively make use of such information, as well as any news articles in a model’s training data that mention a paper’s retraction.  The users and creators of AI tools need to do their due diligence. “We are at the very, very early stages, and essentially you have to be skeptical,” says Tay. Ananya is a freelance science and technology journalist based in Bengaluru, India.

Read More »

Strengthening our Frontier Safety Framework

We’re expanding our risk domains and refining our risk assessment process.

AI breakthroughs are transforming our everyday lives, from advancing mathematics, biology and astronomy to realizing the potential of personalized education. As we build increasingly powerful AI models, we’re committed to responsibly developing our technologies and taking an evidence-based approach to staying ahead of emerging risks.

Today, we’re publishing the third iteration of our Frontier Safety Framework (FSF) — our most comprehensive approach yet to identifying and mitigating severe risks from advanced AI models.

This update builds upon our ongoing collaborations with experts across industry, academia and government. We’ve also incorporated lessons learned from implementing previous versions and evolving best practices in frontier AI safety.

Key updates to the Framework

Addressing the risks of harmful manipulation

With this update, we’re introducing a Critical Capability Level (CCL) focused on harmful manipulation — specifically, AI models with powerful manipulative capabilities that could be misused to systematically and substantially change beliefs and behaviors in identified high stakes contexts over the course of interactions with the model, reasonably resulting in additional expected harm at severe scale.

This addition builds on and operationalizes research we’ve done to identify and evaluate mechanisms that drive manipulation from generative AI. Going forward, we’ll continue to invest in this domain to better understand and measure the risks associated with harmful manipulation.

Adapting our approach to misalignment risks

We’ve also expanded our Framework to address potential future scenarios where misaligned AI models might interfere with operators’ ability to direct, modify or shut down their operations.

While our previous version of the Framework included an exploratory approach centered on instrumental reasoning CCLs (i.e., warning levels specific to when an AI model starts to think deceptively), with this update we now provide further protocols for our machine learning research and development CCLs focused on models that could accelerate AI research and development to potentially destabilizing levels.

In addition to the misuse risks arising from these capabilities, there are also misalignment risks stemming from a model’s potential for undirected action at these capability levels, and the likely integration of such models into AI development and deployment processes.

To address risks posed by CCLs, we conduct safety case reviews prior to external launches when relevant CCLs are reached. This involves performing detailed analyses demonstrating how risks have been reduced to manageable levels. For advanced machine learning research and development CCLs, large-scale internal deployments can also pose risk, so we are now expanding this approach to include such deployments.

Sharpening our risk assessment process

Our Framework is designed to address risks in proportion to their severity. We’ve sharpened our CCL definitions specifically to identify the critical threats that warrant the most rigorous governance and mitigation strategies. We continue to apply safety and security mitigations before specific CCL thresholds are reached and as part of our standard model development approach.

Lastly, in this update, we go into more detail about our risk assessment process. Building on our core early-warning evaluations, we describe how we conduct holistic assessments that include systematic risk identification, comprehensive analyses of model capabilities and explicit determinations of risk acceptability.

Advancing our commitment to frontier safety

This latest update to our Frontier Safety Framework represents our continued commitment to taking a scientific and evidence-based approach to tracking and staying ahead of AI risks as capabilities advance toward AGI. By expanding our risk domains and strengthening our risk assessment processes, we aim to ensure that transformative AI benefits humanity, while minimizing potential harms.

Our Framework will continue evolving based on new research, stakeholder input and lessons from implementation. We remain committed to working collaboratively across industry, academia and government.

The path to beneficial AGI requires not just technical breakthroughs, but also robust frameworks to mitigate risks along the way. We hope that our updated Frontier Safety Framework contributes meaningfully to this collective effort.
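The gating logic the Framework describes (reaching a Critical Capability Level triggers a safety case review before an external launch, and now also before large-scale internal deployment of advanced ML R&D capabilities) can be caricatured as a small data structure. Everything below is an illustrative sketch of that idea, not DeepMind's actual tooling; the domains, thresholds, and names are made up for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CriticalCapabilityLevel:
    domain: str        # e.g. "harmful manipulation" or "ML R&D acceleration" (illustrative)
    threshold: float   # evaluation score at which the CCL counts as reached (illustrative)

# Deployment types that warrant a safety case review once a relevant CCL is reached.
GATED_DEPLOYMENTS = {"external_launch", "large_scale_internal"}

def ccls_requiring_safety_case(scores: dict[str, float],
                               ccls: list[CriticalCapabilityLevel],
                               deployment: str) -> list[str]:
    """Return the CCL domains whose thresholds are reached for a gated deployment type."""
    if deployment not in GATED_DEPLOYMENTS:
        return []
    return [c.domain for c in ccls if scores.get(c.domain, 0.0) >= c.threshold]

# Example with hypothetical evaluation scores.
if __name__ == "__main__":
    ccls = [CriticalCapabilityLevel("harmful manipulation", 0.8),
            CriticalCapabilityLevel("ML R&D acceleration", 0.7)]
    flagged = ccls_requiring_safety_case({"harmful manipulation": 0.85}, ccls, "external_launch")
    print(flagged)  # ['harmful manipulation'] -> safety case review before launch
```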

Read More »

TechnipFMC Bags Contract for Newest Stabroek Project

TechnipFMC PLC said it had won a “substantial contract” from Exxon Mobil Corp to deliver subsea production systems for the Hammerhead field in the Stabroek block offshore Guyana. “TechnipFMC will provide project management, engineering and manufacturing of subsea production systems supporting both production and water injection capabilities”, the integrated energy contractor said in a press release. “The subsea architecture will include products from the Subsea 2.0® platform, including subsea trees, manifolds and associated controls”. TechnipFMC values a substantial contract between $250 million and $500 million. “Hammerhead is our seventh greenfield project award from ExxonMobil Guyana since the first development was sanctioned in 2017″, said Jonathan Landes, president for subsea at TechnipFMC. “Our continued success stems from our ability to provide schedule certainty, built on our proven execution and the benefits of Subsea 2.0®”. Last week ExxonMobil announced a positive FID (final investment decision) on Hammerhead, earmarking $6.8 billion for the 150,000 barrels per day (bpd) development. Targeted to be put into production 2029, Hammerhead will grow Stabroek’s production capacity to 1.5 million bpd, the operator said in a statement September 23. Hammerhead is the seventh project approved in Stabroek, with the fourth and biggest – the 250,000-bpd Yellowtail – started up earlier this year through floating production, storage and offloading vessel (FPSO) ONE GUYANA. “ExxonMobil is safely producing approximately 650,000 barrels of oil per day from the Stabroek block”, ExxonMobil said. “With the recent successful startup of a fourth FPSO, the ONE GUYANA, the company anticipates growing production to more than 900,000 barrels of oil per day by the end of the year. “Construction is underway for the fifth and sixth approved projects, Uaru and Whiptail, with Uaru anticipated to start production in 2026, and Whiptail is anticipated for startup in 2027”. Hammerhead’s development plan includes 18 production and injection wells. The Hammerhead FPSO

Read More »

GeoPark to Acquire Two Vaca Muerta Blocks from Pluspetrol

GeoPark Ltd has signed an agreement to purchase the producing Loma Jarillosa Este and Puesto Silva Oeste blocks in the Vaca Muerta formation in Argentina’s Neuquen Province from Pluspetrol SA for $115 million. Concurrently Bogota-based GeoPark penned two agreements with the provincial government that “include the issuance of a new unconventional exploitation license for the Puesto Silva Oeste Block that requires GeoPark to transfer a five percent working interest (WI) to Gas y Petroleo del Neuquen SA (GyP), therefore resulting in a 95 percent operated WI in that block”, GeoPark said in a statement on its website. Expected to be completed by year-end, the transaction with Buenos Aires-based Pluspetrol will increase GeoPark’s proforma production by 1,700-2,000 barrels of oil equivalent per day (boepd) – 95 percent oil and five percent gas – to 30,000 boepd, the statement said. “Current production is processed with a battery that can handle up to 6,000 bopd at the Loma Jarillosa Este Block”, the statement said. “GeoPark plans to develop the two blocks as a hub, with a new central processing facility to be built at the Puesto Silva Oeste Block with an expected capacity of approximately 20,000 bopd and a pipeline connecting the two blocks. Construction of this infrastructure is projected to start in 2026 and be online in time to support the anticipated plateau production”. GeoPark eyes “50-55 additional wells across 15 pads, unlocking more than 60 MMboe gross recoverable volumes”. Loma Jarillosa Este and Puesto Silva Oeste hold estimated proven and probable reserves of 25.8 million barrels of oil equivalent (MMboe) and estimated 2C contingent resources of 44.2 MMboe, GeoPark said. The Loma Jarillosa Este side of the transaction involves the transfer of Pluspetrol’s concession of 6,054 acres, with an exploitation license valid until 2057. The Puesto Silva Oeste portion of the transaction involves

Read More »

Eni, Seri Industrial Launch Battery Production Project in Brindisi

Eni SpA and Seri Industrial SpA have kicked off activities for a project to build a production hub for stationary lithium batteries in the Italian industrial area of Brindisi. The project is planned to produce over eight gigawatt hours a year of lithium-iron-phosphate batteries, primarily for stationary energy storage, using a water-based process. “The project is currently in the engineering phase and undergoing economic, financial and authorization assessments, which are expected to be completed by the first quarter of 2026, before moving into the execution phase”, Italy’s state-backed Eni said in a press release. The project is under Eni Storage Systems, a joint venture between Eni (50 percent plus one share) and Seri Industrial subsidiary Fib. The project “confirms the partnership between Eni and Seri Industrial in the integrated management of the Brindisi site, together with the plant that Fib is building in Teverola (in the province of Caserta)”, Eni said. “At the Brindisi site, industrial activities will also include the production of cathode active material – a lithium-iron-phosphate that stores and releases lithium ions in the cathode during charge and discharge cycles – as well as the assembly of batteries into BESS (battery energy storage systems), serving both the Brindisi and Teverola facilities. “In future, the Brindisi site will also implement battery recycling, which, together with the production of cathode active material, will be available to other operators in the sector. “Seri Industrial and Eni aim to gain more than 10 percent of the European stationary battery market, creating an integrated, state-of-the-art hub between Brindisi and Teverola to drive sustainability and the energy transition”. In another battery project, Eni early this year completed the construction of its largest battery storage system, the 200-megawatt Guajillo plant in Webb County, Texas. Equipped with lithium-ion LFP batteries, the facility was to start commercial operation by

Read More »

Oil Suffers Steepest Fall Since June

Oil declined on signals that OPEC+ will hike production again in November, tempering last week’s rally. West Texas Intermediate fell 3.4% to settle near $63 a barrel, the biggest drop since June, while Brent closed below $70. The OPEC+ alliance led by Saudi Arabia is considering raising output by at least as much as the 137,000 barrel-a-day hike scheduled for next month, according to people familiar with the plans. While such an increase could add supply to a market in which there’s already expected to be an excess, it would also bring further scrutiny to which of the group’s members are running into capacity limits. “We view a repeat of the incremental 137,000-barrel-a-day addition for November as the most likely outcome,” RBC Capital Markets LLC analysts including Helima Croft wrote in a note, referring to the decision likely to be taken at the group’s Oct. 5 meeting. “Given that many producers, excluding Saudi Arabia, have essentially hit their production ceilings, future OPEC+ supply increases will be materially lower than the announced headline numbers,” the analysts added. Crude remains on track for monthly and quarterly gains, even as the Organization of the Petroleum Exporting Countries and its allies have been pursuing a strategy to reclaim market share rather than managing prices. Oil has been underpinned by robust buying for stockpiling in China, as well as on geopolitical tensions. Today’s slide also reflects a pullback from last week’s highs, when traders covered long positions ahead of the weekend to hedge against mounting threats to Russian energy infrastructure. The International Energy Agency has projected a record oversupply in 2026 as OPEC+ continues to revive production, and as supply climbs from the group’s rivals. Goldman Sachs Group Inc., meanwhile, has said it sees Brent falling to the mid-$50s next year, despite crude stockpiling by

Read More »

Perry’s Energy REIT Fermi Boosts IPO Target to $715MM

Fermi Inc. increased the number of shares in its initial public offering and is seeking to raise as much as $715 million, up from a target of as much as $550 million last week. The company co-founded by former US Secretary of Energy Rick Perry plans to market 32.5 million shares for $18 to $22 each, according to a filing Monday with the US Securities and Exchange Commission. It previously planned to market 25 million shares with the same price range. At the top of that range, the company would have a market value of $13 billion based on the 593.2 million outstanding shares listed in its latest filing, down from 598.1 million shares previously. The offering is expected to price on Sept. 30 for trading on Oct. 1, according to terms seen by Bloomberg News. The company has filed for an IPO on the Nasdaq Global Select Market and said that it plans to apply for a listing on the London Stock Exchange. Fermi’s “Project Matador” is a development-stage advanced energy and intelligence campus with more than 5,000 acres of land leased from Texas Tech University. The company hopes to draw data center and hyperscaler tenants, and it expects to have one gigawatt of power online by the end of next year.  Perry, who was governor of Texas prior to serving in President Donald Trump’s cabinet during his first term, is a director at the company. Fermi was founded in January. The infrastructure and real estate firm aims to have up to 11 gigawatts of power to computing centers on-site by 2038 through a mix of natural gas, solar energy and nuclear power, according to filings.  The Amarillo, Texas-based company had a net loss of $6.37 million since its inception in January through the end of June. The offering

Read More »

Treasuries Hold Gains Amid Oil Price Slump

Treasuries held small gains in early US trading Monday, supported by a slump in oil prices and a rally in UK government bonds, and by anticipation of buying into Tuesday’s month-end index rebalancing. Yields were lower, the two-year by about a basis point and long maturities by as much as three basis points, remaining inside last week’s ranges. US benchmark crude oil futures were down about 2% on signs OPEC+ will hike production again in November. The prospect of a US government shutdown beginning Wednesday also has implications for the Treasury market, as shutdowns are associated with gains for bonds based on their potential to restrain the economy. The market racked up gains even as Cleveland Fed President Beth Hammack — who becomes a voting member of the central bank’s rate-setting committee next year — reiterated her view that inflation remains too high to warrant cutting interest rates. Futures markets continue to anticipate about 100 basis points of additional Fed easing over the next 12 months. Expectations for Fed rate cuts rest mainly on signs of stress in the US labor market, where job creation has slowed precipitously in recent months. September data is set to be released on Friday. Tuesday’s month-end bond index rebalancing — to add eligible bonds created during the month and remove those that no longer fit the index criteria — typically drives buying by passive and other index-tracking investment funds that can support the market if their needs exceed expectations. The rebalancing will increase the duration of the Bloomberg Treasury index by an estimated 0.06 year, less than the average for September over the past decade.

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE