Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Coverage of artificial intelligence, from edge computing and new AI chips to the data center buildout behind the AI supercycle.

Bitcoin

News and analysis for Bitcoin mining professionals, from hashrate economics to power procurement.

Datacenter

Industry updates on data center development, power availability, and digital infrastructure investment.

Energy

Insights across power, oil, gas, and renewables, from market moves to grid policy.

Featured Articles

Shell Expects Weak Oil Trading Result for Q4

Shell Plc said its oil trading performance worsened in the fourth quarter as crude prices slumped, adding to signs that Big Oil is heading into a tougher earnings season.  Oil trading results for the year’s final three months are “expected to be significantly lower” than the previous quarter, Shell said in an update on Thursday, ahead of an earnings report due early next month. At the company’s troubled chemicals division, a “significant loss” is expected. The update comes at a time when the oil market is lurching into an oversupply that could make for challenging trading conditions in the months ahead. The international benchmark Brent plunged 18 percent last year and has been largely unaffected by turmoil in Venezuela, where the country’s president Nicolás Maduro has been captured by US forces. It’s a “rough end to the year” for Shell, RBC Capital Markets analyst Biraj Borkhataria said Thursday in a note. He had expected “a relatively weak quarter, and this looks worse than expected.” Shell’s shares fell as much as 2.3 percent in early London trading. Shell’s massive in-house trading business deals in oil, gas, fuels, chemicals and renewable power – trading both the company’s own production as well as supply from third parties. The energy major doesn’t disclose separate results for its traders, but the performance is closely watched as it can be a key driver of earnings. A strong trading performance in the third quarter was one of the reasons Shell cited for earnings that beat estimates.  Since taking over the London-based energy giant three years ago, Chief Executive Officer Wael Sawan has sought to cut costs and offload under-performing assets to improve the company’s balance sheet. This is his first test in a lower oil-price environment that will pressure the firm’s ability to maintain its level of share buybacks. US rival Exxon Mobil

Read More »

National Grid Unveils New Plan for Anglo-Dutch Cable Link

National Grid PLC on Thursday announced route changes for the proposed LionLink cable to the Netherlands, a project with TenneT BV. The interconnector is designed to carry up to two gigawatts of wind electricity, enough for about 2.5 million British homes, according to National Grid. LionLink would connect a wind farm offshore the Netherlands to the Dutch and UK grids, with a targeted start of operation in 2032, National Grid says on the project webpage. The power transmission and distribution operator said in an online statement Thursday it would launch an eight-week public consultation for its new plan for the cable to start underground in Suffolk’s Walberswick, “a decision made following an assessment of the environment and local residents’ concerns around access constraints and traffic impacts”. “An alternative underground HVDC [high-voltage direct current] cable corridor to the north of Southwold was discounted following the consultations”, National Grid said. “NGV is also working closely with local authorities to ensure no construction takes place on the beach, and there is no visible infrastructure once the project is complete”, it added, referring to National Grid Ventures, its unit tasked with building and operating LionLink. “84 percent of the UK section of the LionLink cable will be offshore, and all onshore sections will be buried underground”. The new plan was based on non-statutory consultations in 2022 and 2023, LionLink project director Gareth Burden said. “We are coordinating with other developers in Suffolk on a regular basis so that where possible, we can work together to ensure construction is carried out in manageable sections, and we can avoid long-term disruption in any one area”, Burden added. National Grid noted, “LionLink is set to be one of the first projects of its kind, helping to shape the future of offshore renewable energy by combining wind generation and

Read More »

Cisco identifies vulnerability in ISE network access control devices

Johannes Ullrich, dean of research at the SANS Institute, said, “Most likely, this is an XML External Entity vulnerability.” External entities, he explained, are an XML feature that instructs the parser to either read local files or access external URLs. In this case, an attacker could embed an external entity in the license file, instructing the XML parser to read a confidential file and include it in the response. This is a common vulnerability in XML parsers, he said, typically mitigated by disabling external entity parsing. An attacker would be able to obtain read access to confidential files like configuration files, he added, and possibly user credentials. Ullrich also said an ISE administrator may have access to a lot of the information, but they should not have access to user credentials. The Cisco advisory says an attacker could exploit this vulnerability by uploading a malicious file to the application: “A successful exploit could allow the attacker to read arbitrary files from the underlying operating system that could include sensitive data that should otherwise be inaccessible even to administrators. To exploit this vulnerability, the attacker must have valid administrative credentials.” Cisco said proof-of-concept exploit code is available for this vulnerability, but so far the company isn’t aware of any malicious use of the hole. These days, admin credentials aren’t hard to get, Harrington noted. The “dirty secret that few people want to talk about is across IT and security operations there are so many systems that are left with default credentials.” That’s particularly common, he said, with devices behind a firewall, such as network access control servers, because admins assume that machines inside the network can’t be reached by external hackers. But lots of credentials can be scooped up in compromises of applications where Cisco admins might have stored passwords.
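
To make the mechanism concrete, here is a minimal, generic sketch of the XXE pattern Ullrich describes, assuming a Python service that parses an uploaded license file with lxml. This is not Cisco's parser or ISE's actual license format; the element names and target file path are hypothetical.

```python
# Minimal, generic sketch of the XXE pattern described above -- not Cisco's
# parser or ISE's real license format. Element names and the target file
# are hypothetical; lxml stands in for whatever XML parser the product uses.
from lxml import etree

MALICIOUS_LICENSE = b"""<?xml version="1.0"?>
<!DOCTYPE license [
  <!ENTITY leak SYSTEM "file:///etc/passwd">
]>
<license><owner>&leak;</owner></license>
"""

# Vulnerable configuration: entity substitution enabled, so &leak; expands
# to the contents of the referenced local file and ends up in the response.
vulnerable = etree.XMLParser(resolve_entities=True)
doc = etree.fromstring(MALICIOUS_LICENSE, parser=vulnerable)
print(doc.findtext("owner"))  # prints the leaked file contents

# The mitigation Ullrich mentions: disable external entity parsing before
# handling untrusted uploads (no_network also blocks http:// entities).
hardened = etree.XMLParser(resolve_entities=False, no_network=True)
doc = etree.fromstring(MALICIOUS_LICENSE, parser=hardened)
print(doc.findtext("owner"))  # entity is left unexpanded; nothing leaks
```

For standard-library parsers, the defusedxml package provides the same protection as a drop-in wrapper.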

Read More »

JLL’s 2026 Global Data Center Outlook: Navigating the AI Supercycle, Power Scarcity and Structural Market Transformation

Sovereign AI and National Infrastructure Policy

JLL frames artificial intelligence infrastructure as an emerging national strategic asset, with sovereign AI initiatives representing an estimated $8 billion in cumulative capital expenditure by 2030. While modest relative to hyperscale investment totals, this segment carries outsized strategic importance. Data localization mandates, evolving AI regulation, and national security considerations are increasingly driving governments to prioritize domestic compute capacity, often with pricing premiums reaching as high as 60%. Examples cited across Europe, the Middle East, North America, and Asia underscore a consistent pattern: digital sovereignty is no longer an abstract policy goal, but a concrete driver of data center siting, ownership structures, and financing models. In practice, sovereign AI initiatives are accelerating demand for locally controlled infrastructure, influencing where capital is deployed and how assets are underwritten. For developers and investors, this shift introduces a distinct set of considerations. Sovereign projects tend to favor jurisdictional alignment, long-term tenancy, and enhanced security requirements, while also benefiting from regulatory tailwinds and, in some cases, direct state involvement. As AI capabilities become more tightly linked to economic competitiveness and national resilience, policy-driven demand is likely to remain a durable (if specialized) component of global data center growth.

Energy and Sustainability as the Central Constraint

Energy availability emerges as the report’s dominant structural constraint. In many major markets, average grid interconnection timelines now extend beyond four years, effectively decoupling data center development schedules from traditional utility planning cycles. As a result, operators are increasingly pursuing alternative energy strategies to maintain project momentum, including:

- Behind-the-meter generation
- Expanded use of natural gas, particularly in the United States
- Private-wire renewable energy projects
- Battery energy storage systems (BESS)

JLL points to declining battery costs, seen falling below $90 per kilowatt-hour in select deployments, as a meaningful enabler of grid flexibility, renewable firming, and
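
For a sense of scale, a back-of-the-envelope sketch of what the sub-$90/kWh figure implies. The system size is a hypothetical example, and the figure covers cells only, not inverters, balance of plant, or installation.

```python
# Rough arithmetic on the sub-$90/kWh battery cost cited by JLL.
# The 100 MW / 4-hour system is a hypothetical example, and $90/kWh
# covers cells only -- not inverters, balance of plant, or installation.
power_mw = 100          # hypothetical BESS power rating
duration_h = 4          # hypothetical discharge duration
cost_per_kwh = 90       # JLL's cited cost threshold, $/kWh

energy_kwh = power_mw * 1_000 * duration_h        # 400,000 kWh
cell_cost = energy_kwh * cost_per_kwh             # $36,000,000
print(f"{energy_kwh / 1e3:.0f} MWh of cells ≈ ${cell_cost / 1e6:.0f}M")
```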

Read More »

Oil Prices Jump as Short Covering Builds

Oil moved higher as traders digested a mix of geopolitical risks that could add a premium to prices while continuing to assess US measures to exert control over Venezuela’s oil. West Texas Intermediate rose 3.2% to settle below $58 a barrel. Prices continued to climb after settlement, rising more than 1% and leaving the market poised to wipe out losses from earlier in the week. President Donald Trump threatened to hit Iran “hard” if the country’s government killed protesters amid an ongoing period of unrest. A disruption to Iranian supply would prove an unexpected hurdle in a market that’s currently anticipating a glut of oil. Adding to the bullish momentum, an annual period of commodity index rebalancing is expected to see cash flow back into crude over the next few days. Call skews for Brent have also strengthened as traders pile into the options market to hedge. And entering the day, trend-following commodity trading advisers were 91% short in WTI, according to data from Kpler’s Bridgeton Research group. That positioning can leave traders rushing to cover shorts in the event of a price spike. The confluence of bullish events arrived as traders were weighing the US’s efforts to control the Venezuelan oil industry. Energy Secretary Chris Wright said the US plans to control sales of Venezuelan oil and would initially offer stored crude, while the Energy Department said barrels already were being marketed. State-owned Petroleos de Venezuela SA said it’s in negotiations with Washington over selling crude through a framework similar to an arrangement with Chevron Corp., the only supermajor operating in the country. Meanwhile, President Donald Trump told the New York Times that US oversight of the country could last years and that “the oil will take a while.” “We are really talking about a trade-flow shift as the

Read More »

Survey Shows OPEC Held Supply Flat Last Month

OPEC’s crude production held steady in December as a slump in Venezuela’s output to the lowest in two years was offset by increases in Iraq and some other members, a Bloomberg survey showed.  The Organization of the Petroleum Exporting Countries pumped an average of just over 29 million barrels a day, little changed from the previous month, according to the survey. Venezuelan output declined by about 14% to 830,000 barrels a day as the US blocked and seized tankers as part of a strategy to pressure the country’s leadership. Supplies increased from Iraq and a few other nations as they pressed on with the last in a series of collective increases before a planned pause in the first quarter of this year. The alliance, led by Saudi Arabia, aims to keep output steady through the end of March while global oil markets confront a surplus. World markets have been buffeted this week after President Donald Trump’s administration captured Venezuelan leader Nicolás Maduro, and said it would assume control of the OPEC member’s oil exports indefinitely.  While Trump has said that US oil companies will invest billions of dollars to rebuild Venezuela’s crumbling energy infrastructure, the nation’s situation in the short term remains precarious. Last month, Caracas was forced to shutter wells at the oil-rich Orinoco Belt amid the American blockade.  The shock move is the latest in an array of geopolitical challenges confronting the broader OPEC+ coalition, ranging from forecasts of a record supply glut to unrest in Iran and Russia’s ongoing war against Ukraine, which is taking a toll on the oil exports of fellow alliance member Kazakhstan. Oil prices are trading near the lowest in five years at just over $60 a barrel in London, squeezing the finances of OPEC+ members. Amid the uncertain backdrop, eight key nations agreed again this month to freeze output levels during the first quarter,

Read More »

Utilities under pressure: 6 power sector trends to watch in 2026

2026 will be a year of reckoning for the electric power industry. Major policy changes in the One Big Beautiful Bill Act, which axed most subsidies for clean energy and electric vehicles, are forcing utilities, manufacturers, developers and others to pivot fast. The impacts of those changes will become more pronounced over the coming months. Market forces will also have their say. Demand for power has never been greater. But some of the most aggressive predictions driving resource planning may not come to pass, leading some to fear the possibility of another tech bubble. At the same time, each passing day brings more distributed energy resources onto the grid, increasing the opportunities — and expectations — for utilities to harness those resources into a more dynamic, flexible and resilient system. Here are some of the top trends Utility Dive will be tracking over the coming year.

Large loads — where they are, and who controls their interconnection — dominate industry concerns

Across the United States, but particularly in markets like Texas and the Mid-Atlantic, large loads — mainly data centers designed to run artificial intelligence programs — are seeking to connect to the grid, driving up electricity demand forecasts and ballooning interconnection queues. That’s led some states to introduce new large load tariffs to weed out speculative requests, with more states expected to follow suit. The Department of Energy is now pushing federal regulators to take a more active role in regulating how those loads get connected to the grid, setting the stage for a power struggle between state and federal authorities. The DOE asked the Federal Energy Regulatory Commission to issue rules by April 30, a deadline many say will be hard to meet. A

Read More »

China’s Top Oil Firms Turn to Beijing for Guidance on VEN

Leading Chinese oil companies with interests in Venezuela have asked Beijing for guidance on how to protect their investments as Washington cranks up pressure on the Latin American country to increase its economic ties with the US. State-owned firms led by China National Petroleum Corp. raised concerns this week with government agencies and sought advice from officials, in an effort to align their responses with Beijing’s diplomatic strategy and to salvage existing claims to some of the world’s largest oil reserves, according to people familiar with the situation. They asked not to be identified as the discussions are private. The companies, closely monitoring developments even before the US seized President Nicolas Maduro at the weekend, are also conducting their own assessments of the situation on the ground, the people said. Top Beijing officials are separately reviewing events and trying to better understand corporate exposure, while planning for scenarios including a worst case where China’s investments would go to zero, they added.  While it is typical for government-backed firms to maintain close ties with officials in Beijing, the emergency consultations underscore the stakes for Chinese majors, caught off-guard by Washington’s raid and by the rapid escalation of efforts to establish a US sphere of influence in the Americas. Beyond the immediate impact of US actions, all are concerned about long-term prospects, the people said. Chinese companies have established a significant footprint across Latin America over the past decades, including under the Belt and Road Initiative. Venezuela, with few other friends, has been among the most important beneficiaries of this largesse — in part because of its vast oil wealth. China first extended financing for infrastructure and oil projects in 2007, under former President Hugo Chavez. Public data supports estimates that Beijing had lent upwards of $60 billion in oil-backed loans through state-run banks by 2015. 

Read More »

USA Crude Oil Stocks Drop Nearly 4MM Barrels WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 3.8 million barrels from the week ending December 26 to the week ending January 2, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was released on January 7 and included data for the week ending January 2. According to the report, crude oil stocks, not including the SPR, stood at 419.1 million barrels on January 2, 422.9 million barrels on December 26, 2025, and 414.6 million barrels on January 3, 2025. Crude oil in the SPR stood at 413.5 million barrels on January 2, 413.2 million barrels on December 26, and 393.8 million barrels on January 3, 2025, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.707 billion barrels on January 2, the report revealed. Total petroleum stocks were up 8.4 million barrels week on week and up 78.7 million barrels year on year, the report pointed out. “At 419.1 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are about three percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 5.6 million barrels last week and are about three percent below the five year average for this time of year. Propane/propylene inventories decreased 2.2 million barrels from last week and are about 29 percent above the five year
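
The week-on-week and year-on-year moves follow directly from the stock levels quoted above; a quick sketch using only the report's own figures (the January 2 date is 2026, inferred from the report's references to December 26, 2025 and January 3, 2025):

```python
# WoW and YoY changes computed from the EIA figures quoted above
# (million barrels). Only numbers stated in the report are used.
crude_ex_spr = {"2026-01-02": 419.1, "2025-12-26": 422.9, "2025-01-03": 414.6}
spr          = {"2026-01-02": 413.5, "2025-12-26": 413.2, "2025-01-03": 393.8}

for name, stocks in [("Crude ex-SPR", crude_ex_spr), ("SPR", spr)]:
    wow = stocks["2026-01-02"] - stocks["2025-12-26"]
    yoy = stocks["2026-01-02"] - stocks["2025-01-03"]
    print(f"{name}: {wow:+.1f} MMbbl WoW, {yoy:+.1f} MMbbl YoY")
# Crude ex-SPR: -3.8 MMbbl WoW, matching the reported 3.8 MMbbl draw
```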

Read More »

Imperial Expects Up To $1.6B Capex for 2026

Imperial Oil Ltd said it expects CAD 2-2.2 billion ($1.6 billion) in capital and exploration expenditure for next year, compared to CAD 1.9-2.1 billion for this year. The Canadian oil sands-focused producer, majority-owned by Exxon Mobil Corp, earlier announced a cost-saving restructuring plan. “The company’s strategy remains focused on maximizing the value of its existing assets and progressing advantaged high-value growth opportunities while delivering industry-leading returns to shareholders”, Imperial said in a guidance statement. Imperial expects gross production of 441,000-460,000 oil-equivalent barrels per day (boed) in 2026. In the first nine months of 2025, Imperial averaged 436,000 boed gross, according to its third-quarter report on October 31. While that fell short of the upper end of its 2025 projection of 433,000-456,000 boed, the third quarter figure was 462,000 boed, the company’s highest quarterly output in over 30 years, with Kearl recording its highest-ever quarterly gross production at 316,000 barrels per day (bpd). “Higher volumes reflect reliability improvements and continued growth at Kearl and Cold Lake, progressing towards targets of 300,000 and 165,000 barrels per day respectively”, Imperial said of its production forecast for 2026. “Turnarounds are planned at Cold Lake, Syncrude and at Kearl, where planned work at the K1 plant will extend the turnaround interval from two years to four years”. Next year Imperial “will progress secondary bitumen recovery projects at Kearl, high-value infill drilling and Mahihkan SA-SAGD at Cold Lake and mine progression at both Kearl and Syncrude”, the company said. Downstream, Imperial expects to process 395,000-405,000 bpd with a utilization rate of 91-93 percent. “The company is planning to complete turnarounds at Strathcona and Sarnia”, Imperial said. “At Strathcona, the work will focus on the crude unit, after achieving its longest-ever run length of 10 years. “Imperial continues to focus on further improving and maximizing

Read More »

Where Will USA Gasoline Price Land on Christmas?

In a blog posted on its website on Tuesday, GasBuddy projected what the average U.S. gasoline price will be on December 25. According to this blog, GasBuddy sees the U.S. average gasoline price coming in at $2.79 per gallon this Christmas. That’s lower than the last four Christmas Days, a chart included in the blog showed. This chart outlined that the average U.S. gasoline price was $2.95 per gallon on December 25, 2024, $3.10 per gallon on December 25, 2023, $3.05 per gallon on December 25, 2022, and $3.26 per gallon on December 25, 2021. The average U.S. gasoline price was $2.26 per gallon back on December 25, 2020, according to the chart. “GasBuddy expects the national average on Christmas Day to land near $2.79 per gallon, below last year’s price of $3.00, saving motorists over half a billion dollars during the Christmas week compared to last year,” GasBuddy noted in its blog. “While 2024 previously represented the lowest Christmas Day price since 2020, this year continues that trend and marks another year of modest improvement for holiday drivers,” it added. In the blog, GasBuddy stated that “the softer holiday pricing comes as refinery maintenance winds down and gasoline supplies rise, easing some of the pressure that typically builds earlier in the year”. “In addition, OPEC has been increasing oil production for much of 2025, pushing crude prices to multi-year lows in the weeks leading up to Christmas,” it noted. “Even with millions of Americans traveling for the holidays, winter gasoline demand remains far lower than in the summer, helping keep a natural lid on prices,” it continued. “While unexpected refinery issues or international tensions could still introduce volatility, the overall backdrop is far more favorable for Christmas travelers than it was a few years ago when the re-opening economy
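
The half-billion-dollar savings claim checks out under a typical winter demand assumption. The sketch below labels that assumption explicitly, since the blog does not state the demand figure it used.

```python
# Sanity check on GasBuddy's "over half a billion dollars" savings claim.
# The ~8.5 million bpd gasoline demand figure is our assumption (typical
# of EIA winter data), not a number from the blog.
price_2025 = 2.79                # projected Christmas Day average, $/gal
price_2024 = 3.00                # last year's price as cited in the blog
demand_bpd = 8_500_000           # assumed US gasoline demand, barrels/day
gal_per_bbl = 42

weekly_gal = demand_bpd * gal_per_bbl * 7
savings = weekly_gal * (price_2024 - price_2025)
print(f"Christmas-week savings ≈ ${savings / 1e6:.0f}M")  # ≈ $525M
```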

Read More »

3D Energi Secures Investor Commitments for $10MM Capital Raise

3D Energi Ltd said Tuesday new institutional investors and existing shareholders have committed to participating in a share issuance to raise AUD 14.5 million ($9.61 million) for its drilling campaign offshore Victoria. The Eastern Australia-focused exploration company plans to issue nearly 104 million shares at AUD 0.14 per share, it said in a stock filing. “The placement price of [AU]$0.14 per share represents a 17.6 percent discount to the last trading price of [AU]$0.17 on 11 December 2025 and an 18.5 percent discount to the 15-trading-day volume weighted average price of [AU]$0.1719”, 3D Energi said. “Placement shares will be listed on the ASX [Australian Securities Exchange] and rank pari-passu with the existing fully paid ordinary shares”. It expects to settle the placement December 23, while placement options are subject to shareholder approval at a general meeting that 3D Energi expects to hold late January 2026. 3D Energi said it would issue one free attaching option for every one new share issued under the placement. “The placement options are exercisable at [AU]$0.21 each, with an expiry date of two years from the date of issue”, 3D Energi said. “It is intended that the placement options will be listed, and an application will be made to the ASX for quotation of the options, subject to shareholder approval and meeting the ASX requirements for quotation of the options”. “The placement was strongly supported by a number of new domestic and international institutional investors, as well as existing shareholders”, 3D Energi said. “Proceeds from the placement will be applied towards testing at the Essington-1 well, drilling the Charlemont-1 gas exploration well within the VIC/P79 exploration permit, the second well of the 2025 Otway Exploration Drilling Program, and for general working capital purposes, including costs of the placement”, it said. On Wednesday 3D Energi said the
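
The placement arithmetic can be verified from the filing's own numbers. Note the share count is stated only as "nearly 104 million", so the proceeds figure lands slightly above the headline AUD 14.5 million, and the VWAP discount rounds to 18.6 percent where the filing says 18.5.

```python
# Verifying the placement arithmetic from the filing's own numbers.
shares = 104_000_000           # "nearly 104 million shares" (approximate)
price = 0.14                   # placement price, AUD
last_close = 0.17              # last trading price, AUD
vwap_15d = 0.1719              # 15-trading-day VWAP, AUD

print(f"Gross proceeds ≈ AUD {shares * price / 1e6:.1f}M")   # ~14.6M
print(f"Discount to close: {1 - price / last_close:.1%}")    # 17.6%
print(f"Discount to VWAP:  {1 - price / vwap_15d:.1%}")      # 18.6%
```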

Read More »

National Grid, Con Edison urge FERC to adopt gas pipeline reliability requirements

The Federal Energy Regulatory Commission should adopt reliability-related requirements for gas pipeline operators to ensure fuel supplies during cold weather, according to National Grid USA and affiliated utilities Consolidated Edison Co. of New York and Orange and Rockland Utilities. In the wake of power outages in the Southeast and the near collapse of New York City’s gas system during Winter Storm Elliott in December 2022, voluntary efforts to bolster gas pipeline reliability are inadequate, the utilities said in two separate filings on Friday at FERC. The filings were in response to a gas-electric coordination meeting held in November by the Federal-State Current Issues Collaborative between FERC and the National Association of Regulatory Utility Commissioners. National Grid called for FERC to use its authority under the Natural Gas Act to require pipeline reliability reporting, coupled with enforcement mechanisms, and pipeline tariff reforms. “Such data reporting would enable the commission to gain a clearer picture into pipeline reliability and identify any problematic trends in the quality of pipeline service,” National Grid said. “At that point, the commission could consider using its ratemaking, audit, and civil penalty authority preemptively to address such identified concerns before they result in service curtailments.” On pipeline tariff reforms, FERC should develop tougher provisions for force majeure events — an unforeseen occurrence that prevents a contract from being fulfilled — reservation charge crediting, operational flow orders, scheduling and confirmation enhancements, improved real-time coordination, and limits on changes to nomination rankings, National Grid said. FERC should support efforts in New England and New York to create financial incentives for gas-fired generators to enter into winter contracts for imported liquefied natural gas supplies, or other long-term firm contracts with suppliers and pipelines, National Grid said. Con Edison and O&R said they were encouraged by recent efforts such as North American Energy Standard

Read More »

US BOEM Seeks Feedback on Potential Wind Leasing Offshore Guam

The United States Bureau of Ocean Energy Management (BOEM) on Monday issued a Call for Information and Nominations to help it decide on potential leasing areas for wind energy development offshore Guam. The call concerns a contiguous area around the island that comprises about 2.1 million acres. The area’s water depths range from 350 meters (1,148.29 feet) to 2,200 meters (7,217.85 feet), according to a statement on BOEM’s website. Closing April 7, the comment period seeks “relevant information on site conditions, marine resources, and ocean uses near or within the call area”, the BOEM said. “Concurrently, wind energy companies can nominate specific areas they would like to see offered for leasing. “During the call comment period, BOEM will engage with Indigenous Peoples, stakeholder organizations, ocean users, federal agencies, the government of Guam, and other parties to identify conflicts early in the process as BOEM seeks to identify areas where offshore wind development would have the least impact”. The next step would be the identification of specific WEAs, or wind energy areas, in the larger call area. BOEM would then conduct environmental reviews of the WEAs in consultation with different stakeholders. “After completing its environmental reviews and consultations, BOEM may propose one or more competitive lease sales for areas within the WEAs”, the Department of the Interior (DOI) sub-agency said. BOEM Director Elizabeth Klein said, “Responsible offshore wind development off Guam’s coast offers a vital opportunity to expand clean energy, cut carbon emissions, and reduce energy costs for Guam residents”. Late last year the DOI announced the approval of the 2.4-gigawatt (GW) SouthCoast Wind Project, raising the total capacity of federally approved offshore wind power projects to over 19 GW. The project owned by a joint venture between EDP Renewables and ENGIE received a positive Record of Decision, the DOI said in

Read More »

Biden Bars Offshore Oil Drilling in USA Atlantic and Pacific

President Joe Biden is indefinitely blocking offshore oil and gas development in more than 625 million acres of US coastal waters, warning that drilling there is simply “not worth the risks” and “unnecessary” to meet the nation’s energy needs.  Biden’s move is enshrined in a pair of presidential memoranda being issued Monday, burnishing his legacy on conservation and fighting climate change just two weeks before President-elect Donald Trump takes office. Yet unlike other actions Biden has taken to constrain fossil fuel development, this one could be harder for Trump to unwind, since it’s rooted in a 72-year-old provision of federal law that empowers presidents to withdraw US waters from oil and gas leasing without explicitly authorizing revocations.  Biden is ruling out future oil and gas leasing along the US East and West Coasts, the eastern Gulf of Mexico and a sliver of the Northern Bering Sea, an area teeming with seabirds, marine mammals, fish and other wildlife that indigenous people have depended on for millennia. The action doesn’t affect energy development under existing offshore leases, and it won’t prevent the sale of more drilling rights in Alaska’s gas-rich Cook Inlet or the central and western Gulf of Mexico, which together provide about 14% of US oil and gas production.  The president cast the move as achieving a careful balance between conservation and energy security. “It is clear to me that the relatively minimal fossil fuel potential in the areas I am withdrawing do not justify the environmental, public health and economic risks that would come from new leasing and drilling,” Biden said. “We do not need to choose between protecting the environment and growing our economy, or between keeping our ocean healthy, our coastlines resilient and the food they produce secure — and keeping energy prices low.” Some of the areas Biden is protecting

Read More »

Biden Admin Finalizes Hydrogen Tax Credit Favoring Cleaner Production

The Biden administration has finalized rules for a tax incentive promoting hydrogen production using renewable power, with lower credits for processes using abated natural gas. The Clean Hydrogen Production Credit is based on carbon intensity, which must not exceed four kilograms of carbon dioxide equivalent per kilogram of hydrogen produced. Qualified facilities are those whose start of construction falls before 2033. These facilities can claim credits for 10 years of production starting on the date they are placed in service, according to the draft text on the Federal Register’s portal. The final text is scheduled for publication Friday. Established by the 2022 Inflation Reduction Act, the four-tier scheme gives producers that meet wage and apprenticeship requirements a credit of up to $3 per kilogram of “qualified clean hydrogen”, to be adjusted for inflation. Hydrogen whose production process generates higher lifecycle emissions earns a smaller credit. The scheme will use the Energy Department’s Greenhouse Gases, Regulated Emissions and Energy Use in Transportation (GREET) model in tiering production processes for credit computation. “In the coming weeks, the Department of Energy will release an updated version of the 45VH2-GREET model that producers will use to calculate the section 45V tax credit”, the Treasury Department said in a statement announcing the finalization of rules, a process that it said had considered roughly 30,000 public comments. However, producers may use the GREET model that was the most recent when their facility began construction. “This is in consideration of comments that the prospect of potential changes to the model over time reduces investment certainty”, explained the statement on the Treasury’s website. “Calculation of the lifecycle GHG analysis for the tax credit requires consideration of direct and significant indirect emissions”, the statement said. For electrolytic hydrogen, electrolyzers covered by the scheme include not only those using renewables-derived electricity (green hydrogen) but
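
A sketch of how the four-tier computation works. The article gives only the 4 kg cap and the $3 maximum; the internal tier boundaries and percentages below are the statutory section 45V values from the Inflation Reduction Act, quoted from memory, so verify them against the final rule before relying on them.

```python
# Four-tier 45V credit sketch. The 4 kg CO2e/kg cap and $3/kg maximum are
# from the article; the internal tier boundaries and percentages are the
# statutory section 45V values (quoted from memory -- verify against the
# final rule). Assumes wage and apprenticeship requirements are met.
def clean_hydrogen_credit(ci: float) -> float:
    """Return the 45V credit in $/kg for a carbon intensity in kg CO2e/kg H2."""
    base = 3.00  # maximum credit, $/kg, before inflation adjustment
    if ci < 0.45:
        return base * 1.00
    if ci < 1.5:
        return base * 0.334
    if ci < 2.5:
        return base * 0.25
    if ci <= 4.0:
        return base * 0.20
    return 0.0  # above 4 kg CO2e/kg H2, the hydrogen does not qualify

for ci in (0.3, 1.0, 2.0, 3.5, 5.0):
    print(f"CI {ci} kg/kg -> ${clean_hydrogen_credit(ci):.2f}/kg")
```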

Read More »

Xthings unveils Ulticam home security cameras powered by edge AI

Xthings announced that its Ulticam security camera brand has a new model out today: the Ulticam IQ Floodlight, an edge AI-powered home security camera. The company also plans to showcase two additional cameras, Ulticam IQ, an outdoor spotlight camera, and Ulticam Dot, a portable, wireless security camera. All three cameras offer free cloud storage (seven days rolling) and subscription-free edge AI-powered person detection and alerts. Because the AI runs at the edge, the camera doesn’t have to send video to an internet-connected data center to figure out what is in front of it; the processing is built into the camera itself, which the company says sets a new standard for value and performance in home security cameras. It can identify people, faces and vehicles. CES 2025 attendees can experience Ulticam’s entire lineup at Pepcom’s Digital Experience event on January 6, 2025, and at the Venetian Expo, Halls A-D, booth #51732, from January 7 to January 10, 2025. These new security cameras will be available for purchase online in the U.S. in Q1 and Q2 2025 at U-tec.com, Amazon, and Best Buy.

The Ulticam IQ Series: smart edge AI-powered home security cameras

[Image: Ulticam IQ home security camera]

The Ulticam IQ Series, which includes IQ and IQ Floodlight, takes home security to the next level with the most advanced AI-powered recognition. Among the very first consumer cameras to use edge AI, the IQ Series can quickly and accurately identify people, faces and vehicles, without uploading video for server-side processing, which improves speed, accuracy, security and privacy. Additionally, the Ulticam IQ Series is designed to improve over time with over-the-air updates that enable new AI features. Both cameras

Read More »

Intel unveils new Core Ultra processors with 2X to 3X performance on AI apps

Intel unveiled new Intel Core Ultra 9 processors today at CES 2025 with as much as two or three times the edge performance on AI apps as before. The chips under the Intel Core Ultra 9 and Core i9 labels were previously codenamed Arrow Lake H, Meteor Lake H, Arrow Lake S and Raptor Lake S Refresh. Intel said it is pushing the boundaries of AI performance and power efficiency for businesses and consumers, ushering in the next era of AI computing. In other performance metrics, Intel said the Core Ultra 9 processors are up to 5.8 times faster in media performance, 3.4 times faster in video analytics end-to-end workloads with media and AI, and 8.2 times better in terms of performance per watt than prior chips. Intel hopes to kick off the year better than in 2024. CEO Pat Gelsinger resigned last month without a permanent successor after a variety of struggles, including mass layoffs, manufacturing delays and poor execution on chips, including gaming bugs in chips launched during the summer.

Intel Core Ultra Series 2

Michael Masci, vice president of product management at the Edge Computing Group at Intel, said in a briefing that AI, once the domain of research labs, is integrating into every aspect of our lives, including AI PCs where the AI processing is done in the computer itself, not the cloud. AI is also being processed in data centers in big enterprises, from retail stores to hospital rooms. “As CES kicks off, it’s clear we are witnessing a transformative moment,” he said. “Artificial intelligence is moving at an unprecedented pace.” The new processors include the Intel Core 9 Ultra 200 H/U/S models, with up to

Read More »

Quantum navigation could solve the military’s GPS jamming problem

In late September, a Spanish military plane carrying the country’s defense minister to a base in Lithuania was reportedly the subject of a kind of attack—not by a rocket or anti-aircraft rounds, but by radio transmissions that jammed its GPS system.  The flight landed safely, but it was one of thousands that have been affected by a far-reaching Russian campaign of GPS interference since the 2022 invasion of Ukraine. The growing inconvenience to air traffic and risk of a real disaster have highlighted the vulnerability of GPS and focused attention on more secure ways for planes to navigate the gauntlet of jamming and spoofing, the term for tricking a GPS receiver into thinking it’s somewhere else.  US military contractors are rolling out new GPS satellites that use stronger, cleverer signals, and engineers are working on providing better navigation information based on other sources, like cellular transmissions and visual data.  But another approach that’s emerging from labs is quantum navigation: exploiting the quantum nature of light and atoms to build ultra-sensitive sensors that can allow vehicles to navigate independently, without depending on satellites. As GPS interference becomes more of a problem, research on quantum navigation is leaping ahead, with many researchers and companies now rushing to test new devices and techniques. In recent months, the US’s Defense Advanced Research Projects Agency (DARPA) and its Defense Innovation Unit have announced new grants to test the technology on military vehicles and prepare for operational deployment. 
Tracking changes

Perhaps the most obvious way to navigate is to know where you started and then track where you go by recording the speed, direction, and duration of travel. But while this approach, known in the field as inertial navigation, is conceptually simple, it’s difficult to do well; tiny uncertainties in any of those measurements compound over time and lead to big errors later on. Douglas Paul, the principal investigator of the UK’s Hub for Quantum Enabled Precision, Navigation & Timing (QEPNT), says that existing specialized inertial-navigation devices might be off by 20 kilometers after 100 hours of travel. Meanwhile, the cheap sensors commonly used in smartphones produce more than twice that level of uncertainty after just one hour.
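
A toy calculation makes the compounding Paul describes concrete: double-integrating an accelerometer reading that carries even a tiny constant bias produces a position error that grows quadratically with time. The bias value below is an arbitrary illustration, not the specification of any device mentioned here, and real systems do far better through calibration and aiding.

```python
# Toy dead-reckoning sketch: a constant accelerometer bias, integrated
# twice, gives position error ~ 0.5 * bias * t**2. The bias value is an
# arbitrary illustration, not the spec of any sensor discussed above.
bias = 1e-4  # m/s^2, hypothetical uncorrected accelerometer bias

for hours in (1, 10, 100):
    t = hours * 3600.0                   # seconds of travel
    error_km = 0.5 * bias * t**2 / 1e3   # double integration of the bias
    print(f"after {hours:>3} h: position error ≈ {error_km:,.1f} km")
# Even this tiny bias ruins long missions, which is why inertial data is
# usually fused with absolute fixes from other sources.
```
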
“If you’re guiding a missile that flies for one minute, that might be good enough,” he says. “If you’re in an airliner, that’s definitely not good enough.” A more accurate version of inertial navigation instead uses sensors that rely on the quantum behavior of subatomic particles to more accurately measure acceleration, direction, and time. Several companies, like the US-based Infleqtion, are developing quantum gyroscopes, which track a vehicle’s bearing, and quantum accelerometers, which can reveal how far it’s traveled. Infleqtion’s sensors are based on a technique called atom interferometry: A beam of rubidium atoms is zapped with precise laser pulses, which split the atoms into two separate paths. Later, other laser pulses recombine the atoms, and they’re measured with a detector. If the vehicle has turned or accelerated while the atoms are in motion, the two paths will be slightly out of phase in a way the detector can interpret. Last year the company trialed these inertial sensors on a customized plane flying at a British military testing site. In October of this year, Infleqtion ran its first real-world test of a new generation of inertial sensors that use a steady stream of atoms instead of pulses, allowing for continuous navigation and avoiding long dead times.

[Image: A view of Infleqtion’s atomic clock Tiqker. Courtesy Infleqtion]

Infleqtion also has an atomic clock, called Tiqker, that can help determine how far a vehicle has traveled. It is a kind of optical clock that uses infrared lasers tuned to a specific frequency to excite electrons in rubidium, which then release photons at a consistent, known rate. The device “will lose one second every 2 million years or so,” says Max Perez, who oversees the project, and it fits in a standard electronics equipment rack. It has passed tests on flights in the UK, on US Army ground vehicles in New Mexico, and, in late October, on a drone submarine. “Tiqker operated happily through these conditions, which is unheard-of for previous generations of optical clocks,” says Perez. Eventually the company hopes to make the unit smaller and more rugged by switching to lasers generated by microchips.

Magnetic fields

Vehicles deprived of satellite-based navigation are not entirely on their own; they can get useful clues from magnetic and gravitational fields that surround the planet. These fields vary slightly depending on the location, and the variations, or anomalies, are recorded in various maps. By precisely measuring the local magnetic or gravitational field and comparing those values with anomaly maps, quantum navigation systems can track the location of a vehicle.
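
A minimal sketch of the map-matching idea just described: slide a short window of measured field values along a stored one-dimensional anomaly map and take the best-matching offset as the position estimate. The map and measurements below are synthetic; real systems match in two dimensions and use statistical filters rather than a brute-force scan.

```python
# Minimal 1-D magnetic map-matching sketch. The anomaly map and the
# measured profile are synthetic; real systems match in 2-D and use
# statistical filters rather than a brute-force correlation scan.
import numpy as np

rng = np.random.default_rng(0)
anomaly_map = rng.normal(0.0, 50.0, 2000)   # stored map, nT per grid cell

true_pos = 1234                             # where the vehicle really is
window = 64                                 # length of the measured run
measured = anomaly_map[true_pos:true_pos + window] + rng.normal(0, 5, window)

# Score every candidate offset by mean squared mismatch with the map.
scores = [
    np.mean((anomaly_map[i:i + window] - measured) ** 2)
    for i in range(len(anomaly_map) - window)
]
estimate = int(np.argmin(scores))
print(f"true position {true_pos}, estimated {estimate}")  # should match
```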

Allison Kealy, a navigation researcher at Swinburne University in Australia, is working on the hardware needed for this approach. Her team uses a material called nitrogen-vacancy diamond. In NV diamonds, one carbon atom in the lattice is replaced with a nitrogen atom, and one neighboring carbon atom is removed entirely. The quantum state of the electrons at the NV defect is very sensitive to magnetic fields. Carefully stimulating the electrons and watching the light they emit offers a way to precisely measure the strength of the field at the diamond’s location, making it possible to infer where it’s situated on the globe. Kealy says these quantum magnetometers have a few big advantages over traditional ones, including the fact that they measure the direction of the Earth’s magnetic field in addition to its strength. That additional information could make it easier to determine location. The technology is far from commercial deployment, but Kealy and several colleagues successfully tested their magnetometer in a set of flights in Australia late last year, and they plan to run more trials this year and next. “This is where it gets exciting, as we transition from theoretical models and controlled experiments to on-the-ground, operational systems,” she says. “This is a major step forward.”

Delicate systems

Other teams, like Q-CTRL, an Australian quantum technology company, are focusing on using software to build robust systems from noisy quantum sensors. Quantum navigation involves taking those delicate sensors, honed in the placid conditions of a laboratory, and putting them in vehicles that make sharp turns, bounce with turbulence, and bob with waves, all of which interferes with the sensors’ functioning. Even the vehicles themselves present problems for magnetometers, especially “the fact that the airplane is made of metal, with all this wiring,” says Michael Biercuk, the CEO of Q-CTRL. “Usually there’s 100 to 1,000 times more noise than signal.” After Q-CTRL engineers ran trials of their magnetic navigation system in a specially outfitted Cessna last year, they used machine learning to go through the data and try to sift out the signal from all the noise. Eventually they found they could track the plane’s location up to 94 times as accurately as a strategic-grade conventional inertial navigation system could, according to Biercuk. They announced their findings in a non-peer-reviewed paper last spring. In August Q-CTRL received two contracts from DARPA to develop its “software-ruggedized” mag-nav product, named Ironstone Opal, for defense applications. The company is also testing the technology with commercial partners, including the defense contractors Northrop Grumman and Lockheed Martin, as well as the aerospace manufacturer Airbus.

[Image: An illustration showing the placement of Q-CTRL’s Ironstone Opal in a drone. Courtesy Q-CTRL]

“Northrop Grumman is working with Q-CTRL to develop a magnetic navigation system that can withstand the physical demands of the real world,” says Michael S. Larsen, a quantum systems architect at the company. “Technology like magnetic navigation and other quantum sensors will unlock capabilities to provide guidance even in GPS-denied or -degraded environments.”
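
Q-CTRL's actual pipeline uses machine learning, but a crude baseline shows why Biercuk's "100 to 1,000 times more noise than signal" is survivable at all: the platform noise is broadband while the anomaly signal changes slowly, so even simple low-pass filtering recovers much of it. All signals in this sketch are synthetic.

```python
# Why heavy broadband noise over a slow anomaly signal is survivable:
# even a crude moving-average filter recovers much of the signal.
# Q-CTRL's real pipeline uses machine learning; this is only a baseline
# on synthetic data, not a reconstruction of their method.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 600, 6000)                 # 10 minutes of samples
signal = 40 * np.sin(2 * np.pi * t / 120)     # slow anomaly trace, nT
noise = rng.normal(0, 4000, t.size)           # ~100x platform noise
measured = signal + noise

kernel = np.ones(500) / 500                   # moving-average low-pass
recovered = np.convolve(measured, kernel, mode="same")

err_raw = np.std(measured - signal)
err_rec = np.std(recovered - signal)
print(f"residual noise: raw {err_raw:.0f} nT -> filtered {err_rec:.0f} nT")
```
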
Now Q-CTRL is working on putting Ironstone Opal into a smaller, more rugged container appropriate for deployment; currently, “it looks like a science experiment because it is a science experiment,” says Biercuk. He anticipates delivering the first commercial units next year.

Sensor fusion
Even as quantum navigation emerges as a legitimate alternative to satellite-based navigation, the satellites themselves are improving. Modern GPS III satellites include new civilian signals called L1C and L5, which should be more accurate and harder to jam and spoof than current signals. Both are scheduled to be fully operational later this decade.

US and allied military users are also slated to get far hardier GPS tools, including M-code, a new form of GPS signal that is rolling out now, and Regional Military Protection, a focused GPS beam that will be restricted to small geographic areas. The latter will start to become available when the GPS IIIF generation of satellites is in orbit, with the first scheduled to go up in 2027. A Lockheed Martin spokesperson says new GPS satellites with M-code are eight times as powerful as previous ones, while the GPS IIIF model will be 60 times as strong.

Other plans involve using navigation satellites in low Earth orbit—the zone inhabited by SpaceX’s internet-providing Starlink constellation—rather than the medium Earth orbit used by GPS. Since objects in LEO are closer to Earth, their signals are stronger, which makes them harder to jam and spoof. (Received power falls with the square of distance: a satellite at Starlink-like altitudes of around 550 kilometers delivers, all else being equal, a signal on the order of a thousand times stronger than one from GPS’s roughly 20,000-kilometer orbit.) LEO satellites also transit the sky more quickly, which makes them harder still to spoof and helps GPS receivers get a lock on their position faster. “This really helps for signal convergence,” says Lotfi Massarweh, a satellite navigation researcher at Delft University of Technology, in the Netherlands. “They can get a good position in just a few minutes. So that is a huge leap.”

Ultimately, says Massarweh, navigation will depend not only on satellites, quantum sensors, or any other single technology, but on the combination of all of them. “You need to think always in terms of sensor fusion,” he says.

The navigation resources that a vehicle draws on will change according to its environment—whether it’s an airliner, a submarine, or an autonomous car in an urban canyon. But quantum navigation will be one important resource. He says, “If quantum technology really delivers what we see in the literature—if it’s stable over one week rather than tens of minutes—at that point it is a complete game changer.”
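A textbook way to realize the sensor fusion Massarweh describes is a Kalman filter, which dead-reckons on inertial data and folds in absolute position fixes (from GPS, mag-nav, or anything else) whenever they arrive. The one-dimensional sketch below is purely illustrative; real navigation filters track full three-dimensional state and model each sensor's error behavior in detail.

```python
import numpy as np

def fuse(accels, fixes, dt=1.0, accel_var=0.5, fix_var=25.0):
    """Fuse inertial accelerations with occasional position fixes.
    fixes[i] is a measured position, or None when no fix arrived."""
    x = np.zeros(2)                          # state: [position, velocity]
    P = np.eye(2) * 100.0                    # state uncertainty (covariance)
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics
    B = np.array([0.5 * dt**2, dt])          # how acceleration enters the state
    H = np.array([[1.0, 0.0]])               # a fix observes position only
    track = []
    for a, z in zip(accels, fixes):
        # Predict: dead-reckon forward on the inertial measurement alone.
        x = F @ x + B * a
        P = F @ P @ F.T + np.outer(B, B) * accel_var
        # Correct: fold in an absolute fix whenever one is available.
        if z is not None:
            S = H @ P @ H.T + fix_var        # innovation variance
            K = P @ H.T / S                  # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return track

# Example: gentle constant acceleration, with a noisy fix every 10 steps.
accels = [0.1] * 50
fixes = [0.5 * 0.1 * float(i) ** 2 + np.random.normal(0, 5)
         if i % 10 == 0 else None for i in range(50)]
print(fuse(accels, fixes)[-1])
```

The appeal of this structure is that every new source, whether a satellite signal or a quantum magnetometer, simply becomes another correction step, weighted by how much it can be trusted at that moment.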

Read More »

The fast and the future-focused are revolutionizing motorsport

In partnership with Infosys

When the ABB FIA Formula E World Championship launched its first race through Beijing’s Olympic Park in 2014, the idea of all-electric motorsport still bordered on experimental. Batteries couldn’t yet last a full race, and drivers had to switch cars mid-competition. Just over a decade later, Formula E has evolved into a global entertainment brand broadcast in 150 countries, driving both technological innovation and cultural change in sport.

“Gen4, that’s to come next year,” says Dan Cherowbrier, Formula E’s chief technology and information officer. “You will see a really quite impressive car that starts us to question whether EV is there. It’s actually faster—it’s actually more than traditional [internal combustion engine] ICE.”

That acceleration isn’t just happening on the track. Formula E’s digital transformation, powered by its partnership with Infosys, is redefining what it means to be a fan. “It’s a movement to make motor sport accessible and exciting for the new generation,” says Rohit Agnihotri, principal technologist at Infosys.

From real-time leaderboards and predictive tools to personalized storylines that adapt to what individual fans care most about—whether it’s a driver rivalry or battery performance—Formula E and Infosys are using AI-powered platforms to create fan experiences as dynamic as the races themselves. “Technology is not just about meeting expectations; it’s elevating the entire fan experience and making the sport more inclusive,” says Agnihotri.
AI is also transforming how the organization itself operates. “Historically, we would be going around the company, banging on everyone’s doors and dragging them towards technology, making them use systems, making them move things to the cloud,” Cherowbrier notes. “What AI has done is it’s turned that around on its head, and we now have people turning up, banging on our door because they want to use this tool, they want to use that tool.”

As audiences diversify and expectations evolve, Formula E is also a case study in sustainable innovation. Machine learning tools now help determine the most carbon-optimal way to ship batteries across continents, while remote broadcast production has sharply reduced travel emissions and diversified the company’s workforce. These advances show how digital intelligence can expand reach without deepening carbon footprints.
For Cherowbrier, this convergence of sport, sustainability, and technology is just the beginning. With its data-driven approach to performance, experience, and impact, Formula E is offering a glimpse into how entertainment, innovation, and environmental responsibility can move forward in tandem.

“Our goal is clear,” says Agnihotri. “Help Formula E be the most digital and sustainable motor sport in the world. The future is electric, and with AI, it’s more engaging than ever.”

This episode of Business Lab is produced in partnership with Infosys.

Full Transcript:

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab, and into the marketplace.

The ABB FIA Formula E World Championship, the world’s first all-electric racing series, made its debut in the grounds of the Olympic Park in Beijing in 2014. A little more than 10 years later, it’s a global entertainment brand with 10 teams, 20 drivers, and broadcasts in 150 countries. Technology is central to how Formula E is navigating that scale and to how it’s delivering more powerful personalized experiences.

Two words for you: elevated fandom.

My guests today are Rohit Agnihotri, principal technologist at Infosys, and Dan Cherowbrier, CTIO of Formula E.

This episode is produced in partnership with Infosys.

Welcome, Rohit and Dan.

Dan Cherowbrier: Hi. Thanks for having us.

Megan: Dan, as I mentioned there, the first season of the ABB FIA Formula E World Championship launched in 2014. Can you talk us through how the first all-electric motor sport has evolved in the last decade? How has it changed in terms of its scale, the markets it operates in, and also, its audiences, of course?

Dan: When Formula E launched back in 2014, there were hardly any domestic EVs on the road. And probably if you’re from London, the ones you remember are the hybrid Priuses; that was what we knew of really. And at the time, they were unable to get a battery big enough for a car to do a full race. So the first generation of car, the first couple of seasons, the driver had to do a pit stop midway through the race, get out of one car, and get in another car, and then carry on, which sounds almost farcical now, but it’s what you had to do then to drive innovation, is to do that in order to go to the next stage.

Then in Gen2, that came up four years later, they had a battery big enough to start full races and start to actually make it a really good sport. Gen3, they’re going for some real speeds and making it happen. Gen4, that’s to come next year, you’ll see acceleration in line with Formula One. I’ve been fortunate enough to see some of the testing. You will see a really quite impressive car that starts us to question whether EV is there. It’s actually faster, it’s actually more than traditional ICE.

That’s the tech of the car. But then, if you also look at the sport and how people have come to it and the fans and the demographic of the fans, a lot has changed in the last 11 years. We’re about to enter season 12. In the last 11 years, we’ve had a complete democratization of how people access content and what people want from content. And there’s a new generation of fan coming through. This new generation of fan is younger. They’re more gender diverse. We have much closer to 50-50 representation in our fan base. And they want things personalized, and they’re very demanding about how they want it and the experience they expect. No longer are you just able to give them one race and everybody watches the same thing. We need to make things for them. You see that sort of change that’s come through in the last 11 years.

Megan: It’s a huge amount of change in just over a decade, isn’t it? To navigate. And I wonder, Rohit, what was the strategic plan for Infosys when associating with Formula E? What did Infosys see in partnering with such a young sport?
Rohit: Yeah. That’s a great question, Megan. When we looked at Formula E, we didn’t just see a racing championship. We saw the future. A sport that’s electric, sustainable, and digital first. That’s exactly where Infosys wants to be, at the intersection of technology, innovation, and purpose.

Our plan has three big goals. First, grow the fan base. Formula E wants to reach 500 million fans by 2030. That is not just a number. It’s a movement to make motor sport accessible and exciting for the new generation. To make that happen, we are building an AI-powered platform that gives personalized content to the fans, so that every fan feels connected and valued. Imagine a fan in Tokyo getting race insights tailored for their favorite driver, while another in London gets a sustainability story that matters to them. That’s the level of personalization we are aiming for.

Second, bringing technology innovation. We have already launched the Stats Centre, which turns race data into interactive stories. And soon, Race Centre will take this to the next level with real-time leaderboards, race track maps, overtakes, attack mode timelines, and even AI-generated live commentary. Fans will not just watch, they will interact, predict podium finishes, and share their views globally.

And third, support sustainability. Formula E is already net-zero, but now their goal is to cut carbon by 45% by 2030. We’ll be enabling that through AI-driven sustainability data management, tracking every watt of energy and every logistics decision, and modeling scenarios to make racing even greener.

Partnering with a young sport gives us a chance to shape its digital future and show how technology can make racing exciting and responsible. For us, Formula E is not just a sport, it’s a statement about where the world is headed.
Megan: Fantastic. 500 million fans, that’s a huge number, isn’t it? And with more scale often comes a kind of greater expectation. Dan, I know you touched on this a little in your first question, but what is it that your fans now really want from their interactions? Can you talk a bit more about what experiences they’re looking for? And also, how complex that really is to deliver that as well?

Dan: I think a really telling thing about the modern-day fan is I probably can’t tell you what they want from their experiences, because it’s individual and it’s unique for each of them.

Megan: Of course.

Dan: And it’s changing and it’s changing so fast. What somebody wants this month is going to be different from what they want in a couple of months’ time. And we’re having to learn to adapt to that. My CTO title, we often put focus on the technology in the middle of it. That’s what the T is. Actually, if you think about it, it’s continual transformation officer. You are constantly trying to change what you deliver and how you deliver it. Because if fans come through, they find new experiences, they find that in other sports. Sometimes not in sports, they find it outside, and then they’re coming in, and they expect that from you. So how can we make them more part of the sport, more personalized experience, get to know the athletes and the personalities and the characters within it? We’re a very technology-centric sport. A lot of motor sport is, but really, people want to see people, right? And even when it’s technology, they want to see people interacting with technology, and it’s how do you get that out to show people.

Megan: Yeah, it’s no mean feat. Rohit, you’ve worked with brands on delivering these sort of fan experiences across different sports. Is motor sports perhaps more complicated than others, given that fans watch racing for different reasons than just a win? They could be focused on team dynamics, a particular driver, the way the engine is built, and so on and so forth. How does motor sports compare and how important is it therefore, that Formula E has embraced technology to manage expectations?

Rohit: Yeah, that’s an interesting point. Motor sports are definitely more complex than other sports. Fans don’t just care about who wins, they care about how: some follow team strategies, others love driver rivalries, and many are fascinated by the car technology. Formula E adds another layer, sustainability and electric innovation. This makes personalization really important. Fans want more than results. They want stories and insights. Formula E understood this early and embraced technology.
Think about the data behind a single race: lap times, energy usage, battery performance, attack mode activation, pit strategies. It’s a lot of data. If you just show the raw numbers, it’s overwhelming. But with Infosys Topaz, we turn that into simple and engaging stories. Fans can see how a driver fought back from 10th place to finish on the podium, or how a team managed energy better to gain an edge. And for new fans, we are adding explainer videos and interactive tools in the Race Centre, so that they can learn about the sport easily. This is important because Formula E is still young, and many fans are discovering it for the first time. Technology is not just about meeting expectations; it’s elevating the entire fan experience and making the sport more inclusive.

Megan: There’s an awful lot going on there. What are some of the other ways that Formula E has already put generative AI and other emerging technologies to use? Dan, when we’ve spoken about the demand for more personalized experiences, for example.

Dan: I see the implementation of AI for us in three areas. We have AI within the sport. That’s in our DNA of the sport. Now, each team is using that, but how can we use that as a championship as well? How do we make it a competitive landscape? Now, we have AI that is in the fan-facing product. That’s what we’re working on heavily with Infosys, but we also have it in our broadcast product. As an example, you might have heard of a super slow-mo camera. A super slow-mo camera is basically, by taking three cameras and having them in exactly the same place so that you get three times the frame rate, and then you can do a slow-motion shot from that. And they used to be really expensive. Quite bulky cameras to put in. We are now using AI to take a traditional camera and interpolate between two frames to make it into a super slow image, and you wouldn’t really know the difference. Now, the joy of that, it means every camera can now be a super slow-mo camera.

Megan: Wow.
Dan: In other ways, we use it a little bit in our graphics products, and we iterate and we use it for things like showing driver audio. When the driver is speaking to his engineer or her engineer in the garage, we show that text now on screen. We do that using AI. We use AI to pick out the difference between the driver and another driver and the team engineer or the team principal and show that in a really good way. And we wouldn’t be able to do that. We’re not big enough to have a team of 24 people on stenographers typing. We have to use AI to be able to do that. That’s what’s really helped us grow.

And then the last one is, how we use it in our business. Because ultimately, as we’ve got the fans, we’ve got the sport, but we also are running a business and we have to pick up these racetracks and move them around the world, and we have all these staff who have to get places. We have insurance who has to do all that kind of stuff, and we use it heavily in that area, particularly when it comes to what has a carbon impact for us.

So things like our freight and our travel. And we are using the AI tools to tell us, a battery for instance, should we fly it? Should we send it by sea freight? Should we send it by road freight? Or should we just have lots of them? And that sort of depends. Now, a battery, if it was heavy, you’d think you probably wouldn’t fly it. But actually, because of the materials in it, because of the source materials that make it, we’re better off flying it. We’ve used AI to work through all those different machinations of things that would be too difficult to do at speed for a person.

Megan: Well, sounds like there’s some fascinating things going on. I mean, of course, for a global brand, there is also the challenge of working in different markets. You mentioned moving everything around the world there. Each market with its own legal frameworks around data privacy, AI. How has technology also helped you navigate all of that, Dan?

Dan: The other really interesting thing about AI is… I’ve worked in technology leadership roles for some time now. And historically, we would be going around the company, banging on everyone’s doors and dragging them towards technology, making them use systems, making them move things to the cloud and things like that. What AI has done is it’s turned that around on its head, and we now have people turning up, banging on our door because they want to use this tool, they want to use that tool. And we’re trying to accommodate all of that and it’s a great pleasure to see people that are so keen. AI is driving the tech adoption in general, which really helps the business.

Megan: Dan, as the world’s first all-electric motor sport series, sustainability is obviously a real cornerstone of what Formula E is looking to do. Can you share with us how technology is helping you to achieve some of your ambitions when it comes to sustainability?

Dan: We’ve been the only sport with a certified net-zero pathway, and we have to stay on that path. It’s a really core fundamental part of our DNA. I sit on our management team here. There is a sustainability VP that sits there as well, who checks and challenges everything we do. She looks at the data centers we use, why we use them, why we’ve made the decisions we’ve made, to make sure that we’re making them all for the right reasons and the right ways. We specifically embed technology in a couple of ways. One is, we mentioned a little bit earlier, on our freight.
Formula E’s freight for the whole championship is probably akin to one Formula One team’s, but it’s still by far our biggest contributor to our impact. So we look at how we can make sure that we’ve refined that to get the minimum amount of air freight and sea freight, and use local wherever we can. That’s also part of our pledge about investing in the communities that we race in.

The second then is about our staff travel. And we’ve done a really big piece of work over the last four to five years, partly accelerated through the covid-19 era actually, of doing remote working and remote TV production. It used to be, traditionally, you would fly a hundred-plus people out to racetracks, and then they would make the television all on site in trucks, and then it would be satellite distributed out of the venue. Now, what we do is we put in some internet connections, dual and diverse internet connections, and we stream every single camera back.

Megan: Right.

Dan: That means on site, we only need camera operators. Some of them actually, are remotely operated anyway, but we need camera operators, and then some engineering teams to just keep everything running. And then back in our home base, which is in London, in the UK, we have our remote production center where we layer on direction, graphics, audio, replay, team radio, all of those bits that bring the color and make the program and add to that significant body of people. We do that all remotely now.

Really interesting, actually: that’s the carbon sustainability story, but there is a further ESG piece that comes out of it that we hadn’t really anticipated when we went into it, and that is the diversity in our workforce. We were discovering that we had quite a young, equally diverse workforce until around the age of 30. And then after that, we were finding we were losing women, and that’s really because they didn’t want to travel.

Megan: Right.

Dan: And that’s the age of people starting to have children, and things were starting to change. And then we had some men that were traveling instead, and they weren’t seeing their children and it was sort of dividing it unnecessarily. But by going remote, by having so much of our people able to work remotely… Or even if they do have to travel, they’re not traveling every single week. They’re now doing that one in three. They’re able to maintain the careers and the jobs they want to do, whilst having a family lifestyle. And it also just makes a better product by having people in that environment.

Megan: That’s such an interesting perspective, isn’t it? It’s a way environmental sustainability intersects with social sustainability. And Rohit, your work is so interesting: can you share any of the ways that Infosys has worked with Formula E, in terms of the role of technology, as we say, in furthering those ambitions around sustainability?

Rohit: Yeah. Infosys understands that sustainability is at the heart of Formula E, and it’s a big part of why this partnership matters. Formula E is already net-zero certified, but now, they have an ambitious goal to cut carbon emissions by 45%. Infosys is helping in two ways. First, we have built AI-powered sustainability data tools that make carbon reporting accurate and traceable. Every watt of energy, every logistics decision, every material used can be tracked.
Second, we use predictive analytics to model scenarios, like how changing race logistics or battery technology impacts emissions, so Formula E can make smarter, greener decisions. For us, it’s about turning sustainability from a report into an action plan, and making Formula E a global leader in green motor sport.

Megan: And in April 2025, Formula E, working with Infosys, launched its Stats Centre, which provides fans with interactive access to the performances of their drivers and teams, key milestones and narratives. I know you touched on this before, but I wonder if you could tell us a bit more about the design of that platform, Rohit, and how it fits into Formula E’s wider plans to personalize that fan experience?

Rohit: Sure. The Stats Centre was a big step forward. Before this, fans had access to basic statistics on the website and the mobile app, but nothing told the full story and we wanted to change that. Built on Infosys Topaz, the Stats Centre uses AI to turn race data into interactive stories. Fans can explore key stat cards that adapt to race timelines, and even chat with an AI companion to get instant answers. It’s like having a personal race analyst at your fingertips.

And we are going further. Next year, we’ll launch Race Centre. It’ll have live data boards, 2D track maps showing every driver’s position, overtakes, attack mode timelines, and AI-generated commentary. Fans can predict podium finishes, vote for the driver of the race, and share their views on social media. Plus, we are adding video explainers for new fans, covering rules, strategies, and car technology. Our goal is simple: make every moment exciting and easy to understand. Whether you are a hardcore fan or someone watching Formula E for the first time, you’ll feel connected and informed.

Megan: Fantastic. Sounds brilliant. And as you’ve explained, Dan, leveraging data and AI can come with these huge benefits when it comes to the depth of fan experience that you can deliver, but it can also expose you to some challenges. How are you navigating those at Formula E?

Dan: The AI generation has presented two significant challenges to us. One is that traditional SEO, traditional search engine optimization, goes out the window. Right? You are now looking at how do we design and build our systems and how do we populate them with the right content and the right data, so that the engines are picking it up correctly and displaying it? The way that the foundational models are built and the speed and the cadence at which they’re updated, means quite often… We’re a very fast-changing organization. We’re a fast-changing product. Often, the models don’t keep up. And that’s because they are a point in time when they were trained. And that’s something that the big organizations, the big tech organizations will fix with time. But for now, what we have to do is we have to learn about how we can present our fan-facing, web-facing products to show that correctly. That’s all about having really accurate first-party content, effectively earned media. That’s the piece we need to do.

Then the second challenge is sadly, whilst these tools are available to all of us, and we are using them effectively, so is another part of the technology landscape, and that is basically the cybersecurity threat they come with. If you look at the speed, cadence, and severity of hacks that are happening now, it’s just growing and growing and growing, and that’s because they have access to these tools too.
And we’re having to really up our game and professionalize. And that’s really hard for an innovative organization. You don’t want to shut everything down. You don’t want to protect everything too much because you want people to be able to try new things. Right? If I block everything to only things that the IT team had heard of, we’d never get anything new in, and it’s about getting that balance right.

Megan: Right.

Dan: Rohit, you probably have similar experiences?

Megan: How has Infosys worked with Formula E to help it navigate some of that, Rohit?

Rohit: Yeah. Infosys has helped Formula E tackle some of the challenges in three key ways: simplifying complex race data into engaging fan experiences through platforms like the Stats Centre, building a secure and scalable cloud data backbone for real-time insights, and enabling sustainability goals with AI-driven carbon tracking and predictive analytics. These solutions make the sport more interactive, more digital, and more responsible.

Megan: Fantastic. I wondered if we could close with a bit of a future-forward look. Can you share with us any innovations on the horizon at Formula E that you are really excited about, Dan?

Dan: We have mentioned the Race Centre is going to launch in the next couple of months, but the really exciting thing for me is we’ve got an amazing season ahead of us. It’s the last season of our Gen3 car, with 10 really exciting teams on the grid. We are going at speed with our tech innovation roadmap and what our fans want. And we’re building up towards our Gen4 car, which will come out for season 13 in a year’s time. That will get launched in 2026, and I think it will be a game changer in how people perceive electric motor sport and electric cars in general.

Megan: It sounds like there’s all sorts of exciting things going on. And Rohit too, what’s coming up via this partnership that you are really looking forward to sharing with everyone?

Rohit: Two things stand out for me. First is the AI-powered fan data platform that I’ve already spoken about. Second is the launch of Race Centre. It’s going to change how fans experience live racing. And beyond fan engagement, we are helping Formula E lead in sustainability with AI tools that model carbon impact and optimize logistics. This means every race can be smarter and greener. Our goal is clear: help Formula E be the most digital and sustainable motor sport in the world. The future is electric, and with AI, it’s more engaging than ever.

Megan: Fantastic. Thank you so much, both. That was Rohit Agnihotri, principal technologist at Infosys, and Dan Cherowbrier, CTIO of Formula E, whom I spoke with from Brighton, England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review and this episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

The Download: introducing the AI Hype Correction package

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: the AI Hype Correction package

AI is going to reproduce human intelligence. AI will eliminate disease. AI is the single biggest, most important invention in human history. You’ve likely heard it all—but probably none of these things are true.

AI is changing our world, but we don’t yet know the real winners, or how this will all shake out. After a few years of out-of-control hype, people are now starting to re-calibrate what AI is, what it can do, and how we should think about its ultimate impact.

Here, at the end of 2025, we’re starting the post-hype phase. This new package of stories, called Hype Correction, is a way to reset expectations—a critical look at where we are, what AI makes possible, and where we go next. Here’s a sneak peek at what you can expect:

+ An introduction to four ways of thinking about the great AI hype correction of 2025.
+ While it’s safe to say we’re definitely in an AI bubble right now, what’s less clear is what it really looks like—and what comes after it pops. Read the full story.
+ Why so many of the more outlandish proclamations about AI doing the rounds these days can be traced back to OpenAI’s Sam Altman. Read the full story.
+ It’s a weird time to be an AI doomer. But they’re not giving up.
+ AI coding is now everywhere—but despite the billions of dollars being poured into improving AI models’ coding abilities, not everyone is convinced. Read the full story.
+ If we really want to start finding new kinds of materials faster, AI materials discovery needs to make it out of the lab and move into the real world. Read the full story.
+ Why reports of AI’s potential to replace trained human lawyers are greatly exaggerated.
+ Dr. Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, explains why the generative AI hype train is distracting us from what AI actually is and what it can—and crucially, cannot—do. Read the full story.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 iRobot has filed for bankruptcy
The Roomba maker is considering handing over control to its main Chinese supplier. (Bloomberg $)
+ A proposed Amazon acquisition fell through close to two years ago. (FT $)
+ How the company lost its way. (TechCrunch)
+ A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (MIT Technology Review)

2 Meta’s 2025 has been a total rollercoaster ride
From its controversial AI team to Mark Zuckerberg’s newfound appreciation for masculine energy. (Insider $)

3 The Trump administration is giving the crypto industry a much easier ride
It’s dismissed crypto lawsuits involving many firms with financial ties to Trump. (NYT $)
+ Celebrities are feeling emboldened to flog crypto once again. (The Guardian)
+ A bitcoin investor wants to set up a crypto libertarian community in the Caribbean. (FT $)

4 There’s a new weight-loss drug in town
And people are already taking it, even though it’s unapproved. (Wired $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

5 Chinese billionaires are having dozens of US-born surrogate babies
An entire industry has sprung up to support them. (WSJ $)
+ A controversial Chinese CRISPR scientist is still hopeful about embryo gene editing. (MIT Technology Review)

6 Trump’s “big beautiful bill” funding hinges on states integrating AI into healthcare
Experts fear it’ll be used as a cost-cutting measure, even if it doesn’t work. (The Guardian)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)

7 Extreme rainfall is wreaking havoc in the desert
Oman and the UAE are unaccustomed to increasingly common torrential downpours. (WP $)

8 Data centers are being built in countries that are too hot for them
Which makes it a lot harder to cool them sufficiently. (Rest of World)

9 Why AI image generators are getting deliberately worse
Their makers are pursuing realism—not that overly polished, Uncanny Valley look. (The Verge)
+ Inside the AI attention economy wars. (NY Mag $)

10 How a tiny Swedish city became a major video game hub
Skövde has formed an unlikely community of cutting-edge developers. (The Guardian)
+ Google DeepMind is using Gemini to train agents inside one of Skövde’s biggest franchises. (MIT Technology Review)

Quote of the day

“They don’t care about the games. They don’t care about the art. They just want their money.”

—Anna C Webster, chair of the freelancing committee of the United Videogame Workers union, tells the Guardian why their members are protesting the prestigious 2025 Game Awards in the wake of major layoffs.

One more thing
Recapturing early internet whimsy with HTML

Websites weren’t always slick digital experiences. There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Here’s how a bit of math can help you wrap your presents much more neatly this year.
+ It seems that humans mastered making fire way, way earlier than we realized.
+ The Arab-owned cafes opening up across the US sound warm and welcoming.
+ How to give a gift the recipient will still be using and loving for decades to come.

Read More »

AI coding is now everywhere. But not everyone is convinced.

Depending on who you ask, AI-powered coding is either giving software developers an unprecedented productivity boost or churning out masses of poorly designed code that saps their attention and sets software projects up for serious long-term maintenance problems. The problem is that, right now, it’s not easy to know which is true.

As tech giants pour billions into large language models (LLMs), coding has been touted as the technology’s killer app. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies’ code is now AI-generated. And in March, Anthropic’s CEO, Dario Amodei, predicted that within six months 90% of all code would be written by AI.

It’s an appealing and obvious use case. Code is a form of language, we need lots of it, and it’s expensive to produce manually. It’s also easy to tell if it works—run a program and it’s immediately evident whether it’s functional.

This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.

Executives enamored with the potential to break through human bottlenecks are pushing engineers to lean into an AI-powered future. But after speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem.

For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology’s limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.
The pace of progress is complicating the picture, though. A steady drumbeat of new model releases means these tools’ capabilities and quirks are constantly evolving. And their utility often depends on the tasks they are applied to and the organizational structures built around them. All of this leaves developers navigating confusing gaps between expectation and reality.

Is it the best of times or the worst of times (to channel Dickens) for AI coding? Maybe both.
A fast-moving field

It’s hard to avoid AI coding tools these days. There is a dizzying array of products available, both from model developers like Anthropic, OpenAI, and Google and from companies like Cursor and Windsurf, which wrap these models in polished code-editing software. And according to Stack Overflow’s 2025 Developer Survey, they’re being adopted rapidly, with 65% of developers now using them at least weekly.

AI coding tools first emerged around 2016 but were supercharged with the arrival of LLMs. Early versions functioned as little more than autocomplete for programmers, suggesting what to type next. Today they can analyze entire code bases, edit across files, fix bugs, and even generate documentation explaining how the code works. All this is guided through natural-language prompts via a chat interface.

“Agents”—autonomous LLM-powered coding tools that can take a high-level plan and build entire programs independently—represent the latest frontier in AI coding. This leap was enabled by the latest reasoning models, which can tackle complex problems step by step and, crucially, access external tools to complete tasks. “This is how the model is able to code, as opposed to just talk about coding,” says Boris Cherny, head of Claude Code, Anthropic’s coding agent.

These agents have made impressive progress on software engineering benchmarks—standardized tests that measure model performance. When OpenAI introduced the SWE-bench Verified benchmark in August 2024, offering a way to evaluate agents’ success at fixing real bugs in open-source repositories, the top model solved just 33% of issues. A year later, leading models consistently score above 70%.

In February, Andrej Karpathy, a founding member of OpenAI and former director of AI at Tesla, coined the term “vibe coding”—meaning an approach where people describe software in natural language and let AI write, refine, and debug the code. Social media abounds with developers who have bought into this vision, claiming massive productivity boosts.

But while some developers and companies report such productivity gains, the hard evidence is more mixed. Early studies from GitHub, Google, and Microsoft—all vendors of AI tools—found developers completing tasks 20% to 55% faster. But a September report from the consultancy Bain & Company described real-world savings as “unremarkable.”
Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code—code that isn’t deleted or rewritten within weeks—since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow’s survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower.

Growing disillusionment

For Mike Judge, principal developer at the software consultancy Substantial, the METR study struck a nerve. He was an enthusiastic early adopter of AI tools, but over time he grew frustrated with their limitations and the modest boost they brought to his productivity. “I was complaining to people because I was like, ‘It’s helping me but I can’t figure out how to make it really help me a lot,’” he says. “I kept feeling like the AI was really dumb, but maybe I could trick it into being smart if I found the right magic incantation.”

When asked by a friend, Judge had estimated the tools were providing a roughly 25% speedup. So when he saw similar estimates attributed to developers in the METR study, he decided to put his own estimate to the test. For six weeks, he guessed how long a task would take, flipped a coin to decide whether to use AI or code manually, and timed himself. To his surprise, AI slowed him down by a median of 21%—mirroring the METR results.

This got Judge crunching the numbers. If these tools were really speeding developers up, he reasoned, you should see a massive boom in new apps, website registrations, video games, and projects on GitHub. He spent hours and several hundred dollars analyzing all the publicly available data and found flat lines everywhere.
“Shouldn’t this be going up and to the right?” says Judge. “Where’s the hockey stick on any of these graphs? I thought everybody was so extraordinarily productive.” The obvious conclusion, he says, is that AI tools provide little productivity boost for most developers.

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing “boilerplate code” (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the “blank page problem” by offering an imperfect first stab to get a developer’s creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers.

These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer’s workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles.

Perhaps the biggest problem is that LLMs can hold only a limited amount of information in their “context window”—essentially their working memory. This means they struggle to parse large code bases and are prone to forgetting what they’re doing on longer tasks. “It gets really nearsighted—it’ll only look at the thing that’s right in front of it,” says Judge. “And if you tell it to do a dozen things, it’ll do 11 of them and just forget that last one.”
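The "working memory" limit Judge describes can be made concrete with a toy example. Once a task outgrows the model's context window, something must be dropped, and whatever is dropped is simply gone. The helper below shows one blunt strategy (keep the system prompt, keep the newest messages); real coding tools layer summarization and retrieval on top, and the token counting here is a crude stand-in.

```python
def fit_to_context(messages, budget=4000):
    """Keep the system prompt plus as many recent messages as fit."""
    def tokens(msg):
        return len(msg["content"].split())   # crude stand-in for a tokenizer

    system, rest = messages[0], messages[1:]
    kept, used = [], tokens(system)
    for msg in reversed(rest):               # walk backward from the newest
        if used + tokens(msg) > budget:
            break                            # everything older is simply dropped
        kept.append(msg)
        used += tokens(msg)
    return [system] + kept[::-1]
```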
LLMs’ myopia can lead to headaches for human coders. While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren’t built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that’s hard for humans to parse and, more important, to maintain. Developers have traditionally addressed this by following conventions—loosely defined coding guidelines that differ widely between projects and teams. “AI has this overwhelming tendency to not understand what the existing conventions are within a repository,” says Bill Harding, the CEO of GitClear. “And so it is very likely to come up with its own slightly different version of how to solve a problem.”

The models also just get things wrong. Like all LLMs, coding models are prone to “hallucinating”—it’s an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean.

Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. “Some projects you get a 20x improvement in terms of speed or efficiency,” says Liu. “On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it’s just not going to.”

Judge suspects this is why engineers often overestimate productivity gains. “You remember the jackpots. You don’t remember sitting there plugging tokens into the slot machine for two hours,” he says. And it can be particularly pernicious if the developer is unfamiliar with the task. Judge remembers getting AI to help set up a Microsoft cloud service called Azure Functions, which he’d never used before. He thought it would take about two hours, but nine hours later he threw in the towel. “It kept leading me down these rabbit holes and I didn’t know enough about the topic to be able to tell it ‘Hey, this is nonsensical,’” he says.

The debt begins to mount up

Developers constantly make trade-offs between speed of development and the maintainability of their code—creating what’s known as “technical debt,” says Geoffrey G. Parker, professor of engineering innovation at Dartmouth College. Each shortcut adds complexity and makes the code base harder to manage, accruing “interest” that must eventually be repaid by restructuring the code. As this debt piles up, adding new features and maintaining the software becomes slower and more difficult.
Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear’s Harding. And GitClear’s data suggests this is happening at scale. Since 2020, the company has seen a significant rise in the amount of copy-pasted code—an indicator that developers are reusing more code snippets, most likely based on AI suggestions—and an even bigger decline in the amount of code moved from one place to another, which happens when developers clean up their code base. And as models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of “code smells”—harder-to-pinpoint flaws that lead to maintenance problems and technical debt. 
Recent research by Sonar found that these make up more than 90% of the issues found in code generated by leading AI models. “Issues that are easy to spot are disappearing, and what’s left are much more complex issues that take a while to find,” says Shaukat. “That’s what worries us about this space at the moment. You’re almost being lulled into a false sense of security.”

If AI tools make it increasingly difficult to maintain code, that could have significant security implications, says Jessica Ji, a security researcher at Georgetown University. “The harder it is to update things and fix things, the more likely a code base or any given chunk of code is to become insecure over time,” says Ji. There are also more specific security concerns, she says. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.

LLMs are also vulnerable to “data-poisoning attacks,” where hackers seed the publicly available data sets models train on with data that alters the model’s behavior in undesirable ways, such as generating insecure code when triggered by specific phrases. In October, research by Anthropic found that as few as 250 malicious documents can introduce this kind of back door into an LLM regardless of its size.

The converted

Despite these issues, though, there’s probably no turning back. “Odds are that writing every line of code on a keyboard by hand—those days are quickly slipping behind us,” says Kyle Daigle, chief operating officer at the Microsoft-owned code-hosting platform GitHub, which produces a popular AI-powered tool called Copilot (not to be confused with the Microsoft product of the same name).

The Stack Overflow report found that despite growing distrust in the technology, usage has increased rapidly and consistently over the past three years. Erin Yepis, a senior analyst at Stack Overflow, says this suggests that engineers are taking advantage of the tools with a clear-eyed view of the risks. The report also found that frequent users tend to be more enthusiastic, and that more than half of developers are not using the latest coding agents, perhaps explaining why many remain underwhelmed by the technology.

Those latest tools can be a revelation. Trevor Dilley, CTO at the software development agency Twenty20 Ideas, says he had found some value in AI editors’ autocomplete functions, but when he tried anything more complex it would “fail catastrophically.” Then in March, while on vacation with his family, he set the newly released Claude Code to work on one of his hobby projects. It completed a four-hour task in two minutes, and the code was better than what he would have written. “I was like, Whoa,” he says. “That, for me, was the moment, really. There’s no going back from here.” Dilley has since cofounded a startup called DevSwarm, which is creating software that can marshal multiple agents to work in parallel on a piece of software.
The challenge, says Armin Ronacher, a prominent open-source developer, is that the learning curve for these tools is shallow but long. Until March he’d remained unimpressed by AI tools, but after leaving his job at the software company Sentry in April to launch a startup, he started experimenting with agents. “I basically spent a lot of months doing nothing but this,” he says. “Now, 90% of the code that I write is AI-generated.” Getting to that point involved extensive trial and error, to figure out which problems tend to trip the tools up and which they can handle efficiently. Today’s models can tackle most coding tasks with the right guardrails, says Ronacher, but these can be very task and project specific.

To get the most out of these tools, developers must surrender control over individual lines of code and focus on the overall software architecture, says Nico Westerdale, chief technology officer at the veterinary staffing company IndeVets. He recently built a data science platform 100,000 lines of code long almost exclusively by prompting models rather than writing the code himself. Westerdale’s process starts with an extended conversation with the agent to develop a detailed plan for what to build and how. He then guides it through each step. It rarely gets things right on the first try and needs constant wrangling, but if you force it to stick to well-defined design patterns, the models can produce high-quality, easily maintainable code, says Westerdale. He reviews every line, and the code is as good as anything he’s ever produced, he says: “I’ve just found it absolutely revolutionary. It’s also frustrating, difficult, a different way of thinking, and we’re only just getting used to it.”

But while individual developers are learning how to use these tools effectively, getting consistent results across a large engineering team is significantly harder. AI tools amplify both the good and bad aspects of your engineering culture, says Ryan J. Salva, senior director of product management at Google. With strong processes, clear coding patterns, and well-defined best practices, these tools can shine. But if your development process is disorganized, they’ll only magnify the problems. It’s also essential to codify that institutional knowledge so the models can draw on it effectively. “A lot of work needs to be done to help build up context and get the tribal knowledge out of our heads,” he says.

The cryptocurrency exchange Coinbase has been vocal about its adoption of AI tools. CEO Brian Armstrong made headlines in August when he revealed that the company had fired staff unwilling to adopt AI tools. But Coinbase’s head of platform, Rob Witoff, tells MIT Technology Review that while they’ve seen massive productivity gains in some areas, the impact has been patchy. For simpler tasks like restructuring the code base and writing tests, AI-powered workflows have achieved speedups of up to 90%. But gains are more modest for other tasks, and the disruption caused by overhauling existing processes often counteracts the increased coding speed, says Witoff.

One factor is that AI tools let junior developers produce far more code. As in almost all engineering teams, this code has to be reviewed by others, normally more senior developers, to catch bugs and ensure it meets quality standards. But the sheer volume of code now being churned out is quickly saturating the ability of midlevel staff to review changes.
“This is the cycle we’re going through almost every month, where we automate a new thing lower down in the stack, which brings more pressure higher up in the stack,” he says. “Then we’re looking at applying automation to that higher-up piece.”

Developers also spend only 20% to 40% of their time coding, says Jue Wang, a partner at Bain, so even a significant speedup there often translates to more modest overall gains. (If coding is 30% of the job and AI halves it, total time falls by only 15%.) Developers spend the rest of their time analyzing software problems and dealing with customer feedback, product strategy, and administrative tasks. To get significant efficiency boosts, companies may need to apply generative AI to all these other processes too, says Wang, and that is still in the works.

Rapid evolution

Programming with agents is a dramatic departure from previous working practices, though, so it’s not surprising companies are facing some teething issues. These are also very new products that are changing by the day. “Every couple months the model improves, and there’s a big step change in the model’s coding capabilities and you have to get recalibrated,” says Anthropic’s Cherny.

For example, in June Anthropic introduced a built-in planning mode to Claude; it has since been replicated by other providers. In October, the company also enabled Claude to ask users questions when it needs more context or faces multiple possible solutions, which Cherny says helps it avoid the tendency to simply assume which path is the best way forward. Most significant, Anthropic has added features that make Claude better at managing its own context. When it nears the limits of its working memory, it summarizes key details and uses them to start a new context window, effectively giving it an “infinite” one, says Cherny. Claude can also invoke sub-agents to work on smaller tasks, so it no longer has to hold all aspects of the project in its own head. The company claims that its latest model, Claude Sonnet 4.5, can now code autonomously for more than 30 hours without major performance degradation.

Novel approaches to software development could also sidestep coding agents’ other flaws. MIT professor Max Tegmark has introduced something he calls “vericoding,” which could allow agents to produce entirely bug-free code from a natural-language description. It builds on an approach known as “formal verification,” where developers create a mathematical model of their software that can prove incontrovertibly that it functions correctly. This approach is used in high-stakes areas like flight-control systems and cryptographic libraries, but it remains costly and time-consuming, limiting its broader use.

Rapid improvements in LLMs’ mathematical capabilities have opened up the tantalizing possibility of models that produce not only software but the mathematical proof that it’s bug free, says Tegmark. “You just give the specification, and the AI comes back with provably correct code,” he says. “You don’t have to touch the code. You don’t even have to ever look at the code.” When tested on about 2,000 vericoding problems in Dafny—a language designed for formal verification—the best LLMs solved over 60%, according to non-peer-reviewed research by Tegmark’s group. This was achieved with off-the-shelf LLMs, and Tegmark expects that training specifically for vericoding could improve scores rapidly.

And counterintuitively, the speed at which AI generates code could also ease maintainability concerns.
And counterintuitively, the speed at which AI generates code could also ease maintainability concerns. Alex Worden, principal engineer at the business software giant Intuit, notes that maintenance is often difficult because engineers reuse components across projects, creating a tangle of dependencies where one change triggers cascading effects across the code base. Reusing code used to save developers time, but in a world where AI can produce hundreds of lines of code in seconds, that imperative has gone, says Worden. Instead, he advocates for “disposable code,” where each component is generated independently by AI without regard for whether it follows design patterns or conventions. The components are then connected via APIs—sets of rules that let them request information or services from each other. Each component’s inner workings are not dependent on other parts of the code base, making it possible to rip it out and replace it without wider impact, says Worden. “The industry is still concerned about humans maintaining AI-generated code,” he says. “I question how long humans will look at or care about code.”

A narrowing talent pipeline

For the foreseeable future, though, humans will still need to understand and maintain the code that underpins their projects. And one of the most pernicious side effects of AI tools may be a shrinking pool of people capable of doing so.

Early evidence suggests that fears around the job-destroying effects of AI may be justified. A recent Stanford University study found that employment among software developers aged 22 to 25 fell nearly 20% between 2022 and 2025, coinciding with the rise of AI-powered coding tools.

Experienced developers could face difficulties too. Luciano Nooijen, an engineer at the video-game infrastructure developer Companion Group, used AI tools heavily in his day job, where they were provided for free. But when he began a side project without access to those tools, he found himself struggling with tasks that previously came naturally. “I was feeling so stupid because things that used to be instinct became manual, sometimes even cumbersome,” says Nooijen. Just as athletes still perform basic drills, he thinks the only way to maintain an instinct for coding is to regularly practice the grunt work. That’s why he’s largely abandoned AI tools, though he admits that deeper motivations are also at play.

Part of the reason Nooijen and other developers MIT Technology Review spoke to are pushing back against AI tools is a sense that they are hollowing out the parts of their jobs that they love. “I got into software engineering because I like working with computers. I like making machines do things that I want,” Nooijen says. “It’s just not fun sitting there with my work being done for me.”

Read More »

The AI doomers feel undeterred

It’s a weird time to be an AI doomer. This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad—very, very bad—for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can’t control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better.

This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.

Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international “red lines” to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science’s most prestigious awards.

But a number of developments over the past six months have put them on the back foot. Talk of an AI bubble has overwhelmed the discourse as tech companies continue to invest in multiple Manhattan Projects’ worth of data centers without any certainty that future demand will match what they’re building.

And then there was the August release of OpenAI’s latest foundation model, GPT-5, which proved something of a letdown. Maybe that was inevitable, since it was the most hyped AI release of all time; OpenAI CEO Sam Altman had boasted that GPT-5 felt “like a PhD-level expert” in every topic and told the podcaster Theo Von that the model was so good, it had made him feel “useless relative to the AI.”
Many expected GPT-5 to be a big step toward AGI, but whatever progress the model may have made was overshadowed by a string of technical bugs and the company’s mystifying, quickly reversed decision to shut off access to every old OpenAI model without warning. And while the new model achieved state-of-the-art benchmark scores, many people felt, perhaps unfairly, that in day-to-day use GPT-5 was a step backward.  All this would seem to threaten some of the very foundations of the doomers’ case. In turn, a competing camp of AI accelerationists, who fear AI is actually not moving fast enough and that the industry is constantly at risk of being smothered by overregulation, is seeing a fresh chance to change how we approach AI safety (or, maybe more accurately, how we don’t). 
This is particularly true of the industry types who’ve decamped to Washington: “The Doomer narratives were wrong,” declared David Sacks, the longtime venture capitalist turned Trump administration AI czar. “This notion of imminent AGI has been a distraction and harmful and now effectively proven wrong,” echoed the White House’s senior policy advisor for AI and tech investor Sriram Krishnan. (Sacks and Krishnan did not reply to requests for comment.)

(There is, of course, another camp in the AI safety debate: the group of researchers and advocates commonly associated with the label “AI ethics.” Though they also favor regulation, they tend to think the speed of AI progress has been overstated and have often written off AGI as a sci-fi story or a scam that distracts us from the technology’s immediate threats. But any potential doomer demise wouldn’t exactly give them the same opening the accelerationists are seeing.)

So where does this leave the doomers? As part of our Hype Correction package, we decided to ask some of the movement’s biggest names to see if the recent setbacks and general vibe shift had altered their views. Are they frustrated that policymakers no longer seem to heed their warnings? Are they quietly adjusting their timelines for the apocalypse?

Recent interviews with 20 people who study or advocate for AI safety and governance—including Nobel Prize winner Geoffrey Hinton, Turing Award winner Yoshua Bengio, and high-profile experts like former OpenAI board member Helen Toner—reveal that rather than feeling chastened or lost in the wilderness, they’re still deeply committed to their cause, believing that AGI remains not just possible but incredibly dangerous. At the same time, they seem to be grappling with a near contradiction. While they’re somewhat relieved that recent developments suggest AGI is further out than they previously thought (“Thank God we have more time,” says AI researcher Jeffrey Ladish), they also feel angry that people in power are not taking them seriously enough (Daniel Kokotajlo, lead author of a cautionary forecast called “AI 2027,” calls the Sacks and Krishnan tweets “deranged and/or dishonest”).

Broadly speaking, these experts see the talk of an AI bubble as no more than a speed bump, and disappointment in GPT-5 as more distracting than illuminating. They still generally favor more robust regulation and worry that progress on policy—the implementation of the EU AI Act; the passage of the first major American AI safety bill, California’s SB 53; and new interest in AGI risk from some members of Congress—has become vulnerable as Washington overreacts to what doomers see as short-term failures to live up to the hype.
Some were also eager to correct what they see as the most persistent misconceptions about the doomer world. Though their critics routinely mock them for predicting that AGI is right around the corner, they claim that’s never been an essential part of their case: It “isn’t about imminence,” says Berkeley professor Stuart Russell, the author of Human Compatible: Artificial Intelligence and the Problem of Control. Most people I spoke with say their timelines to dangerous systems have actually lengthened slightly in the last year—an important change given how quickly the policy and technical landscapes can shift.

Many of them, in fact, emphasize the importance of changing timelines. And even if they are just a tad longer now, Toner tells me that one big-picture story of the ChatGPT era is the dramatic compression of these estimates across the AI world. For a long while, she says, AGI was expected in many decades. Now, for the most part, the predicted arrival is sometime in the next few years to 20 years. So even if we have a little bit more time, she (and many of her peers) continues to see AI safety as incredibly, vitally urgent. She tells me that if AGI were possible anytime in even the next 30 years, “It’s a huge fucking deal. We should have a lot of people working on this.”

So despite the precarious moment doomers find themselves in, their bottom line remains that no matter when AGI is coming (and, again, they say it’s very likely coming), the world is far from ready.

Maybe you agree. Or maybe you think this future is far from guaranteed. Or that it’s the stuff of science fiction. You may even think AGI is a great big conspiracy theory. You’re not alone, of course—this topic is polarizing. But whatever you think about the doomer mindset, there’s no getting around the fact that certain people in this world have a lot of influence. So here are some of the most prominent people in the space, reflecting on this moment in their own words.
Interviews have been edited and condensed for length and clarity.

The Nobel laureate who’s not sure what’s coming

Geoffrey Hinton, winner of the Turing Award and the Nobel Prize in physics for pioneering deep learning

The biggest change in the last few years is that there are people who are hard to dismiss who are saying this stuff is dangerous. Like, [former Google CEO] Eric Schmidt, for example, really recognized this stuff could be really dangerous. He and I were in China recently talking to someone on the Politburo, the party secretary of Shanghai, to make sure he really understood—and he did. I think in China, the leadership understands AI and its dangers much better because many of them are engineers.

I’ve been focused on the longer-term threat: When AIs get more intelligent than us, can we really expect that humans will remain in control or even relevant? But I don’t think anything is inevitable. There’s huge uncertainty on everything. We’ve never been here before. Anybody who’s confident they know what’s going to happen seems silly to me. I think this is very unlikely, but maybe it’ll turn out that all the people saying AI is way overhyped are correct. Maybe it’ll turn out that we can’t get much further than the current chatbots—we hit a wall due to limited data. I don’t believe that. I think that’s unlikely, but it’s possible.

I also don’t believe people like Eliezer Yudkowsky, who say if anybody builds it, we’re all going to die. We don’t know that.
But if you go on the balance of the evidence, I think it’s fair to say that most experts who know a lot about AI believe it’s very probable that we’ll have superintelligence within the next 20 years. [Google DeepMind CEO] Demis Hassabis says maybe 10 years. Even [prominent AI skeptic] Gary Marcus would probably say, “Well, if you guys make a hybrid system with good old-fashioned symbolic logic … maybe that’ll be superintelligent.” [Editor’s note: In September, Marcus predicted AGI would arrive between 2033 and 2040.] And I don’t think anybody believes progress will stall at AGI. I think more or less everybody believes a few years after AGI, we’ll have superintelligence, because the AGI will be better than us at building AI.

So while I think it’s clear that the winds are getting more difficult, simultaneously, people are putting many more resources [into developing advanced AI]. I think progress will continue just because there’s many more resources going in.

The deep learning pioneer who wishes he’d seen the risks sooner

Yoshua Bengio, winner of the Turing Award, chair of the International AI Safety Report, and founder of LawZero

Some people thought that GPT-5 meant we had hit a wall, but that isn’t quite what you see in the scientific data and trends. There have been people overselling the idea that AGI is tomorrow morning, which commercially could make sense. But if you look at the various benchmarks, GPT-5 is just where you would expect the models at that point in time to be. By the way, it’s not just GPT-5, it’s Claude and Google models, too. In some areas where AI systems weren’t very good, like Humanity’s Last Exam or FrontierMath, they’re getting much better scores now than they were at the beginning of the year.

At the same time, the overall landscape for AI governance and safety is not good. There’s a strong force pushing against regulation. It’s like climate change. We can put our head in the sand and hope it’s going to be fine, but it doesn’t really deal with the issue.
The biggest disconnect with policymakers is a misunderstanding of the scale of change that is likely to happen if the trend of AI progress continues. A lot of people in business and governments simply think of AI as just another technology that’s going to be economically very powerful. They don’t understand how much it might change the world if trends continue, and we approach human-level AI.  Like many people, I had been blinding myself to the potential risks to some extent. I should have seen it coming much earlier. But it’s human. You’re excited about your work and you want to see the good side of it. That makes us a little bit biased in not really paying attention to the bad things that could happen.
Even a small chance—like 1% or 0.1%—of creating an accident where billions of people die is not acceptable.

The AI veteran who believes AI is progressing—but not fast enough to prevent the bubble from bursting

Stuart Russell, distinguished professor of computer science, University of California, Berkeley, and author of Human Compatible

I hope the idea that talking about existential risk makes you a “doomer” or is “science fiction” comes to be seen as fringe, given that most leading AI researchers and most leading AI CEOs take it seriously.

There have been claims that AI could never pass a Turing test, or you could never have a system that uses natural language fluently, or one that could parallel-park a car. All these claims just end up getting disproved by progress. People are spending trillions of dollars to make superhuman AI happen. I think they need some new ideas, but there’s a significant chance they will come up with them, because many significant new ideas have happened in the last few years.

My fairly consistent estimate for the last 12 months has been that there’s a 75% chance that those breakthroughs are not going to happen in time to rescue the industry from the bursting of the bubble. Because the investments are consistent with a prediction that we’re going to have much better AI that will deliver much more value to real customers. But if those predictions don’t come true, then there’ll be a lot of blood on the floor in the stock markets.

However, the safety case isn’t about imminence. It’s about the fact that we still don’t have a solution to the control problem. If someone said there’s a four-mile-diameter asteroid that’s going to hit the Earth in 2067, we wouldn’t say, “Remind me in 2066 and we’ll think about it.” We don’t know how long it takes to develop the technology needed to control superintelligent AI.

Looking at precedents, the acceptable level of risk for a nuclear plant melting down is about one in a million per year. Extinction is much worse than that. So maybe set the acceptable risk at one in a billion. But the companies are saying it’s something like one in five. They don’t know how to make it acceptable. And that’s a problem.

The professor trying to set the narrative straight on AI safety

David Krueger, assistant professor in machine learning at the University of Montreal and Yoshua Bengio’s Mila Institute, and founder of Evitable

I think people definitely overcorrected in their response to GPT-5. But there was hype. My recollection was that there were multiple statements from CEOs at various levels of explicitness who basically said that by the end of 2025, we’re going to have an automated drop-in replacement remote worker. But it seems like it’s been underwhelming, with agents just not really being there yet.
I’ve been surprised how much these narratives predicting AGI in 2027 capture the public attention. When 2027 comes around, if things still look pretty normal, I think people are going to feel like the whole worldview has been falsified. And it’s really annoying how often when I’m talking to people about AI safety, they assume that I think we have really short timelines to dangerous systems, or that I think LLMs or deep learning are going to give us AGI. They ascribe all these extra assumptions to me that aren’t necessary to make the case.

I’d expect we need decades for the international coordination problem. So even if dangerous AI is decades off, it’s already urgent. That point seems really lost on a lot of people. There’s this idea of “Let’s wait until we have a really dangerous system and then start governing it.” Man, that is way too late.

I still think people in the safety community tend to work behind the scenes, with people in power, not really with civil society. It gives ammunition to people who say it’s all just a scam or insider lobbying. That’s not to say that there’s no truth to these narratives, but the underlying risk is still real. We need more public awareness and a broad base of support to have an effective response.

If you actually believe there’s a 10% chance of doom in the next 10 years—which I think a reasonable person should, if they take a close look—then the first thing you think is: “Why are we doing this? This is crazy.” That’s just a very reasonable response once you buy the premise.

The governance expert worried about AI safety’s credibility

Helen Toner, acting executive director of Georgetown University’s Center for Security and Emerging Technology and former OpenAI board member

When I got into the space, AI safety was more of a set of philosophical ideas. Today, it’s a thriving set of subfields of machine learning, filling in the gulf between some of the more “out there” concerns about AI scheming, deception, or power-seeking and real concrete systems we can test and play with.

AI governance is improving slowly. If we have lots of time to adapt and governance can keep improving slowly, I feel not bad. If we don’t have much time, then we’re probably moving too slow.

I think GPT-5 is generally seen as a disappointment in DC. There’s a pretty polarized conversation around: Are we going to have AGI and superintelligence in the next few years? Or is AI actually just totally all hype and useless and a bubble? The pendulum had maybe swung too far toward “We’re going to have super-capable systems very, very soon.” And so now it’s swinging back toward “It’s all hype.”

I worry that some aggressive AGI timeline estimates from some AI safety people are setting them up for a boy-who-cried-wolf moment. When the predictions about AGI coming in 2027 don’t come true, people will say, “Look at all these people who made fools of themselves. You should never listen to them again.” That’s not the intellectually honest response if they later changed their mind, or if their take was that they only thought it was 20 percent likely and still worth paying attention to. I think that shouldn’t be disqualifying for people to listen to you later, but I do worry it will be a big credibility hit—one that will apply even to people who are very concerned about AI safety and never said anything about very short timelines.
The AI security researcher who now believes AGI is further out—and is grateful

Jeffrey Ladish, executive director at Palisade Research

In the last year, two big things updated my AGI timelines.

First, the lack of high-quality data turned out to be a bigger problem than I expected.

Second, the first “reasoning” model, OpenAI’s o1 in September 2024, showed reinforcement learning scaling was more effective than I thought it would be. And then months later, you see the o1 to o3 scale-up and you see pretty crazy impressive performance in math and coding and science—domains where it’s easier to sort of verify the results. But while we’re seeing continued progress, it could have been much faster.

All of this bumps up my median estimate for the start of fully automated AI research and development from three years to maybe five or six years. But those are kind of made-up numbers. It’s hard. I want to caveat all this with, like, “Man, it’s just really hard to do forecasting here.”

Thank God we have more time. We have a possibly very brief window of opportunity to really try to understand these systems before they are capable and strategic enough to pose a real threat to our ability to control them. But it’s scary to see people think that we’re not making progress anymore when that’s clearly not true. I just know it’s not true because I use the models. One of the downsides of the way AI is progressing is that how fast it’s moving is becoming less legible to normal people.

Now, this is not true in some domains—like, look at Sora 2. It is so obvious to anyone who looks at it that Sora 2 is vastly better than what came before. But if you ask GPT-4 and GPT-5 why the sky is blue, they’ll give you basically the same answer. It is the correct answer. It’s already saturated the ability to tell you why the sky is blue. So the people who I expect to most understand AI progress right now are the people who are actually building with AIs or using AIs on very difficult scientific problems.

The AGI forecaster who saw the critics coming

Daniel Kokotajlo, executive director of the AI Futures Project; an OpenAI whistleblower; and lead author of “AI 2027,” a vivid scenario where—starting in 2027—AIs progress from “superhuman coders” to “wildly superintelligent” systems in the span of months

AI policy seems to be getting worse, like the “Pro-AI” super PAC [launched earlier this year by executives from OpenAI and Andreessen Horowitz to lobby for a deregulatory agenda], and the deranged and/or dishonest tweets from Sriram Krishnan and David Sacks. AI safety research is progressing at the usual pace, which is excitingly rapid compared to most fields, but slow compared to how fast it needs to be.

We said on the first page of “AI 2027” that our timelines were somewhat longer than 2027. So even when we launched AI 2027, we expected there to be a bunch of critics in 2028 triumphantly saying we’d been discredited, like the tweets from Sacks and Krishnan. But we thought, and continue to think, that the intelligence explosion will probably happen sometime in the next five to 10 years, and that when it does, people will remember our scenario and realize it was closer to the truth than anything else available in 2025.

Predicting the future is hard, but it’s valuable to try; people should aim to communicate their uncertainty about the future in a way that is specific and falsifiable. This is what we’ve done and very few others have done.
Our critics mostly haven’t made predictions of their own and often exaggerate and mischaracterize our views. They say our timelines are shorter than they are or ever were, or they say we are more confident than we are or were.

I feel pretty good about having longer timelines to AGI. It feels like I just got a better prognosis from my doctor. The situation is still basically the same, though.

Garrison Lovely is a freelance journalist and the author of Obsolete, an online publication and forthcoming book on the discourse, economics, and geopolitics of the race to build machine superintelligence (out spring 2026). His writing on AI has appeared in the New York Times, Nature, Bloomberg, Time, the Guardian, The Verge, and elsewhere.

Read More »

A brief history of Sam Altman’s hype

Each time you’ve heard a borderline outlandish idea of what AI will be capable of, it often turns out that Sam Altman was, if not the first to articulate it, at least the most persuasive and influential voice behind it. For more than a decade he has been known in Silicon Valley as a world-class fundraiser and persuader. OpenAI’s early releases around 2020 set the stage for a mania around large language models, and the launch of ChatGPT in November 2022 granted Altman a world stage on which to present his new thesis: that these models mirror human intelligence and could swing the doors open to a healthier and wealthier techno-utopia.

This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.

Throughout, Altman’s words have set the agenda. He has framed a prospective superintelligent AI as either humanistic or catastrophic, depending on what effect he was hoping to create, what he was raising money for, or which tech giant seemed like his most formidable competitor at the moment. Examining Altman’s statements over the years reveals just how much his outlook has powered today’s AI boom. Even among Silicon Valley’s many hypesters, he’s been especially willing to speak about open questions—whether large language models contain the ingredients of human thought, whether language can also produce intelligence—as if they were already answered.

What he says about AI is rarely provable when he says it, but it persuades us of one thing: This road we’re on with AI can go somewhere either great or terrifying, and OpenAI will need epic sums to steer it toward the right destination. In this sense, he is the ultimate hype man. To understand how his voice has shaped our understanding of what AI can do, we read almost everything he’s ever said about the technology (we requested an interview with Altman, but he was not made available). His own words trace how we arrived here.

In conclusion …

Altman didn’t dupe the world. OpenAI has ushered in a genuine tech revolution, with increasingly impressive language models that have attracted millions of users. Even skeptics would concede that LLMs’ conversational ability is astonishing. But Altman’s hype has always hinged less on today’s capabilities than on a philosophical tomorrow—an outlook that quite handily doubles as a case for more capital and friendlier regulation. Long before large language models existed, he was imagining an AI powerful enough to require wealth redistribution, just as he imagined humanity colonizing other planets. Again and again, promises of a destination—abundance, superintelligence, a healthier and wealthier world—have come first, and the evidence second.

Even if LLMs eventually hit a wall, there’s little reason to think his faith in a techno-utopian future will falter. The vision was never really about the particulars of the current model anyway.

Read More »


Cisco identifies vulnerability in ISE network access control devices

Johannes Ullrich, dean of research at the SANS Institute, said, “Most likely, this is an XML External Entity vulnerability.” External entities, he explained, are an XML feature that instructs the parser to either read local files or access external URLs. In this case, an attacker could embed an external entity in the license file, instructing the XML parser to read a confidential file and include it in the response. This is a common vulnerability in XML parsers, he said, typically mitigated by disabling external entity parsing. An attacker would be able to obtain read access to confidential files like configuration files, he added, and possibly user credentials. Ullrich also noted that while an ISE administrator may have access to a lot of this information, they should not have access to user credentials.

The Cisco advisory says an attacker could exploit this vulnerability by uploading a malicious file to the application: “A successful exploit could allow the attacker to read arbitrary files from the underlying operating system that could include sensitive data that should otherwise be inaccessible even to administrators. To exploit this vulnerability, the attacker must have valid administrative credentials.” Cisco said proof-of-concept exploit code is available for this vulnerability, but so far the company isn’t aware of any malicious use of the flaw.

These days, admin credentials aren’t hard to get, Harrington noted. The “dirty secret that few people want to talk about is across IT and security operations there are so many systems that are left with default credentials.” That’s particularly common, he said, with devices behind a firewall, such as network access control servers, because admins assume that devices inside the network can’t be touched by external hackers. But lots of credentials can be scooped up in compromises of applications where Cisco admins might have stored passwords.
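To make the mechanism concrete, here is a minimal sketch in Python of the general XXE pattern Ullrich describes, along with the standard mitigation of disabling entity resolution. The “license” structure and field names are hypothetical; Cisco has not published the exploit details, and this is not the proof-of-concept code.

```python
# Sketch of an XML External Entity (XXE) file-disclosure pattern.
# Hypothetical license format; illustrative only, not Cisco's PoC.
from lxml import etree

MALICIOUS_LICENSE = b"""<?xml version="1.0"?>
<!DOCTYPE license [
  <!ENTITY leak SYSTEM "file:///etc/passwd">
]>
<license><key>&leak;</key></license>"""

# Vulnerable configuration: entity resolution enabled (lxml's default).
# The parser reads the referenced local file and splices its contents
# into the document, which the application may then echo back.
vulnerable = etree.XMLParser(resolve_entities=True)
doc = etree.fromstring(MALICIOUS_LICENSE, vulnerable)
print(doc.findtext("key"))  # contents of /etc/passwd appear here

# Mitigation: disable external entity resolution, as Ullrich suggests
# (hardened wrappers such as defusedxml enforce this by default).
hardened = etree.XMLParser(resolve_entities=False, no_network=True)
safe = etree.fromstring(MALICIOUS_LICENSE, hardened)
print(safe.findtext("key"))  # entity left unexpanded: nothing leaks
```

With resolution disabled, the entity reference is simply never expanded, so even a successfully uploaded malicious file cannot pull local data into the parsed output.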

Read More »

JLL’s 2026 Global Data Center Outlook: Navigating the AI Supercycle, Power Scarcity and Structural Market Transformation

Sovereign AI and National Infrastructure Policy

JLL frames artificial intelligence infrastructure as an emerging national strategic asset, with sovereign AI initiatives representing an estimated $8 billion in cumulative capital expenditure by 2030. While modest relative to hyperscale investment totals, this segment carries outsized strategic importance. Data localization mandates, evolving AI regulation, and national security considerations are increasingly driving governments to prioritize domestic compute capacity, often with pricing premiums reaching as high as 60%. Examples cited across Europe, the Middle East, North America, and Asia underscore a consistent pattern: digital sovereignty is no longer an abstract policy goal, but a concrete driver of data center siting, ownership structures, and financing models.

In practice, sovereign AI initiatives are accelerating demand for locally controlled infrastructure, influencing where capital is deployed and how assets are underwritten. For developers and investors, this shift introduces a distinct set of considerations. Sovereign projects tend to favor jurisdictional alignment, long-term tenancy, and enhanced security requirements, while also benefiting from regulatory tailwinds and, in some cases, direct state involvement. As AI capabilities become more tightly linked to economic competitiveness and national resilience, policy-driven demand is likely to remain a durable (if specialized) component of global data center growth.

Energy and Sustainability as the Central Constraint

Energy availability emerges as the report’s dominant structural constraint. In many major markets, average grid interconnection timelines now extend beyond four years, effectively decoupling data center development schedules from traditional utility planning cycles. As a result, operators are increasingly pursuing alternative energy strategies to maintain project momentum, including:

Behind-the-meter generation

Expanded use of natural gas, particularly in the United States

Private-wire renewable energy projects

Battery energy storage systems (BESS)

JLL points to declining battery costs, seen falling below $90 per kilowatt-hour in select deployments, as a meaningful enabler of grid flexibility, renewable firming, and

Read More »

Oil Prices Jump as Short Covering Builds

Oil moved higher as traders digested a mix of geopolitical risks that could add a premium to prices while continuing to assess US measures to exert control over Venezuela’s oil. West Texas Intermediate rose 3.2% to settle below $58 a barrel. Prices continued to climb after settlement, rising more than 1% and leaving the market poised to wipe out losses from earlier in the week. President Donald Trump threatened to hit Iran “hard” if the country’s government killed protesters amid an ongoing period of unrest. A disruption to Iranian supply would prove an unexpected hurdle in a market that’s currently anticipating a glut of oil. Adding to the bullish momentum, an annual period of commodity index rebalancing is expected to see cash flow back into crude over the next few days. Call skews for Brent have also strengthened as traders pile into the options market to hedge. And entering the day, trend-following commodity trading advisers were 91% short in WTI, according to data from Kpler’s Bridgeton Research group. That positioning can leave traders rushing to cover shorts in the event of a price spike. The confluence of bullish events arrived as traders were weighing the US’s efforts to control the Venezuelan oil industry. Energy Secretary Chris Wright said the US plans to control sales of Venezuelan oil and would initially offer stored crude, while the Energy Department said barrels already were being marketed. State-owned Petroleos de Venezuela SA said it’s in negotiations with Washington over selling crude through a framework similar to an arrangement with Chevron Corp., the only supermajor operating in the country. Meanwhile, President Donald Trump told the New York Times that US oversight of the country could last years and that “the oil will take a while.” “We are really talking about a trade-flow shift as the

Read More »

Survey Shows OPEC Held Supply Flat Last Month

OPEC’s crude production held steady in December as a slump in Venezuela’s output to the lowest in two years was offset by increases in Iraq and some other members, a Bloomberg survey showed.  The Organization of the Petroleum Exporting Countries pumped an average of just over 29 million barrels a day, little changed from the previous month, according to the survey. Venezuelan output declined by about 14% to 830,000 barrels a day as the US blocked and seized tankers as part of a strategy to pressure the country’s leadership. Supplies increased from Iraq and a few other nations as they pressed on with the last in a series of collective increases before a planned pause in the first quarter of this year. The alliance, led by Saudi Arabia, aims to keep output steady through the end of March while global oil markets confront a surplus. World markets have been buffeted this week after President Donald Trump’s administration captured Venezuelan leader Nicolás Maduro, and said it would assume control of the OPEC member’s oil exports indefinitely.  While Trump has said that US oil companies will invest billions of dollars to rebuild Venezuela’s crumbling energy infrastructure, the nation’s situation in the short term remains precarious. Last month, Caracas was forced to shutter wells at the oil-rich Orinoco Belt amid the American blockade.  The shock move is the latest in an array of geopolitical challenges confronting the broader OPEC+ coalition, ranging from forecasts of a record supply glut to unrest in Iran and Russia’s ongoing war against Ukraine, which is taking a toll on the oil exports of fellow alliance member Kazakhstan. Oil prices are trading near the lowest in five years at just over $60 a barrel in London, squeezing the finances of OPEC+ members. Amid the uncertain backdrop, eight key nations agreed again this month to freeze output levels during the first quarter,

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE