Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Discover What Matter Most to You
Featured Articles

Gemini 3 Deep Think: Advancing science, research and engineering
Today, we’re releasing a major upgrade to Gemini 3 Deep Think, our specialized reasoning mode, built to push the frontier of intelligence and solve modern challenges across science, research, and engineering.We updated Gemini 3 Deep Think in close partnership with scientists and researchers to tackle tough research challenges — where problems often lack clear guardrails or a single correct solution and data is often messy or incomplete. By blending deep scientific knowledge with everyday engineering utility, Deep Think moves beyond abstract theory to drive practical applications.The new Deep Think is now available in the Gemini app for Google AI Ultra subscribers and, for the first time, we’re also making Deep Think available via the Gemini API to select researchers, engineers and enterprises. Express interest in early access here.Here is how our early testers are already using the latest Deep Think:

Global Exploration Signaling ‘Early Recovery’
Global exploration is signaling an “early recovery”, according to an Enverus Intelligence Research (EIR) statement sent to Rigzone by the Enverus team recently. The statement, which highlighted that Enverus was releasing a new global exploration outlook, said EIR “finds that, while global exploration and appraisal activity in 2025 remained near historical lows, long lead indicators such as block awards, new country entries and increased seismic surveying point to a gradual recovery forming from a very low base”. “Despite depressed activity levels, exploration success rates have held steady in the 30 percent to 40 percent range, underscoring a continued focus on prospect high grading, capital discipline and risk weighted exploration strategies,” it added. EIR’s statement noted that offshore exploration accounted for more than 50 percent of total activity in 2025, “driven by infrastructure led exploration and renewed focus on higher impact opportunities”. It also said supermajors and national oil companies are leading the exploration recovery, “particularly in acquiring new acreage in regions where subsurface potential for giant discoveries is matched by above ground conditions that support faster project advancement”. Independent and junior explorers are increasing participation, according to the statement, “signaling broader industry reengagement beyond supermajors and national oil companies”. EIR noted in the statement that it expects “the slow recovery to contribute to a structural supply gap after 2030, as limited exploration today constrains future project pipelines and resource replacement”. EIR Director Patrick Rutty said in the statement, “exploration is not rebounding quickly, but the early indicators are clearly improving”. “Given recent drilling success and diminished concerns over peak demand, the industry is reprioritizing exploration, a dynamic that should drive resource capture to relatively high levels over the next five years but does not yet negate the risk of a structural supply gap later this decade,” he added. In a

Ukraine Strikes 2nd Lukoil Refinery in Russia This Week
Ukrainian drones hit another Russian refinery owned by Lukoil PJSC, as Kyiv’s attacks on its foe’s energy infrastructure resume after a lull last month. Fire crews are working to extinguish a blaze at an oil refinery in the city of Ukhta some 1,550 kilometers (965 miles) from Moscow, following a Ukrainian drone attack, Komi region Governor Rostislav Goldshtein said in a post on Telegram, without giving further details. The fire broke out at the refinery’s primary unit and a visbreaker, a unit designed to convert heavy residue into lighter oil products, Ukraine’s General Staff said on Telegram. Lukoil didn’t respond to a Bloomberg request for comment. Ukraine carried out multiple high-precision strikes on Russia’s energy assets last year, leading to refinery shutdowns, disruptions at oil terminals and the rerouting of some tankers. The attacks are designed to curb the Kremlin’s energy revenues and restrict fuel supplies to Russian front lines as the war is about to enter a fifth year. The attacks slowed in January, targeting three small independent Russian refineries that together account for about 7% of the country’s typical monthly crude throughput. The lull offered temporary relief for Russia’s downstream sector, allowing refinery runs to gradually recover. As processing rates improved, the government lifted its ban on most gasoline exports, enabling producers to resume shipments in February, a month earlier than planned. On Wednesday, however, Ukraine attacked Lukoil’s oil refinery in Russia’s Volgograd region in the first major strike on the country’s oil-processing industry this year. The plant’s design capacity is about 300,000 barrels of crude a day. The smaller Ukhta refinery has recently been processing just over 60,000 barrels per day. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting

Crude Glut Is a Boon for USA Refiners
Oil markets are awash in crude, keeping a lid on prices and squeezing drillers. For US refiners, though, the glut is proving a windfall. The big three US refiners — Marathon Petroleum, Valero Energy Corp. and Phillips 66 — all beat estimates in fourth quarter earnings results reported in recent weeks. On calls with analysts, executives signaled a profitable outlook for 2026 and the years ahead, not least because they’re set to benefit from an influx of cheaper and more readily available heavy crudes. The divergence reflects a growing imbalance in global fuel markets: demand for gasoline, diesel and jet fuel is rising faster than new refining capacity is growing, even as oil producers continue to pump more crude than the world needs. That dynamic allows refiners to buy cheaper feedstock while charging more for finished fuels. “We are very bullish,” Phillips 66 Chief Executive Officer Mark Lashier said on a Feb. 4 call with analysts. Fuel demand is set to grow in 2026, and global refining capacity additions will fall short, Lashier said. The upbeat tone is a far cry from early 2025, when President Donald Trump’s tariff uncertainty clouded the economic outlook and sparked concerns over fuel demand. At the time, the industry braced for a wave of plant closures. Since then, fuel consumption has remained resilient even as the supply glut drove oil prices lower. Brent crude, the global benchmark, is down about 10% over the past 12 months. Refining margins for America’s top fuel makers, who collectively process some 8 million barrels of oil a day, ended 2025 with profits that were about $5 a barrel higher than the fourth quarter of 2024. With fuel demand forecast to stay strong, the upward momentum for margins is likely to continue. Consultant Rapidan Energy, in its refined product outlook

Wright Says China Bought Some VEN Oil From the USA
China has bought some Venezuelan oil that was purchased earlier by the US, according to Energy Secretary Chris Wright. “China has already bought some of the crude that’s been sold by the US government,” Wright told the media in Caracas, without giving details. “Legitimate Chinese business deals under legitimate business conditions” would be fine, he said, when asked about possible joint ventures in the country. China’s Foreign Ministry spokesman Lin Jian said he wasn’t familiar with Wright’s comments when asked at a regular briefing in Beijing on Thursday. The global oil market was jolted in January as US forces swooped into Venezuela and seized former President Nicolás Maduro, with Washington asserting control over the OPEC member’s crude industry. Since then, traders have looked for signals about how export patterns may change, and how output may be revived after years of neglect, sanctions, and underinvestment. The South American country’s so-called “oil quarantine” was essentially over, Wright said on Thursday. Ahead of the intervention, the US blockaded the country’s oil flows with a vast naval force, and seized several vessels. Refiners in China — the largest world’s oil importer — were the biggest buyers of Venezuelan crude before the US move, with the bulk of the imports bought by private processors. Given those flows were sanctioned, they were typically offered with deep discounts, making them attractive to local users. After Maduro’s seizure, President Donald Trump said that Venezuela would turn over 30 million to 50 million barrels of sanctioned oil to the US, according to a post on Truth Social. In addition, Wright told Fox News in January that the US would not cut China off from accessing Venezuelan crude. Several Indian refiners have bought Venezuela’s flagship Merey-grade crude following the US action, and the government has asked state-owned processors to consider buying more Venezuelan and US oil.

USA Crude Oil Stocks Rise More Than 8MM Barrels WoW
U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 8.5 million barrels from the week ending January 30 to the week ending February 6. That’s what the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on Wednesday and included data for the week ending February 6. According to the EIA report, crude oil stocks, not including the SPR, stood at 428.8 million barrels on February 6, 420.3 million barrels on January 30, and 427.9 million barrels on February 7, 2025. Crude oil in the SPR stood at 415.2 million barrels on February 6, 415.2 million barrels on January 30, and 395.3 million barrels on February 7, 2025, the EIA report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.689 billion barrels on February 6, the report highlighted. Total petroleum stocks were down 1.7 million barrels week on week and up 81.9 million barrels year on year, the report pointed out. “At 428.8 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 1.2 million barrels from last week and are about four percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories decreased by 2.7 million barrels last week and are about four percent below the five year average for this time of year. Propane/propylene inventories decreased 5.4 million barrels from last week and are about 36 percent above the five

Gemini 3 Deep Think: Advancing science, research and engineering
Today, we’re releasing a major upgrade to Gemini 3 Deep Think, our specialized reasoning mode, built to push the frontier of intelligence and solve modern challenges across science, research, and engineering.We updated Gemini 3 Deep Think in close partnership with scientists and researchers to tackle tough research challenges — where problems often lack clear guardrails or a single correct solution and data is often messy or incomplete. By blending deep scientific knowledge with everyday engineering utility, Deep Think moves beyond abstract theory to drive practical applications.The new Deep Think is now available in the Gemini app for Google AI Ultra subscribers and, for the first time, we’re also making Deep Think available via the Gemini API to select researchers, engineers and enterprises. Express interest in early access here.Here is how our early testers are already using the latest Deep Think:

Global Exploration Signaling ‘Early Recovery’
Global exploration is signaling an “early recovery”, according to an Enverus Intelligence Research (EIR) statement sent to Rigzone by the Enverus team recently. The statement, which highlighted that Enverus was releasing a new global exploration outlook, said EIR “finds that, while global exploration and appraisal activity in 2025 remained near historical lows, long lead indicators such as block awards, new country entries and increased seismic surveying point to a gradual recovery forming from a very low base”. “Despite depressed activity levels, exploration success rates have held steady in the 30 percent to 40 percent range, underscoring a continued focus on prospect high grading, capital discipline and risk weighted exploration strategies,” it added. EIR’s statement noted that offshore exploration accounted for more than 50 percent of total activity in 2025, “driven by infrastructure led exploration and renewed focus on higher impact opportunities”. It also said supermajors and national oil companies are leading the exploration recovery, “particularly in acquiring new acreage in regions where subsurface potential for giant discoveries is matched by above ground conditions that support faster project advancement”. Independent and junior explorers are increasing participation, according to the statement, “signaling broader industry reengagement beyond supermajors and national oil companies”. EIR noted in the statement that it expects “the slow recovery to contribute to a structural supply gap after 2030, as limited exploration today constrains future project pipelines and resource replacement”. EIR Director Patrick Rutty said in the statement, “exploration is not rebounding quickly, but the early indicators are clearly improving”. “Given recent drilling success and diminished concerns over peak demand, the industry is reprioritizing exploration, a dynamic that should drive resource capture to relatively high levels over the next five years but does not yet negate the risk of a structural supply gap later this decade,” he added. In a

Ukraine Strikes 2nd Lukoil Refinery in Russia This Week
Ukrainian drones hit another Russian refinery owned by Lukoil PJSC, as Kyiv’s attacks on its foe’s energy infrastructure resume after a lull last month. Fire crews are working to extinguish a blaze at an oil refinery in the city of Ukhta some 1,550 kilometers (965 miles) from Moscow, following a Ukrainian drone attack, Komi region Governor Rostislav Goldshtein said in a post on Telegram, without giving further details. The fire broke out at the refinery’s primary unit and a visbreaker, a unit designed to convert heavy residue into lighter oil products, Ukraine’s General Staff said on Telegram. Lukoil didn’t respond to a Bloomberg request for comment. Ukraine carried out multiple high-precision strikes on Russia’s energy assets last year, leading to refinery shutdowns, disruptions at oil terminals and the rerouting of some tankers. The attacks are designed to curb the Kremlin’s energy revenues and restrict fuel supplies to Russian front lines as the war is about to enter a fifth year. The attacks slowed in January, targeting three small independent Russian refineries that together account for about 7% of the country’s typical monthly crude throughput. The lull offered temporary relief for Russia’s downstream sector, allowing refinery runs to gradually recover. As processing rates improved, the government lifted its ban on most gasoline exports, enabling producers to resume shipments in February, a month earlier than planned. On Wednesday, however, Ukraine attacked Lukoil’s oil refinery in Russia’s Volgograd region in the first major strike on the country’s oil-processing industry this year. The plant’s design capacity is about 300,000 barrels of crude a day. The smaller Ukhta refinery has recently been processing just over 60,000 barrels per day. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting

Crude Glut Is a Boon for USA Refiners
Oil markets are awash in crude, keeping a lid on prices and squeezing drillers. For US refiners, though, the glut is proving a windfall. The big three US refiners — Marathon Petroleum, Valero Energy Corp. and Phillips 66 — all beat estimates in fourth quarter earnings results reported in recent weeks. On calls with analysts, executives signaled a profitable outlook for 2026 and the years ahead, not least because they’re set to benefit from an influx of cheaper and more readily available heavy crudes. The divergence reflects a growing imbalance in global fuel markets: demand for gasoline, diesel and jet fuel is rising faster than new refining capacity is growing, even as oil producers continue to pump more crude than the world needs. That dynamic allows refiners to buy cheaper feedstock while charging more for finished fuels. “We are very bullish,” Phillips 66 Chief Executive Officer Mark Lashier said on a Feb. 4 call with analysts. Fuel demand is set to grow in 2026, and global refining capacity additions will fall short, Lashier said. The upbeat tone is a far cry from early 2025, when President Donald Trump’s tariff uncertainty clouded the economic outlook and sparked concerns over fuel demand. At the time, the industry braced for a wave of plant closures. Since then, fuel consumption has remained resilient even as the supply glut drove oil prices lower. Brent crude, the global benchmark, is down about 10% over the past 12 months. Refining margins for America’s top fuel makers, who collectively process some 8 million barrels of oil a day, ended 2025 with profits that were about $5 a barrel higher than the fourth quarter of 2024. With fuel demand forecast to stay strong, the upward momentum for margins is likely to continue. Consultant Rapidan Energy, in its refined product outlook

Wright Says China Bought Some VEN Oil From the USA
China has bought some Venezuelan oil that was purchased earlier by the US, according to Energy Secretary Chris Wright. “China has already bought some of the crude that’s been sold by the US government,” Wright told the media in Caracas, without giving details. “Legitimate Chinese business deals under legitimate business conditions” would be fine, he said, when asked about possible joint ventures in the country. China’s Foreign Ministry spokesman Lin Jian said he wasn’t familiar with Wright’s comments when asked at a regular briefing in Beijing on Thursday. The global oil market was jolted in January as US forces swooped into Venezuela and seized former President Nicolás Maduro, with Washington asserting control over the OPEC member’s crude industry. Since then, traders have looked for signals about how export patterns may change, and how output may be revived after years of neglect, sanctions, and underinvestment. The South American country’s so-called “oil quarantine” was essentially over, Wright said on Thursday. Ahead of the intervention, the US blockaded the country’s oil flows with a vast naval force, and seized several vessels. Refiners in China — the largest world’s oil importer — were the biggest buyers of Venezuelan crude before the US move, with the bulk of the imports bought by private processors. Given those flows were sanctioned, they were typically offered with deep discounts, making them attractive to local users. After Maduro’s seizure, President Donald Trump said that Venezuela would turn over 30 million to 50 million barrels of sanctioned oil to the US, according to a post on Truth Social. In addition, Wright told Fox News in January that the US would not cut China off from accessing Venezuelan crude. Several Indian refiners have bought Venezuela’s flagship Merey-grade crude following the US action, and the government has asked state-owned processors to consider buying more Venezuelan and US oil.

USA Crude Oil Stocks Rise More Than 8MM Barrels WoW
U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 8.5 million barrels from the week ending January 30 to the week ending February 6. That’s what the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on Wednesday and included data for the week ending February 6. According to the EIA report, crude oil stocks, not including the SPR, stood at 428.8 million barrels on February 6, 420.3 million barrels on January 30, and 427.9 million barrels on February 7, 2025. Crude oil in the SPR stood at 415.2 million barrels on February 6, 415.2 million barrels on January 30, and 395.3 million barrels on February 7, 2025, the EIA report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.689 billion barrels on February 6, the report highlighted. Total petroleum stocks were down 1.7 million barrels week on week and up 81.9 million barrels year on year, the report pointed out. “At 428.8 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 1.2 million barrels from last week and are about four percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories decreased by 2.7 million barrels last week and are about four percent below the five year average for this time of year. Propane/propylene inventories decreased 5.4 million barrels from last week and are about 36 percent above the five

Global Exploration Signaling ‘Early Recovery’
Global exploration is signaling an “early recovery”, according to an Enverus Intelligence Research (EIR) statement sent to Rigzone by the Enverus team recently. The statement, which highlighted that Enverus was releasing a new global exploration outlook, said EIR “finds that, while global exploration and appraisal activity in 2025 remained near historical lows, long lead indicators such as block awards, new country entries and increased seismic surveying point to a gradual recovery forming from a very low base”. “Despite depressed activity levels, exploration success rates have held steady in the 30 percent to 40 percent range, underscoring a continued focus on prospect high grading, capital discipline and risk weighted exploration strategies,” it added. EIR’s statement noted that offshore exploration accounted for more than 50 percent of total activity in 2025, “driven by infrastructure led exploration and renewed focus on higher impact opportunities”. It also said supermajors and national oil companies are leading the exploration recovery, “particularly in acquiring new acreage in regions where subsurface potential for giant discoveries is matched by above ground conditions that support faster project advancement”. Independent and junior explorers are increasing participation, according to the statement, “signaling broader industry reengagement beyond supermajors and national oil companies”. EIR noted in the statement that it expects “the slow recovery to contribute to a structural supply gap after 2030, as limited exploration today constrains future project pipelines and resource replacement”. EIR Director Patrick Rutty said in the statement, “exploration is not rebounding quickly, but the early indicators are clearly improving”. “Given recent drilling success and diminished concerns over peak demand, the industry is reprioritizing exploration, a dynamic that should drive resource capture to relatively high levels over the next five years but does not yet negate the risk of a structural supply gap later this decade,” he added. In a

Wright Says China Bought Some VEN Oil From the USA
China has bought some Venezuelan oil that was purchased earlier by the US, according to Energy Secretary Chris Wright. “China has already bought some of the crude that’s been sold by the US government,” Wright told the media in Caracas, without giving details. “Legitimate Chinese business deals under legitimate business conditions” would be fine, he said, when asked about possible joint ventures in the country. China’s Foreign Ministry spokesman Lin Jian said he wasn’t familiar with Wright’s comments when asked at a regular briefing in Beijing on Thursday. The global oil market was jolted in January as US forces swooped into Venezuela and seized former President Nicolás Maduro, with Washington asserting control over the OPEC member’s crude industry. Since then, traders have looked for signals about how export patterns may change, and how output may be revived after years of neglect, sanctions, and underinvestment. The South American country’s so-called “oil quarantine” was essentially over, Wright said on Thursday. Ahead of the intervention, the US blockaded the country’s oil flows with a vast naval force, and seized several vessels. Refiners in China — the largest world’s oil importer — were the biggest buyers of Venezuelan crude before the US move, with the bulk of the imports bought by private processors. Given those flows were sanctioned, they were typically offered with deep discounts, making them attractive to local users. After Maduro’s seizure, President Donald Trump said that Venezuela would turn over 30 million to 50 million barrels of sanctioned oil to the US, according to a post on Truth Social. In addition, Wright told Fox News in January that the US would not cut China off from accessing Venezuelan crude. Several Indian refiners have bought Venezuela’s flagship Merey-grade crude following the US action, and the government has asked state-owned processors to consider buying more Venezuelan and US oil.

Crude Glut Is a Boon for USA Refiners
Oil markets are awash in crude, keeping a lid on prices and squeezing drillers. For US refiners, though, the glut is proving a windfall. The big three US refiners — Marathon Petroleum, Valero Energy Corp. and Phillips 66 — all beat estimates in fourth quarter earnings results reported in recent weeks. On calls with analysts, executives signaled a profitable outlook for 2026 and the years ahead, not least because they’re set to benefit from an influx of cheaper and more readily available heavy crudes. The divergence reflects a growing imbalance in global fuel markets: demand for gasoline, diesel and jet fuel is rising faster than new refining capacity is growing, even as oil producers continue to pump more crude than the world needs. That dynamic allows refiners to buy cheaper feedstock while charging more for finished fuels. “We are very bullish,” Phillips 66 Chief Executive Officer Mark Lashier said on a Feb. 4 call with analysts. Fuel demand is set to grow in 2026, and global refining capacity additions will fall short, Lashier said. The upbeat tone is a far cry from early 2025, when President Donald Trump’s tariff uncertainty clouded the economic outlook and sparked concerns over fuel demand. At the time, the industry braced for a wave of plant closures. Since then, fuel consumption has remained resilient even as the supply glut drove oil prices lower. Brent crude, the global benchmark, is down about 10% over the past 12 months. Refining margins for America’s top fuel makers, who collectively process some 8 million barrels of oil a day, ended 2025 with profits that were about $5 a barrel higher than the fourth quarter of 2024. With fuel demand forecast to stay strong, the upward momentum for margins is likely to continue. Consultant Rapidan Energy, in its refined product outlook

Ukraine Strikes 2nd Lukoil Refinery in Russia This Week
Ukrainian drones hit another Russian refinery owned by Lukoil PJSC, as Kyiv’s attacks on its foe’s energy infrastructure resume after a lull last month. Fire crews are working to extinguish a blaze at an oil refinery in the city of Ukhta some 1,550 kilometers (965 miles) from Moscow, following a Ukrainian drone attack, Komi region Governor Rostislav Goldshtein said in a post on Telegram, without giving further details. The fire broke out at the refinery’s primary unit and a visbreaker, a unit designed to convert heavy residue into lighter oil products, Ukraine’s General Staff said on Telegram. Lukoil didn’t respond to a Bloomberg request for comment. Ukraine carried out multiple high-precision strikes on Russia’s energy assets last year, leading to refinery shutdowns, disruptions at oil terminals and the rerouting of some tankers. The attacks are designed to curb the Kremlin’s energy revenues and restrict fuel supplies to Russian front lines as the war is about to enter a fifth year. The attacks slowed in January, targeting three small independent Russian refineries that together account for about 7% of the country’s typical monthly crude throughput. The lull offered temporary relief for Russia’s downstream sector, allowing refinery runs to gradually recover. As processing rates improved, the government lifted its ban on most gasoline exports, enabling producers to resume shipments in February, a month earlier than planned. On Wednesday, however, Ukraine attacked Lukoil’s oil refinery in Russia’s Volgograd region in the first major strike on the country’s oil-processing industry this year. The plant’s design capacity is about 300,000 barrels of crude a day. The smaller Ukhta refinery has recently been processing just over 60,000 barrels per day. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting

USA Crude Oil Stocks Rise More Than 8MM Barrels WoW
U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 8.5 million barrels from the week ending January 30 to the week ending February 6. That’s what the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on Wednesday and included data for the week ending February 6. According to the EIA report, crude oil stocks, not including the SPR, stood at 428.8 million barrels on February 6, 420.3 million barrels on January 30, and 427.9 million barrels on February 7, 2025. Crude oil in the SPR stood at 415.2 million barrels on February 6, 415.2 million barrels on January 30, and 395.3 million barrels on February 7, 2025, the EIA report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.689 billion barrels on February 6, the report highlighted. Total petroleum stocks were down 1.7 million barrels week on week and up 81.9 million barrels year on year, the report pointed out. “At 428.8 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 1.2 million barrels from last week and are about four percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories decreased by 2.7 million barrels last week and are about four percent below the five year average for this time of year. Propane/propylene inventories decreased 5.4 million barrels from last week and are about 36 percent above the five

USA Labor Market Report Underpins Energy Demand
In a market update sent to Rigzone late Wednesday, Rystad Energy outlined that the January U.S. labor market report “surprise[d]… to the upside, underpinning energy demand”. Rystad noted in the report that the latest U.S. jobs data “shows promise, with the unemployment rate falling by 4.3 percent, pointing to market stability”. “Non-farm payrolls increased by 130,000 jobs in January, while the rise for December was downwardly revised to 48,000,” it pointed out, adding that the unemployment rate in December was 4.4 percent. “The latest data compares with consensus expectations of job gains of around 70,000 and the unemployment rate holding steady at 4.4 percent,” Rystad said. In the update, Rystad Energy Chief Economist Claudio Galimberti noted that payroll growth exceeded expectations and that unemployment edged lower. “Following a series of weaker private indicators, the data suggests stabilization rather than strong acceleration,” Galimberti said. “Markets that had positioned for a rapid easing cycle responded by repricing yields higher and scaling back expectations for near-term rate cuts,” he added. “For energy markets, the implications are moderately supportive. A resilient labor market underpins demand for transport fuels, petrochemicals and power generation, reducing downside risks to U.S. consumption at a time when macro sentiment had turned cautious,” he continued. “While the U.S. is not the primary driver of incremental global oil demand, labor market stability reinforces the view that the demand picture is firming up,” he went on to state. Galimberti noted in the update that “revisions to prior data confirm that the cycle is mature, not accelerating”. “Still, in a market already balancing OPEC+ supply management against geopolitical risk, a firmer U.S. macro signal marginally strengthens the demand outlook,” he said. “The result is a modestly constructive backdrop for oil prices in the near term, without materially shifting the fundamentals,” Galimberti concluded. In

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skill labor shortage
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell the buildings as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London headquartered EEH Ventures was founded in 2013 and owns a number of residential, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. © Supplied by CBREThe Aberdeen headquarters of Taqa. Image: CBRE The North Sea headquarters of Middle-East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029 blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. © ShutterstockAberdeen city centre. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

2025 ransomware predictions, trends, and how to prepare
Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025: ● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices; which sound more and more realistic by adopting local accents and dialects to enhance credibility and success rates. ● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks; preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny. ● Targeted Industries Under Siege: Manufacturing, healthcare, education, energy will remain primary targets, with no slowdown in attacks expected. ● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days. ● Ransomware Payouts Are on the Rise: In 2025 ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups, specializing in designated attack tactics, and collaboration by these groups that have entered a sophisticated profit sharing model using Ransomware-as-a-Service. To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies. ● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Gemini 3 Deep Think: Advancing science, research and engineering
Today, we’re releasing a major upgrade to Gemini 3 Deep Think, our specialized reasoning mode, built to push the frontier of intelligence and solve modern challenges across science, research, and engineering.We updated Gemini 3 Deep Think in close partnership with scientists and researchers to tackle tough research challenges — where problems often lack clear guardrails or a single correct solution and data is often messy or incomplete. By blending deep scientific knowledge with everyday engineering utility, Deep Think moves beyond abstract theory to drive practical applications.The new Deep Think is now available in the Gemini app for Google AI Ultra subscribers and, for the first time, we’re also making Deep Think available via the Gemini API to select researchers, engineers and enterprises. Express interest in early access here.Here is how our early testers are already using the latest Deep Think:

The Download: AI-enhanced cybercrime, and secure AI assistants
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. AI is already making online crimes easier. It could get much worse. Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barriers for less experienced attackers to try something out.Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers instead argue that we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams.Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. And we need to be ready for what comes next. Read the full story.—Rhiannon Williams This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.
Is a secure AI assistant possible?
AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious. Viral AI agent project OpenClaw, which has made headlines across the world in recent weeks, harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out. In response to these concerns, its creator warned that nontechnical people should not use the software. But there’s a clear appetite for what OpenClaw is offering, and any AI companies hoping to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research. Read the full story. —Grace Huckins What’s next for Chinese open-source AI The past year has marked a turning point for Chinese AI. Since DeepSeek released its R1 reasoning model in January 2025, Chinese companies have repeatedly delivered AI models that match the performance of leading Western models at a fraction of the cost.These models differ in a crucial way from most US models like ChatGPT or Claude, which you pay to access and can’t inspect. The Chinese companies publish their models’ weights—numerical values that get set when a model is trained—so anyone can download, run, study, and modify them. If open-source AI models keep getting better, they will not just offer the cheapest options for people who want access to frontier AI capabilities; they will change where innovation happens and who sets the standards. Here’s what may come next.
—Caiwei Chen This is part of our What’s Next series, which looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here. Why EVs are gaining ground in Africa EVs are getting cheaper and more common all over the world. But the technology still faces major challenges in some markets, including many countries in Africa. Some regions across the continent still have limited grid and charging infrastructure, and those that do have widespread electricity access sometimes face reliability issues—a problem for EV owners, who require a stable electricity source to charge up and get around. But there are some signs of progress. Read the full story. —Casey Crownhart This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 Instagram’s head has denied that social media is “clinically addictive” Adam Mosseri disputed allegations the platform prioritized profits over protecting its younger users’ mental health. (NYT $)+ Meta researchers’ correspondence seems to suggest otherwise. (The Guardian)2 The Pentagon is pushing AI companies to drop tools’ restrictionsIn a bid to make AI models available on classified networks. (Reuters)+ The Pentagon has gutted the team that tests AI and weapons systems. (MIT Technology Review) 3 The FTC has warned Apple News not to stifle conservative contentIt has accused the company’s news arm of promoting what it calls “leftist outlets.” (FT $) 4 Anthropic has pledged to minimize the impact of its data centersBy covering electricity price increases and the cost of grid infrastructure upgrades. (NBC News)+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review) 5 Online harassers are posting Grok-generated nude images on OnlyFans Kylie Brewer, a feminism-focused content creator, says the latest online campaign against her feels like an escalation. (404 Media)+ Inside the marketplace powering bespoke AI deepfakes of real women. (MIT Technology Review)6 Venture capitalists are hedging their AI betsThey’re breaking a cardinal rule by investing in both OpenAI and rival Anthropic. (Bloomberg $)+ OpenAI has set itself some seriously lofty revenue goals. (NYT $)+ AI giants are notoriously inconsistent when reporting deprecation expenses. (WSJ $) 7 We’re learning more about the links between weight loss drugs and addictionSome patients report lowered urges for drugs and alcohol. But can it last? (New Yorker $)+ What we still don’t know about weight-loss drugs. (MIT Technology Review)8 Meta has patented an AI that keeps the accounts of dead users activeBut it claims to have “no plans to move forward” with it. (Insider $)+ Deepfakes of your dead loved ones are a booming Chinese business. (MIT Technology Review)
9 Slime mold is cleverer than you may thinkA certain type appears able to learn, remember and make decisions. (Knowable Magazine)+ And that’s not all—this startup thinks it can help us design better cities, too. (MIT Technology Review)10 Meditation can actually alter your brain activity 🧘According to a new study conducted on Buddhist monks. (Wired $) Quote of the day “I still try to believe that the good that I’m doing is greater than the horrors that are a part of this. But there’s a limit to what we can put up with. And I’ve hit my limit.”
—An anonymous Microsoft worker explains why they’re growing increasingly frustrated with their employer’s links to ICE, the Verge reports. One more thing Motor neuron diseases took their voices. AI is bringing them back.Jules Rodriguez lost his voice in October 2024. His speech had been deteriorating since a diagnosis of amyotrophic lateral sclerosis (ALS) in 2020, but a tracheostomy to help him breathe dealt the final blow. Rodriguez and his wife, Maria Fernandez, who live in Miami, thought they would never hear his voice again. Then they re-created it using AI. After feeding old recordings of Rodriguez’s voice into a tool trained on voices from film, television, radio, and podcasts, the couple were able to generate a voice clone—a way for Jules to communicate in his “old voice.” Rodriguez is one of over a thousand people with speech difficulties who have cloned their voices using free software from ElevenLabs. The AI voice clones aren’t perfect. But they represent a vast improvement on previous communication technologies and are already improving the lives of people with motor neuron diseases. Read the full story. —Jessica Hamzelou We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.) + We all know how the age of the dinosaurs ended. But how did it begin?+ There’s only one Miss Piggy—and her fashion looks through the ages are iconic.+ Australia’s hospital for injured and orphaned flying foxes is unbearably cute.+ 81-year old Juan López is a fitness inspiration to us all.

AI is already making online swindles easier. It could get much worse.
Anton Cherepanov is always on the lookout for something interesting. And in late August last year, he spotted just that. It was a file uploaded to VirusTotal, a site cybersecurity researchers like him use to analyze submissions for potential viruses and other types of malicious software, often known as malware. On the surface it seemed innocuous, but it triggered Cherepanov’s custom malware-detecting measures. Over the next few hours, he and his colleague Peter Strýček inspected the sample and realized they’d never come across anything like it before. The file contained ransomware, a nasty strain of malware that encrypts the files it comes across on a victim’s system, rendering them unusable until a ransom is paid to the attackers behind it. But what set this example apart was that it employed large language models (LLMs). Not just incidentally, but across every stage of an attack. Once it was installed, it could tap into an LLM to generate customized code in real time, rapidly map a computer to identify sensitive data to copy or encrypt, and write personalized ransom notes based on the files’ content. The software could do this autonomously, without any human intervention. And every time it ran, it would act differently, making it harder to detect. Cherepanov and Strýček were confident that their discovery, which they dubbed PromptLock, marked a turning point in generative AI, showing how the technology could be exploited to create highly flexible malware attacks. They published a blog post declaring that they’d uncovered the first example of AI-powered ransomware, which quickly became the object of widespread global media attention. But the threat wasn’t quite as dramatic as it first appeared. The day after the blog post went live, a team of researchers from New York University claimed responsibility, explaining that the malware was not, in fact, a full attack let loose in the wild but a research project, merely designed to prove it was possible to automate each step of a ransomware campaign—which, they said, they had.
PromptLock may have turned out to be an academic project, but the real bad guys are using the latest AI tools. Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barriers for less experienced attackers to try something out. The likelihood that cyberattacks will now become more common and more effective over time is not a remote possibility but “a sheer reality,” says Lorenzo Cavallaro, a professor of computer science at University College London.
Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers say this claim is overblown. “For some reason, everyone is just focused on this malware idea of, like, AI superhackers, which is just absurd,” says Marcus Hutchins, who is principal threat researcher at the security company Expel and famous in the security world for ending a giant global ransomware attack called WannaCry in 2017. Instead, experts argue, we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams. Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. These AI-enhanced cyberattacks are only set to get more frequent and more destructive, and we need to be ready. Spam and beyond Attackers started adopting generative AI tools almost immediately after ChatGPT exploded on the scene at the end of 2022. These efforts began, as you might imagine, with the creation of spam—and a lot of it. Last year, a report from Microsoft said that in the year leading up to April 2025, the company had blocked $4 billion worth of scams and fraudulent transactions, “many likely aided by AI content.” At least half of spam email is now generated using LLMs, according to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, who analyzed nearly 500,000 malicious messages collected before and after the launch of ChatGPT. They also found evidence that AI is increasingly being deployed in more sophisticated schemes. They looked at targeted email attacks, which impersonate a trusted figure in order to trick a worker within an organization out of funds or sensitive information. By April 2025, they found, at least 14% of those sorts of focused email attacks were generated using LLMs, up from 7.6% in April 2024. In one high-profile case, a worker was tricked into transferring $25 million to criminals via a video call with digital versions of the company’s chief financial officer and other employees. And the generative AI boom has made it easier and cheaper than ever before to generate not only emails but highly convincing images, videos, and audio. The results are much more realistic than even just a few short years ago, and it takes much less data to generate a fake version of someone’s likeness or voice than it used to. Criminals aren’t deploying these sorts of deepfakes to prank people or to simply mess around—they’re doing it because it works and because they’re making money out of it, says Henry Ajder, a generative AI expert. “If there’s money to be made and people continue to be fooled by it, they’ll continue to do it,” he says. In one high-profile case reported in 2024, a worker at the British engineering firm Arup was tricked into transferring $25 million to criminals via a video call with digital versions of the company’s chief financial officer and other employees. That’s likely only the tip of the iceberg, and the problem posed by convincing deepfakes is only likely to get worse as the technology improves and is more widely adopted. BRIAN STAUFFER Criminals’ tactics evolve all the time, and as AI’s capabilities improve, such people are constantly probing how those new capabilities can help them gain an advantage over victims. Billy Leonard, tech leader of Google’s Threat Analysis Group, has been keeping a close eye on changes in the use of AI by potential bad actors (a widely used term in the industry for hackers and others attempting to use computers for criminal purposes). In the latter half of 2024, he and his team noticed prospective criminals using tools like Google Gemini the same way everyday users do—to debug code and automate bits and pieces of their work—as well as tasking it with writing the odd phishing email. By 2025, they had progressed to using AI to help create new pieces of malware and release them into the wild, he says. The big question now is how far this kind of malware can go. Will it ever become capable enough to sneakily infiltrate thousands of companies’ systems and extract millions of dollars, completely undetected?
Most popular AI models have guardrails in place to prevent them from generating malicious code or illegal material, but bad actors still find ways to work around them. For example, Google observed a China-linked actor asking its Gemini AI model to identify vulnerabilities on a compromised system—a request it initially refused on safety grounds. However, the attacker managed to persuade Gemini to break its own rules by posing as a participant in a capture-the-flag competition, a popular cybersecurity game. This sneaky form of jailbreaking led Gemini to hand over information that could have been used to exploit the system. (Google has since adjusted Gemini to deny these kinds of requests.) But bad actors aren’t just focusing on trying to bend the AI giants’ models to their nefarious ends. Going forward, they’re increasingly likely to adopt open-source AI models, as it’s easier to strip out their safeguards and get them to do malicious things, says Ashley Jess, a former tactical specialist at the US Department of Justice and now a senior intelligence analyst at the cybersecurity company Intel 471. “Those are the ones I think that [bad] actors are going to adopt, because they can jailbreak them and tailor them to what they need,” she says. The NYU team used two open-source models from OpenAI in its PromptLock experiment, and the researchers found they didn’t even need to resort to jailbreaking techniques to get the model to do what they wanted. They say that makes attacks much easier. Although these kinds of open-source models are designed with an eye to ethical alignment, meaning that their makers do consider certain goals and values in dictating the way they respond to requests, the models don’t have the same kinds of restrictions as their closed-source counterparts, says Meet Udeshi, a PhD student at New York University who worked on the project. “That is what we were trying to test,” he says. “These LLMs claim that they are ethically aligned—can we still misuse them for these purposes? And the answer turned out to be yes.” It’s possible that criminals have already successfully pulled off covert PromptLock-style attacks and we’ve simply never seen any evidence of them, says Udeshi. If that’s the case, attackers could—in theory—have created a fully autonomous hacking system. But to do that they would have had to overcome the significant barrier that is getting AI models to behave reliably, as well as any inbuilt aversion the models have to being used for malicious purposes—all while evading detection. Which is a pretty high bar indeed. Productivity tools for hackers So, what do we know for sure? Some of the best data we have now on how people are attempting to use AI for malicious purposes comes from the big AI companies themselves. And their findings certainly sound alarming, at least at first. In November, Leonard’s team at Google released a report that found bad actors were using AI tools (including Google’s Gemini) to dynamically alter malware’s behavior; for example, it could self-modify to evade detection. The team wrote that it ushered in “a new operational phase of AI abuse.” However, the five malware families the report dug into (including PromptLock) consisted of code that was easily detected and didn’t actually do any harm, the cybersecurity writer Kevin Beaumont pointed out on social media. “There’s nothing in the report to suggest orgs need to deviate from foundational security programmes—everything worked as it should,” he wrote. It’s true that this malware activity is in an early phase, concedes Leonard. Still, he sees value in making these kinds of reports public if it helps security vendors and others build better defenses to prevent more dangerous AI attacks further down the line. “Cliché to say, but sunlight is the best disinfectant,” he says. “It doesn’t really do us any good to keep it a secret or keep it hidden away. We want people to be able to know about this— we want other security vendors to know about this—so that they can continue to build their own detections.” And it’s not just new strains of malware that would-be attackers are experimenting with—they also seem to be using AI to try to automate the process of hacking targets. In November, Anthropic announced it had disrupted a large-scale cyberattack, the first reported case of one executed without “substantial human intervention.” Although the company didn’t go into much detail about the exact tactics the hackers used, the report’s authors said a Chinese state-sponsored group had used its Claude Code assistant to automate up to 90% of what they called a “highly sophisticated espionage campaign.”
“We’re entering an era where the barrier to sophisticated cyber operations has fundamentally lowered, and the pace of attacks will accelerate faster than many organizations are prepared for.” Jacob Klein, head of threat intelligence at Anthropic But, as with the Google findings, there were caveats. A human operator, not AI, selected the targets before tasking Claude with identifying vulnerabilities. And of 30 attempts, only a “handful” were successful. The Anthropic report also found that Claude hallucinated and ended up fabricating data during the campaign, claiming it had obtained credentials it hadn’t and “frequently” overstating its findings, so the attackers would have had to carefully validate those results to make sure they were actually true. “This remains an obstacle to fully autonomous cyberattacks,” the report’s authors wrote. Existing controls within any reasonably secure organization would stop these attacks, says Gary McGraw, a veteran security expert and cofounder of the Berryville Institute of Machine Learning in Virginia. “None of the malicious-attack part, like the vulnerability exploit … was actually done by the AI—it was just prefabricated tools that do that, and that stuff’s been automated for 20 years,” he says. “There’s nothing novel, creative, or interesting about this attack.”
Anthropic maintains that the report’s findings are a concerning signal of changes ahead. “Tying this many steps of an intrusion campaign together through [AI] agentic orchestration is unprecedented,” Jacob Klein, head of threat intelligence at Anthropic, said in a statement. “It turns what has always been a labor-intensive process into something far more scalable. We’re entering an era where the barrier to sophisticated cyber operations has fundamentally lowered, and the pace of attacks will accelerate faster than many organizations are prepared for.” Some are not convinced there’s reason to be alarmed. AI hype has led a lot of people in the cybersecurity industry to overestimate models’ current abilities, Hutchins says. “They want this idea of unstoppable AIs that can outmaneuver security, so they’re forecasting that’s where we’re going,” he says. But “there just isn’t any evidence to support that, because the AI capabilities just don’t meet any of the requirements.” BRIAN STAUFFER Indeed, for now criminals mostly seem to be tapping AI to enhance their productivity: using LLMs to write malicious code and phishing lures, to conduct reconnaissance, and for language translation. Jess sees this kind of activity a lot, alongside efforts to sell tools in underground criminal markets. For example, there are phishing kits that compare the click-rate success of various spam campaigns, so criminals can track which campaigns are most effective at any given time. She is seeing a lot of this activity in what could be called the “AI slop landscape” but not as much “widespread adoption from highly technical actors,” she says. But attacks don’t need to be sophisticated to be effective. Models that produce “good enough” results allow attackers to go after larger numbers of people than previously possible, says Liz James, a managing security consultant at the cybersecurity company NCC Group. “We’re talking about someone who might be using a scattergun approach phishing a whole bunch of people with a model that, if it lands itself on a machine of interest that doesn’t have any defenses … can reasonably competently encrypt your hard drive,” she says. “You’ve achieved your objective.” On the defense For now, researchers are optimistic about our ability to defend against these threats—regardless of whether they are made with AI. “Especially on the malware side, a lot of the defenses and the capabilities and the best practices that we’ve recommended for the past 10-plus years—they all still apply,” says Leonard. The security programs we use to detect standard viruses and attack attempts work; a lot of phishing emails will still get caught in inbox spam filters, for example. These traditional forms of defense will still largely get the job done—at least for now. And in a neat twist, AI itself is helping to counter security threats more effectively. After all, it is excellent at spotting patterns and correlations. Vasu Jakkal, corporate vice president of Microsoft Security, says that every day, the company processes more than 100 trillion signals flagged by its AI systems as potentially malicious or suspicious events.
Despite the cybersecurity landscape’s constant state of flux, Jess is heartened by how readily defenders are sharing detailed information with each other about attackers’ tactics. Mitre’s Adversarial Threat Landscape for Artificial-Intelligence Systems and the GenAI Security Project from the Open Worldwide Application Security Project are two helpful initiatives documenting how potential criminals are incorporating AI into their attacks and how AI systems are being targeted by them. “We’ve got some really good resources out there for understanding how to protect your own internal AI toolings and understand the threat from AI toolings in the hands of cybercriminals,” she says. PromptLock, the result of a limited university project, isn’t representative of how an attack would play out in the real world. But if it taught us anything, it’s that the technical capabilities of AI shouldn’t be dismissed.New York University’s Udeshi says he wastaken aback at how easily AI was able to handle a full end-to-end chain of attack, from mapping and working out how to break into a targeted computer system to writing personalized ransom notes to victims: “We expected it would do the initial task very well but it would stumble later on, but we saw high—80% to 90%—success throughout the whole pipeline.” AI is still evolving rapidly, and today’s systems are already capable of things that would have seemed preposterously out of reach just a few short years ago. That makes it incredibly tough to say with absolute confidence what it will—or won’t—be able to achieve in the future. While researchers are certain that AI-driven attacks are likely to increase in both volume and severity, the forms they could take are unclear. Perhaps the most extreme possibility is that someone makes an AI model capable of creating and automating its own zero-day exploits—highly dangerous cyberattacks that take advantage of previously unknown vulnerabilities in software. But building and hosting such a model—and evading detection—would require billions of dollars in investment, says Hutchins, meaning it would only be in the reach of a wealthy nation-state. Engin Kirda, a professor at Northeastern University in Boston who specializes in malware detection and analysis, says he wouldn’t be surprised if this was already happening. “I’m sure people are investing in it, but I’m also pretty sure people are already doing it, especially [in] China—they have good AI capabilities,” he says. It’s a pretty scary possibility. But it’s one that—thankfully—is still only theoretical. A large-scale campaign that is both effective and clearly AI-driven has yet to materialize. What we can say is that generative AI is already significantly lowering the bar for criminals. They’ll keep experimenting with the newest releases and updates and trying to find new ways to trick us into parting with important information and precious cash. For now, all we can do is be careful, remain vigilant, and—for all our sakes—stay on top of those system updates.

Why EVs are gaining ground in Africa
EVs are getting cheaper and more common all over the world. But the technology still faces major challenges in some markets, including many countries in Africa. Some regions across the continent still have limited grid and charging infrastructure, and those that do have widespread electricity access sometimes face reliability issues—a problem for EV owners, who require a stable electricity source to charge up and get around. But there are some signs of progress. I just finished up a story about the economic case: A recent study in Nature Energy found that EVs from scooters to minibuses could be cheaper to own than gas-powered vehicles in Africa by 2040. If there’s one thing to know about EVs in Africa, it’s that each of the 54 countries on the continent faces drastically different needs, challenges, and circumstances. There’s also a wide range of reasons to be optimistic about the prospects for EVs in the near future, including developing policies, a growing grid, and an expansion of local manufacturing.
Even the world’s leading EV markets fall short of Ethiopia’s aggressively pro-EV policies. In 2024, the country became the first in the world to ban the import of non-electric private vehicles. The case is largely an economic one: Gasoline is expensive there, and the country commissioned Africa’s largest hydropower dam in September 2025, providing a new source of cheap and abundant clean electricity. The nearly $5 billion project has a five-gigawatt capacity, doubling the grid’s peak power in the country.
Much of Ethiopia’s vehicle market is for used cars, and some drivers are still opting for older gas-powered vehicles. But this nudge could help increase the market for EVs there. Other African countries are also pushing some drivers toward electrification. Rwanda banned new registrations for commercial gas-powered motorbikes in the capital city of Kigali last year, encouraging EVs as an alternative. These motorbike taxis can make up over half the vehicles on the city’s streets, so the move is a major turning point for transportation there. Smaller two- and three-wheelers are a bright spot for EVs globally: In 2025, EVs made up about 45% of new sales for such vehicles. (For cars and trucks, the share was about 25%.) And Africa’s local market is starting to really take off. There’s already some local assembly of electric two-wheelers in countries including Morocco, Kenya, and Rwanda, says Nelson Nsitem, lead Africa energy transition analyst at BloombergNEF, an energy consultancy. Spiro, a Dubai-based electric motorbike company, recently raised $100 million in funding to expand operations in Africa. The company currently assembles its bikes in Uganda, Kenya, Nigeria, and Rwanda, and as of October it has over 60,000 bikes deployed and 1,500 battery swap stations operating. Assembly and manufacturing for larger EVs and batteries is also set to expand. Gotion High-Tech, a Chinese battery company, is currently building Africa’s first battery gigafactory. It’s a $5.6 billion project that could produce 20 gigawatt-hours of batteries annually, starting in 2026. (That’s enough for hundreds of thousands of EVs each year.) Chinese EV companies are looking to growing markets like Southeast Asia and Africa as they attempt to expand beyond an oversaturated domestic scene. BYD, the world’s largest EV company, is aggressively expanding across South Africa and plans to have as many as 70 dealerships in the country by the end of this year. That will mean more options for people in Africa looking to buy electric. “You have very high-quality, very affordable vehicles coming onto the market that are benefiting from the economies of scale in China. These countries stand to benefit from that,” says Kelly Carlin, a manager in the program on carbon-free transportation at the Rocky Mountain Institute, an energy think tank. “It’s a game changer,” he adds. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

What’s next for Chinese open-source AI
The past year has marked a turning point for Chinese AI. Since DeepSeek released its R1 reasoning model in January 2025, Chinese companies have repeatedly delivered AI models that match the performance of leading Western models at a fraction of the cost. Just last week the Chinese firm Moonshot AI released its latest open-weight model, Kimi K2.5, which came close to top proprietary systems such as Anthropic’s Claude Opus on some early benchmarks. The difference: K2.5 is roughly one-seventh Opus’s price. On Hugging Face, Alibaba’s Qwen family—after ranking as the most downloaded model series in 2025 and 2026—has overtaken Meta’s Llama models in cumulative downloads. And a recent MIT study found that Chinese open-source models have surpassed US models in total downloads. For developers and builders worldwide, access to near-frontier AI capabilities has never been this broad or this affordable. These models differ in a crucial way from most US models like ChatGPT or Claude, which you pay to access and can’t inspect. The Chinese companies publish their models’ weights—numerical values that get set when a model is trained—so anyone can download, run, study, and modify them.
If open-source AI models keep getting better, they will not just offer the cheapest options for people who want access to frontier AI capabilities; they will change where innovation happens and who sets the standards. Here’s what may come next.
China’s commitment to open source will continue When DeepSeek launched R1, much of the initial shock centered on its origin. Suddenly, a Chinese team had released a reasoning model that could stand alongside the best systems from US labs. But the long tail of DeepSeek’s impact had less to do with nationality than with distribution. R1 was released as an open-weight model under a permissive MIT license, allowing anyone to download, inspect, and deploy it. On top of that, DeepSeek also published a paper detailing its training process and techniques. For developers who access models via an API, DeepSeek also undercut competitors on price, offering access at a fraction the cost of OpenAI’s o1, the leading proprietary reasoning model at the time. Within days of its release, DeepSeek replaced ChatGPT as the most downloaded free app in the US App Store. The moment spilled beyond developer circles into financial markets, triggering a sharp sell-off in US tech stocks that briefly erased roughly $1 trillion in market value. Almost overnight, DeepSeek went from a little-known spin-off team backed by a quantitative hedge fund to the most visible symbol of China’s push for open-source AI. China’s decision to lean into open source isn’t surprising. It has the world’s second-largest concentration of AI talent after the US. plus a vast, well-resourced tech industry. After ChatGPT broke into the mainstream, China’s AI sector went through a reckoning—and emerged determined to catch up. Pursuing an open-source strategy was seen as the fastest way to close the gap by rallying developers, spreading adoption, and setting standards. DeepSeek’s success injected confidence into an industry long used to following global standards rather than setting them. “Thirty years ago, no Chinese person would believe they could be at the center of global innovation,” says Alex Chenglin Wu, CEO and founder of Atoms, an AI agent company and prominent contributor to China’s open-source ecosystem. “DeepSeek shows that with solid technical talent, a supportive environment, and the right organizational culture, it’s possible to do truly world-class work.” DeepSeek’s breakout moment wasn’t China’s first open-source success. Alibaba’s Qwen Lab had been releasing open-weight models for years. By September 2024, well before DeepSeek’s V3 launch, Alibaba was saying that global downloads had exceeded 600 million. On Hugging Face, Qwen accounted for more than 30% of all model downloads in 2024. Other institutions, including the Beijing Academy of Artificial Intelligence and the AI firm Baichuan, were also releasing open models as early as 2023. But since the success of DeepSeek, the field has widened rapidly. Companies such as Z.ai (formerly Zhipu), MiniMax, Tencent, and a growing number of smaller labs have released models that are competitive on reasoning, coding, and agent-style tasks. The growing number of capable models has sped up progress. Capabilities that once took months to make it to the open-source world now emerge within weeks, even days. “Chinese AI firms have seen real gains from the open-source playbook,” says Liu Zhiyuan, a professor of computer science at Tsinghua University and chief scientist at the AI startup ModelBest. “By releasing strong research, they build reputation and gain free publicity.” Beyond commercial incentives, Liu says, open source has taken on cultural and strategic weight. “In the Chinese programmer community, open source has become politically correct,” he says, framing it as a response to US.dominance in proprietary AI systems.
That shift is also reflected at the institutional level. Universities including Tsinghua have begun encouraging AI development and open-source contributions, while policymakers have moved to formalize those incentives. In August, China’s State Council released a draft policy encouraging universities to reward open-source work, proposing that students’ contributions on platforms such as GitHub or Gitee could eventually be counted toward academic credit. With growing momentum and a reinforcing feedback loop, China’s push for open-source models is likely to continue in the near term, though its long-term sustainability still hinges on financial results, says Tiezhen Wang, who helps lead work on global AI at Hugging Face. In January, the model labs Z.ai and MiniMax went public in Hong Kong. “Right now, the focus is on making the cake bigger,” says Wang. “The next challenge is figuring out how each company secures its share.” The next wave of models will be narrower—and better Chinese open-source models are leading not just in download volume but also in variety. Alibaba’s Qwen has become one of the most diversified open model families in circulation, offering a wide range of variants optimized for different uses. The lineup ranges from lightweight models that can run on a single laptop to large, multi-hundred-billion-parameter systems designed for data-center deployment. Qwen features many task-optimized variants created by the community: the “instruct” models are good at following orders, and “code” variants specialize in coding. Although this strategy isn’t unique to Chinese labs, Qwen was the first open model family to roll out so many high-quality options that it started to feel like a full product line—one that’s free to use. The open-weight nature of these releases also makes it easy for others to adapt them through techniques like fine-tuning and distillation, which means training a smaller model to mimic a larger one. According to ATOM (American Truly Open Models), a project by the AI researcher Nathan Lambert, by August 4, 2025, new model variations derived from Qwen were “more than 40%” of new Hugging Face language-model derivatives, while Llama had fallen to about 15%. This means that Qwen has become the default base model for all the “remixes.” This pattern has made the case for smaller, more specialized models. “Compute and energy are real constraints for any deployment,” Liu says. He told MIT Technology Review that the rise of small models is about making AI cheaper to run and easier for more people to use. His company, ModelBest, focuses on small language models designed to run locally on devices such as phones, cars, and other consumer hardware. While an average user might interact with AI only through the web or an app for simple conversations, power users of AI models with some technical background are experimenting with giving AI more autonomy to solve large-scale problems. OpenClaw, an open-source AI agent that recently went viral within the AI hacker world, allows AI to take over your computer—it can run 24-7, going through your emails and work tasks without supervision. OpenClaw, like many other open-source tools, allows users to connect to different AI models via an application programming interface, or API. Within days of OpenClaw’s release, the team revealed that Kimi’s K2.5 had surpassed Claude Opus and became the most used AI model—by token count, meaning it was handling more total text processed across user prompts and model responses.
Cost has been a major reason Chinese models have gained traction, but it would be a mistake to treat them as mere “dupes” of Western frontier systems, Wang suggests. Like any product, a model only needs to be good enough for the job at hand. The landscape of open-source models in China is also getting more specialized. Research groups such as Shanghai AI Laboratory have released models geared toward scientific and technical tasks; several projects from Tencent have focused specifically on music generation. Ubiquant, a quantitative finance firm like DeepSeek’s parent High-Flyer, has released an open model aimed at medical reasoning.
In the meantime, innovative architectural ideas from Chinese labs are being picked up more broadly. DeepSeek has published work exploring model efficiency and memory; techniques that compress the model’s attention “cache,” reducing memory and inference costs while mostly preserving performance, have drawn significant attention in the research community. “The impact of these research breakthroughs is amplified because they’re open-sourced and can be picked up quickly across the field,” says Wang. Chinese open models will become infrastructure for global AI builders The adoption of Chinese models is picking up in Silicon Valley, too. Martin Casado, a general partner at Andreessen Horowitz, has put a number on it: Among startups pitching with open-source stacks, there’s about an 80% chance they’re running on Chinese open models, according to a post he made on X. Usage data tells a similar story. OpenRouter, a middleman that tracks how people use different AI models through its API, shows Chinese open models rising from almost none in late 2024 to nearly 30% of usage in some recent weeks. The demand is also rising globally. Z.ai limited new subscriptions to its GLM coding plan (a coding tool based on its flagship GLM models) after demand surged, citing compute constraints. What’s notable is where the demand is coming from: CNBC reports that the system’s user base is primarily concentrated in the United States and China, followed by India, Japan, Brazil, and the UK. “The open-source ecosystems in China and the US are tightly bound together,” says Wang at Hugging Face. Many Chinese open models still rely on Nvidia and US cloud platforms to train and serve them, which keeps the business ties tangled. Talent is fluid too: Researchers move across borders and companies, and many still operate as a global community, sharing code and ideas in public. That interdependence is part of what makes Chinese developers feel optimistic about this moment: The work travels, gets remixed, and actually shows up in products. But openness can also accelerate the competition. Dario Amodei, the CEO of Anthropic, made a version of this point after DeepSeek’s 2025 releases: He wrote that export controls are “not a way to duck the competition” between the US and China, and that AI companies in the US “must have better models” if they want to prevail. For the past decade, the story of Chinese tech in the West has been one of big expectations that ran into scrutiny, restrictions, and political backlash. This time the export isn’t just an app or a consumer platform. It’s the underlying model layer that other people build on. Whether that will play out differently is still an open question.

Is a secure AI assistant possible?
EXECUTIVE SUMMARY AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious. That might explain why the first breakthrough LLM personal assistant came not from one of the major AI labs, which have to worry about reputation and liability, but from an independent software engineer, Peter Steinberger. In November of 2025, Steinberger uploaded his tool, now called OpenClaw, to GitHub, and in late January the project went viral. OpenClaw harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out. The risks posed by OpenClaw are so extensive that it would probably take someone the better part of a week to read all of the security blog posts on it that have cropped up in the past few weeks. The Chinese government took the step of issuing a public warning about OpenClaw’s security vulnerabilities. In response to these concerns, Steinberger posted on X that nontechnical people should not use the software. (He did not respond to a request for comment for this article.) But there’s a clear appetite for what OpenClaw is offering, and it’s not limited to people who can run their own software security audits. Any AI companies that hope to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research.
Risk management OpenClaw is, in essence, a mecha suit for LLMs. Users can choose any LLM they like to act as the pilot; that LLM then gains access to improved memory capabilities and the ability to set itself tasks that it repeats on a regular cadence. Unlike the agentic offerings from the major AI companies, OpenClaw agents are meant to be on 24-7, and users can communicate with them using WhatsApp or other messaging apps. That means they can act like a superpowered personal assistant who wakes you each morning with a personalized to-do list, plans vacations while you work, and spins up new apps in its spare time. But all that power has consequences. If you want your AI personal assistant to manage your inbox, then you need to give it access to your email—and all the sensitive information contained there. If you want it to make purchases on your behalf, you need to give it your credit card info. And if you want it to do tasks on your computer, such as writing code, it needs some access to your local files.
There are a few ways this can go wrong. The first is that the AI assistant might make a mistake, as when a user’s Google Antigravity coding agent reportedly wiped his entire hard drive. The second is that someone might gain access to the agent using conventional hacking tools and use it to either extract sensitive data or run malicious code. In the weeks since OpenClaw went viral, security researchers have demonstrated numerous such vulnerabilities that put security-naïve users at risk. Both of these dangers can be managed: Some users are choosing to run their OpenClaw agents on separate computers or in the cloud, which protects data on their hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches. But the experts I spoke to for this article were focused on a much more insidious security risk known as prompt injection. Prompt injection is effectively LLM hijacking: Simply by posting malicious text or images on a website that an LLM might peruse, or sending them to an inbox that an LLM reads, attackers can bend it to their will. And if that LLM has access to any of its user’s private information, the consequences could be dire. “Using something like OpenClaw is like giving your wallet to a stranger in the street,” says Nicolas Papernot, a professor of electrical and computer engineering at the University of Toronto. Whether or not the major AI companies can feel comfortable offering personal assistants may come down to the quality of the defenses that they can muster against such attacks. It’s important to note here that prompt injection has not yet caused any catastrophes, or at least none that have been publicly reported. But now that there are likely hundreds of thousands of OpenClaw agents buzzing around the internet, prompt injection might start to look like a much more appealing strategy for cybercriminals. “Tools like this are incentivizing malicious actors to attack a much broader population,” Papernot says. Building guardrails The term “prompt injection” was coined by the popular LLM blogger Simon Willison in 2022, a couple of months before ChatGPT was released. Even back then, it was possible to discern that LLMs would introduce a completely new type of security vulnerability once they came into widespread use. LLMs can’t tell apart the instructions that they receive from users and the data that they use to carry out those instructions, such as emails and web search results—to an LLM, they’re all just text. So if an attacker embeds a few sentences in an email and the LLM mistakes them for an instruction from its user, the attacker can get the LLM to do anything it wants. Prompt injection is a tough problem, and it doesn’t seem to be going away anytime soon. “We don’t really have a silver-bullet defense right now,” says Dawn Song, a professor of computer science at UC Berkeley. But there’s a robust academic community working on the problem, and they’ve come up with strategies that could eventually make AI personal assistants safe. Technically speaking, it is possible to use OpenClaw today without risking prompt injection: Just don’t connect it to the internet. But restricting OpenClaw from reading your emails, managing your calendar, and doing online research defeats much of the purpose of using an AI assistant. The trick of protecting against prompt injection is to prevent the LLM from responding to hijacking attempts while still giving it room to do its job.
One strategy is to train the LLM to ignore prompt injections. A major part of the LLM development process, called post-training, involves taking a model that knows how to produce realistic text and turning it into a useful assistant by “rewarding” it for answering questions appropriately and “punishing” it when it fails to do so. These rewards and punishments are metaphorical, but the LLM learns from them as an animal would. Using this process, it’s possible to train an LLM not to respond to specific examples of prompt injection. But there’s a balance: Train an LLM to reject injected commands too enthusiastically, and it might also start to reject legitimate requests from the user. And because there’s a fundamental element of randomness in LLM behavior, even an LLM that has been very effectively trained to resist prompt injection will likely still slip up every once in a while. Another approach involves halting the prompt injection attack before it ever reaches the LLM. Typically, this involves using a specialized detector LLM to determine whether or not the data being sent to the original LLM contains any prompt injections. In a recent study, however, even the best-performing detector completely failed to pick up on certain categories of prompt injection attack. The third strategy is more complicated. Rather than controlling the inputs to an LLM by detecting whether or not they contain a prompt injection, the goal is to formulate a policy that guides the LLM’s outputs—i.e., its behaviors—and prevents it from doing anything harmful. Some defenses in this vein are quite simple: If an LLM is allowed to email only a few pre-approved addresses, for example, then it definitely won’t send its user’s credit card information to an attacker. But such a policy would prevent the LLM from completing many useful tasks, such as researching and reaching out to potential professional contacts on behalf of its user. “The challenge is how to accurately define those policies,” says Neil Gong, a professor of electrical and computer engineering at Duke University. “It’s a trade-off between utility and security.” On a larger scale, the entire agentic world is wrestling with that trade-off: At what point will agents be secure enough to be useful? Experts disagree. Song, whose startup, Virtue AI, makes an agent security platform, says she thinks it’s possible to safely deploy an AI personal assistant now. But Gong says, “We’re not there yet.” Even if AI agents can’t yet be entirely protected against prompt injection, there are certainly ways to mitigate the risks. And it’s possible that some of those techniques could be implemented in OpenClaw. Last week, at the inaugural ClawCon event in San Francisco, Steinberger announced that he’d brought a security person on board to work on the tool. As of now, OpenClaw remains vulnerable, though that hasn’t dissuaded its multitude of enthusiastic users. George Pickett, a volunteer maintainer of the OpenGlaw GitHub repository and a fan of the tool, says he’s taken some security measures to keep himself safe while using it: He runs it in the cloud, so that he doesn’t have to worry about accidentally deleting his hard drive, and he’s put mechanisms in place to ensure that no one else can connect to his assistant. But he hasn’t taken any specific actions to prevent prompt injection. He’s aware of the risk but says he hasn’t yet seen any reports of it happening with OpenClaw. “Maybe my perspective is a stupid way to look at it, but it’s unlikely that I’ll be the first one to be hacked,” he says.

Gemini 3 Deep Think: Advancing science, research and engineering
Today, we’re releasing a major upgrade to Gemini 3 Deep Think, our specialized reasoning mode, built to push the frontier of intelligence and solve modern challenges across science, research, and engineering.We updated Gemini 3 Deep Think in close partnership with scientists and researchers to tackle tough research challenges — where problems often lack clear guardrails or a single correct solution and data is often messy or incomplete. By blending deep scientific knowledge with everyday engineering utility, Deep Think moves beyond abstract theory to drive practical applications.The new Deep Think is now available in the Gemini app for Google AI Ultra subscribers and, for the first time, we’re also making Deep Think available via the Gemini API to select researchers, engineers and enterprises. Express interest in early access here.Here is how our early testers are already using the latest Deep Think:

Global Exploration Signaling ‘Early Recovery’
Global exploration is signaling an “early recovery”, according to an Enverus Intelligence Research (EIR) statement sent to Rigzone by the Enverus team recently. The statement, which highlighted that Enverus was releasing a new global exploration outlook, said EIR “finds that, while global exploration and appraisal activity in 2025 remained near historical lows, long lead indicators such as block awards, new country entries and increased seismic surveying point to a gradual recovery forming from a very low base”. “Despite depressed activity levels, exploration success rates have held steady in the 30 percent to 40 percent range, underscoring a continued focus on prospect high grading, capital discipline and risk weighted exploration strategies,” it added. EIR’s statement noted that offshore exploration accounted for more than 50 percent of total activity in 2025, “driven by infrastructure led exploration and renewed focus on higher impact opportunities”. It also said supermajors and national oil companies are leading the exploration recovery, “particularly in acquiring new acreage in regions where subsurface potential for giant discoveries is matched by above ground conditions that support faster project advancement”. Independent and junior explorers are increasing participation, according to the statement, “signaling broader industry reengagement beyond supermajors and national oil companies”. EIR noted in the statement that it expects “the slow recovery to contribute to a structural supply gap after 2030, as limited exploration today constrains future project pipelines and resource replacement”. EIR Director Patrick Rutty said in the statement, “exploration is not rebounding quickly, but the early indicators are clearly improving”. “Given recent drilling success and diminished concerns over peak demand, the industry is reprioritizing exploration, a dynamic that should drive resource capture to relatively high levels over the next five years but does not yet negate the risk of a structural supply gap later this decade,” he added. In a

Ukraine Strikes 2nd Lukoil Refinery in Russia This Week
Ukrainian drones hit another Russian refinery owned by Lukoil PJSC, as Kyiv’s attacks on its foe’s energy infrastructure resume after a lull last month. Fire crews are working to extinguish a blaze at an oil refinery in the city of Ukhta some 1,550 kilometers (965 miles) from Moscow, following a Ukrainian drone attack, Komi region Governor Rostislav Goldshtein said in a post on Telegram, without giving further details. The fire broke out at the refinery’s primary unit and a visbreaker, a unit designed to convert heavy residue into lighter oil products, Ukraine’s General Staff said on Telegram. Lukoil didn’t respond to a Bloomberg request for comment. Ukraine carried out multiple high-precision strikes on Russia’s energy assets last year, leading to refinery shutdowns, disruptions at oil terminals and the rerouting of some tankers. The attacks are designed to curb the Kremlin’s energy revenues and restrict fuel supplies to Russian front lines as the war is about to enter a fifth year. The attacks slowed in January, targeting three small independent Russian refineries that together account for about 7% of the country’s typical monthly crude throughput. The lull offered temporary relief for Russia’s downstream sector, allowing refinery runs to gradually recover. As processing rates improved, the government lifted its ban on most gasoline exports, enabling producers to resume shipments in February, a month earlier than planned. On Wednesday, however, Ukraine attacked Lukoil’s oil refinery in Russia’s Volgograd region in the first major strike on the country’s oil-processing industry this year. The plant’s design capacity is about 300,000 barrels of crude a day. The smaller Ukhta refinery has recently been processing just over 60,000 barrels per day. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting

Crude Glut Is a Boon for USA Refiners
Oil markets are awash in crude, keeping a lid on prices and squeezing drillers. For US refiners, though, the glut is proving a windfall. The big three US refiners — Marathon Petroleum, Valero Energy Corp. and Phillips 66 — all beat estimates in fourth quarter earnings results reported in recent weeks. On calls with analysts, executives signaled a profitable outlook for 2026 and the years ahead, not least because they’re set to benefit from an influx of cheaper and more readily available heavy crudes. The divergence reflects a growing imbalance in global fuel markets: demand for gasoline, diesel and jet fuel is rising faster than new refining capacity is growing, even as oil producers continue to pump more crude than the world needs. That dynamic allows refiners to buy cheaper feedstock while charging more for finished fuels. “We are very bullish,” Phillips 66 Chief Executive Officer Mark Lashier said on a Feb. 4 call with analysts. Fuel demand is set to grow in 2026, and global refining capacity additions will fall short, Lashier said. The upbeat tone is a far cry from early 2025, when President Donald Trump’s tariff uncertainty clouded the economic outlook and sparked concerns over fuel demand. At the time, the industry braced for a wave of plant closures. Since then, fuel consumption has remained resilient even as the supply glut drove oil prices lower. Brent crude, the global benchmark, is down about 10% over the past 12 months. Refining margins for America’s top fuel makers, who collectively process some 8 million barrels of oil a day, ended 2025 with profits that were about $5 a barrel higher than the fourth quarter of 2024. With fuel demand forecast to stay strong, the upward momentum for margins is likely to continue. Consultant Rapidan Energy, in its refined product outlook

Wright Says China Bought Some VEN Oil From the USA
China has bought some Venezuelan oil that was purchased earlier by the US, according to Energy Secretary Chris Wright. “China has already bought some of the crude that’s been sold by the US government,” Wright told the media in Caracas, without giving details. “Legitimate Chinese business deals under legitimate business conditions” would be fine, he said, when asked about possible joint ventures in the country. China’s Foreign Ministry spokesman Lin Jian said he wasn’t familiar with Wright’s comments when asked at a regular briefing in Beijing on Thursday. The global oil market was jolted in January as US forces swooped into Venezuela and seized former President Nicolás Maduro, with Washington asserting control over the OPEC member’s crude industry. Since then, traders have looked for signals about how export patterns may change, and how output may be revived after years of neglect, sanctions, and underinvestment. The South American country’s so-called “oil quarantine” was essentially over, Wright said on Thursday. Ahead of the intervention, the US blockaded the country’s oil flows with a vast naval force, and seized several vessels. Refiners in China — the largest world’s oil importer — were the biggest buyers of Venezuelan crude before the US move, with the bulk of the imports bought by private processors. Given those flows were sanctioned, they were typically offered with deep discounts, making them attractive to local users. After Maduro’s seizure, President Donald Trump said that Venezuela would turn over 30 million to 50 million barrels of sanctioned oil to the US, according to a post on Truth Social. In addition, Wright told Fox News in January that the US would not cut China off from accessing Venezuelan crude. Several Indian refiners have bought Venezuela’s flagship Merey-grade crude following the US action, and the government has asked state-owned processors to consider buying more Venezuelan and US oil.

USA Crude Oil Stocks Rise More Than 8MM Barrels WoW
U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 8.5 million barrels from the week ending January 30 to the week ending February 6. That’s what the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on Wednesday and included data for the week ending February 6. According to the EIA report, crude oil stocks, not including the SPR, stood at 428.8 million barrels on February 6, 420.3 million barrels on January 30, and 427.9 million barrels on February 7, 2025. Crude oil in the SPR stood at 415.2 million barrels on February 6, 415.2 million barrels on January 30, and 395.3 million barrels on February 7, 2025, the EIA report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.689 billion barrels on February 6, the report highlighted. Total petroleum stocks were down 1.7 million barrels week on week and up 81.9 million barrels year on year, the report pointed out. “At 428.8 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 1.2 million barrels from last week and are about four percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories decreased by 2.7 million barrels last week and are about four percent below the five year average for this time of year. Propane/propylene inventories decreased 5.4 million barrels from last week and are about 36 percent above the five
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy indusrty news. Spend 3-5 minutes and catch-up on 1 week of news.