Stay Ahead, Stay ONMINE

Is a secure AI assistant possible?

EXECUTIVE SUMMARY AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious. That might explain why the first breakthrough LLM personal assistant came not from one of the major AI labs, which have to worry about reputation and liability, but from an independent software engineer, Peter Steinberger. In November of 2025, Steinberger uploaded his tool, now called OpenClaw, to GitHub, and in late January the project went viral. OpenClaw harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out. The risks posed by OpenClaw are so extensive that it would probably take someone the better part of a week to read all of the security blog posts on it that have cropped up in the past few weeks. The Chinese government took the step of issuing a public warning about OpenClaw’s security vulnerabilities. In response to these concerns, Steinberger posted on X that nontechnical people should not use the software. (He did not respond to a request for comment for this article.) But there’s a clear appetite for what OpenClaw is offering, and it’s not limited to people who can run their own software security audits. Any AI companies that hope to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research. Risk management OpenClaw is, in essence, a mecha suit for LLMs. Users can choose any LLM they like to act as the pilot; that LLM then gains access to improved memory capabilities and the ability to set itself tasks that it repeats on a regular cadence. Unlike the agentic offerings from the major AI companies, OpenClaw agents are meant to be on 24-7, and users can communicate with them using WhatsApp or other messaging apps. That means they can act like a superpowered personal assistant who wakes you each morning with a personalized to-do list, plans vacations while you work, and spins up new apps in its spare time. But all that power has consequences. If you want your AI personal assistant to manage your inbox, then you need to give it access to your email—and all the sensitive information contained there. If you want it to make purchases on your behalf, you need to give it your credit card info. And if you want it to do tasks on your computer, such as writing code, it needs some access to your local files.  There are a few ways this can go wrong. The first is that the AI assistant might make a mistake, as when a user’s Google Antigravity coding agent reportedly wiped his entire hard drive. The second is that someone might gain access to the agent using conventional hacking tools and use it to either extract sensitive data or run malicious code. In the weeks since OpenClaw went viral, security researchers have demonstrated numerous such vulnerabilities that put security-naïve users at risk. Both of these dangers can be managed: Some users are choosing to run their OpenClaw agents on separate computers or in the cloud, which protects data on their hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches. But the experts I spoke to for this article were focused on a much more insidious security risk known as prompt injection. Prompt injection is effectively LLM hijacking: Simply by posting malicious text or images on a website that an LLM might peruse, or sending them to an inbox that an LLM reads, attackers can bend it to their will. And if that LLM has access to any of its user’s private information, the consequences could be dire. “Using something like OpenClaw is like giving your wallet to a stranger in the street,” says Nicolas Papernot, a professor of electrical and computer engineering at the University of Toronto. Whether or not the major AI companies can feel comfortable offering personal assistants may come down to the quality of the defenses that they can muster against such attacks. It’s important to note here that prompt injection has not yet caused any catastrophes, or at least none that have been publicly reported. But now that there are likely hundreds of thousands of OpenClaw agents buzzing around the internet, prompt injection might start to look like a much more appealing strategy for cybercriminals. “Tools like this are incentivizing malicious actors to attack a much broader population,” Papernot says.  Building guardrails The term “prompt injection” was coined by the popular LLM blogger Simon Willison in 2022, a couple of months before ChatGPT was released. Even back then, it was possible to discern that LLMs would introduce a completely new type of security vulnerability once they came into widespread use. LLMs can’t tell apart the instructions that they receive from users and the data that they use to carry out those instructions, such as emails and web search results—to an LLM, they’re all just text. So if an attacker embeds a few sentences in an email and the LLM mistakes them for an instruction from its user, the attacker can get the LLM to do anything it wants. Prompt injection is a tough problem, and it doesn’t seem to be going away anytime soon. “We don’t really have a silver-bullet defense right now,” says Dawn Song, a professor of computer science at UC Berkeley. But there’s a robust academic community working on the problem, and they’ve come up with strategies that could eventually make AI personal assistants safe. Technically speaking, it is possible to use OpenClaw today without risking prompt injection: Just don’t connect it to the internet. But restricting OpenClaw from reading your emails, managing your calendar, and doing online research defeats much of the purpose of using an AI assistant. The trick of protecting against prompt injection is to prevent the LLM from responding to hijacking attempts while still giving it room to do its job. One strategy is to train the LLM to ignore prompt injections. A major part of the LLM development process, called post-training, involves taking a model that knows how to produce realistic text and turning it into a useful assistant by “rewarding” it for answering questions appropriately and “punishing” it when it fails to do so. These rewards and punishments are metaphorical, but the LLM learns from them as an animal would. Using this process, it’s possible to train an LLM not to respond to specific examples of prompt injection. But there’s a balance: Train an LLM to reject injected commands too enthusiastically, and it might also start to reject legitimate requests from the user. And because there’s a fundamental element of randomness in LLM behavior, even an LLM that has been very effectively trained to resist prompt injection will likely still slip up every once in a while. Another approach involves halting the prompt injection attack before it ever reaches the LLM. Typically, this involves using a specialized detector LLM to determine whether or not the data being sent to the original LLM contains any prompt injections. In a recent study, however, even the best-performing detector completely failed to pick up on certain categories of prompt injection attack. The third strategy is more complicated. Rather than controlling the inputs to an LLM by detecting whether or not they contain a prompt injection, the goal is to formulate a policy that guides the LLM’s outputs—i.e., its behaviors—and prevents it from doing anything harmful. Some defenses in this vein are quite simple: If an LLM is allowed to email only a few pre-approved addresses, for example, then it definitely won’t send its user’s credit card information to an attacker. But such a policy would prevent the LLM from completing many useful tasks, such as researching and reaching out to potential professional contacts on behalf of its user. “The challenge is how to accurately define those policies,” says Neil Gong, a professor of electrical and computer engineering at Duke University. “It’s a trade-off between utility and security.” On a larger scale, the entire agentic world is wrestling with that trade-off: At what point will agents be secure enough to be useful? Experts disagree. Song, whose startup, Virtue AI, makes an agent security platform, says she thinks it’s possible to safely deploy an AI personal assistant now. But Gong says, “We’re not there yet.”  Even if AI agents can’t yet be entirely protected against prompt injection, there are certainly ways to mitigate the risks. And it’s possible that some of those techniques could be implemented in OpenClaw. Last week, at the inaugural ClawCon event in San Francisco, Steinberger announced that he’d brought a security person on board to work on the tool. As of now, OpenClaw remains vulnerable, though that hasn’t dissuaded its multitude of enthusiastic users. George Pickett, a volunteer maintainer of the OpenGlaw GitHub repository and a fan of the tool, says he’s taken some security measures to keep himself safe while using it: He runs it in the cloud, so that he doesn’t have to worry about accidentally deleting his hard drive, and he’s put mechanisms in place to ensure that no one else can connect to his assistant. But he hasn’t taken any specific actions to prevent prompt injection. He’s aware of the risk but says he hasn’t yet seen any reports of it happening with OpenClaw. “Maybe my perspective is a stupid way to look at it, but it’s unlikely that I’ll be the first one to be hacked,” he says.

AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious.

That might explain why the first breakthrough LLM personal assistant came not from one of the major AI labs, which have to worry about reputation and liability, but from an independent software engineer, Peter Steinberger. In November of 2025, Steinberger uploaded his tool, now called OpenClaw, to GitHub, and in late January the project went viral.

OpenClaw harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out. The risks posed by OpenClaw are so extensive that it would probably take someone the better part of a week to read all of the security blog posts on it that have cropped up in the past few weeks. The Chinese government took the step of issuing a public warning about OpenClaw’s security vulnerabilities.

In response to these concerns, Steinberger posted on X that nontechnical people should not use the software. (He did not respond to a request for comment for this article.) But there’s a clear appetite for what OpenClaw is offering, and it’s not limited to people who can run their own software security audits. Any AI companies that hope to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research.

Risk management

OpenClaw is, in essence, a mecha suit for LLMs. Users can choose any LLM they like to act as the pilot; that LLM then gains access to improved memory capabilities and the ability to set itself tasks that it repeats on a regular cadence. Unlike the agentic offerings from the major AI companies, OpenClaw agents are meant to be on 24-7, and users can communicate with them using WhatsApp or other messaging apps. That means they can act like a superpowered personal assistant who wakes you each morning with a personalized to-do list, plans vacations while you work, and spins up new apps in its spare time.

But all that power has consequences. If you want your AI personal assistant to manage your inbox, then you need to give it access to your email—and all the sensitive information contained there. If you want it to make purchases on your behalf, you need to give it your credit card info. And if you want it to do tasks on your computer, such as writing code, it needs some access to your local files. 

There are a few ways this can go wrong. The first is that the AI assistant might make a mistake, as when a user’s Google Antigravity coding agent reportedly wiped his entire hard drive. The second is that someone might gain access to the agent using conventional hacking tools and use it to either extract sensitive data or run malicious code. In the weeks since OpenClaw went viral, security researchers have demonstrated numerous such vulnerabilities that put security-naïve users at risk.

Both of these dangers can be managed: Some users are choosing to run their OpenClaw agents on separate computers or in the cloud, which protects data on their hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches.

But the experts I spoke to for this article were focused on a much more insidious security risk known as prompt injection. Prompt injection is effectively LLM hijacking: Simply by posting malicious text or images on a website that an LLM might peruse, or sending them to an inbox that an LLM reads, attackers can bend it to their will.

And if that LLM has access to any of its user’s private information, the consequences could be dire. “Using something like OpenClaw is like giving your wallet to a stranger in the street,” says Nicolas Papernot, a professor of electrical and computer engineering at the University of Toronto. Whether or not the major AI companies can feel comfortable offering personal assistants may come down to the quality of the defenses that they can muster against such attacks.

It’s important to note here that prompt injection has not yet caused any catastrophes, or at least none that have been publicly reported. But now that there are likely hundreds of thousands of OpenClaw agents buzzing around the internet, prompt injection might start to look like a much more appealing strategy for cybercriminals. “Tools like this are incentivizing malicious actors to attack a much broader population,” Papernot says. 

Building guardrails

The term “prompt injection” was coined by the popular LLM blogger Simon Willison in 2022, a couple of months before ChatGPT was released. Even back then, it was possible to discern that LLMs would introduce a completely new type of security vulnerability once they came into widespread use. LLMs can’t tell apart the instructions that they receive from users and the data that they use to carry out those instructions, such as emails and web search results—to an LLM, they’re all just text. So if an attacker embeds a few sentences in an email and the LLM mistakes them for an instruction from its user, the attacker can get the LLM to do anything it wants.

Prompt injection is a tough problem, and it doesn’t seem to be going away anytime soon. “We don’t really have a silver-bullet defense right now,” says Dawn Song, a professor of computer science at UC Berkeley. But there’s a robust academic community working on the problem, and they’ve come up with strategies that could eventually make AI personal assistants safe.

Technically speaking, it is possible to use OpenClaw today without risking prompt injection: Just don’t connect it to the internet. But restricting OpenClaw from reading your emails, managing your calendar, and doing online research defeats much of the purpose of using an AI assistant. The trick of protecting against prompt injection is to prevent the LLM from responding to hijacking attempts while still giving it room to do its job.

One strategy is to train the LLM to ignore prompt injections. A major part of the LLM development process, called post-training, involves taking a model that knows how to produce realistic text and turning it into a useful assistant by “rewarding” it for answering questions appropriately and “punishing” it when it fails to do so. These rewards and punishments are metaphorical, but the LLM learns from them as an animal would. Using this process, it’s possible to train an LLM not to respond to specific examples of prompt injection.

But there’s a balance: Train an LLM to reject injected commands too enthusiastically, and it might also start to reject legitimate requests from the user. And because there’s a fundamental element of randomness in LLM behavior, even an LLM that has been very effectively trained to resist prompt injection will likely still slip up every once in a while.

Another approach involves halting the prompt injection attack before it ever reaches the LLM. Typically, this involves using a specialized detector LLM to determine whether or not the data being sent to the original LLM contains any prompt injections. In a recent study, however, even the best-performing detector completely failed to pick up on certain categories of prompt injection attack.

The third strategy is more complicated. Rather than controlling the inputs to an LLM by detecting whether or not they contain a prompt injection, the goal is to formulate a policy that guides the LLM’s outputs—i.e., its behaviors—and prevents it from doing anything harmful. Some defenses in this vein are quite simple: If an LLM is allowed to email only a few pre-approved addresses, for example, then it definitely won’t send its user’s credit card information to an attacker. But such a policy would prevent the LLM from completing many useful tasks, such as researching and reaching out to potential professional contacts on behalf of its user.

“The challenge is how to accurately define those policies,” says Neil Gong, a professor of electrical and computer engineering at Duke University. “It’s a trade-off between utility and security.”

On a larger scale, the entire agentic world is wrestling with that trade-off: At what point will agents be secure enough to be useful? Experts disagree. Song, whose startup, Virtue AI, makes an agent security platform, says she thinks it’s possible to safely deploy an AI personal assistant now. But Gong says, “We’re not there yet.” 

Even if AI agents can’t yet be entirely protected against prompt injection, there are certainly ways to mitigate the risks. And it’s possible that some of those techniques could be implemented in OpenClaw. Last week, at the inaugural ClawCon event in San Francisco, Steinberger announced that he’d brought a security person on board to work on the tool.

As of now, OpenClaw remains vulnerable, though that hasn’t dissuaded its multitude of enthusiastic users. George Pickett, a volunteer maintainer of the OpenGlaw GitHub repository and a fan of the tool, says he’s taken some security measures to keep himself safe while using it: He runs it in the cloud, so that he doesn’t have to worry about accidentally deleting his hard drive, and he’s put mechanisms in place to ensure that no one else can connect to his assistant.

But he hasn’t taken any specific actions to prevent prompt injection. He’s aware of the risk but says he hasn’t yet seen any reports of it happening with OpenClaw. “Maybe my perspective is a stupid way to look at it, but it’s unlikely that I’ll be the first one to be hacked,” he says.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

NetBrain’s new AI agents automate network diagnosis

In testing, the system handled the majority of real-world network issues. “90% of the real-world network issues that they had when they threw them at it, it handled it,” Nixon said. “[People] couldn’t quite believe that it was at the 90% mark. People went in thinking, ‘Well, if this gives me

Read More »

IBM FlashSystems gain AI-assisted telemetry, analytics

For security, the systems include a new FlashCore Module all-flash drive, which brings hardware-accelerated, real-time ransomware detection, data reduction, analytics and operations. The devices can spot anomalies and patterns in data that need to be remediated, IBM noted. “The next-generation IBM FlashSystem elevates storage to an intelligent, always-available layer, where autonomous

Read More »

Versa bolsters data protection, AI-powered operations in SASE upgrade

Docker-containerized ML models execute data discovery and classification locally, maintaining data sovereignty while scanning file repositories, SaaS applications, and inline traffic flows, the authors stated.  “Versa DLP uses advanced transformer models and fine-tuned Large Language Models (LLMs) to detect sensitive information across diverse document types and formats. Unlike traditional pattern

Read More »

DKnife targets network gateways in long running AitM campaign

Beyond update hijacking, the framework supports DNS manipulation, binary replacement, and selective traffic forwarding, giving attackers control over how specific requests are handled. Indicators point to China-Nexus development and targeting Several aspects of DKnife’s design and operation suggested ties to China-aligned threat actors. Talos identified configuration data and code comments written in

Read More »

OPEC Says Oil Production Declined Last Month

OPEC+ oil production declined sharply last month amid losses in Kazakhstan, Venezuela and Iran, the group said.  The 22 nations of the alliance produced an average of 42.448 million barrels a day in January, or 439,000 a day less than the previous month, according to a copy of the group’s monthly report obtained by Bloomberg. Kazakhstan accounted for more than half of the drop. While the report didn’t give a reason for the overall decline, Kazakhstan’s production fell as it suspended operations at the Tengiz oil field, the country’s largest. The Chevron-led venture started to restore output there at the end of last month.  Separately, Venezuelan oil exports were disrupted by a US blockade during the ousting of former President Nicolas Maduro, while Iran continues to face American sanctions. Saudi Arabia and several other key nations held steady in January as the Organization of the Petroleum Exporting Countries and its allies began a three-month freeze to offset a seasonal lull in consumption. They’ll meet online on March 1 to review production levels for April and beyond. OPEC kept forecasts for global oil supply and demand unchanged for this year and next, according to the report. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Ukraine Hits Lukoil Refinery

Ukraine attacked an oil refinery in Russia’s Volgograd region in the first major strike on Russia’s oil-processing industry this year. An overnight drone strike sparked a fire at the facility, Ukraine’s General Staff said on Telegram Wednesday. “The scope of the damage is being clarified,” it said, adding that the refinery helps supply the Russian army. Ukraine carried out multiple high-precision strikes on Russia’s energy assets last year, leading to refinery shutdowns, disruptions at oil terminals and the rerouting of some tankers. The attacks were designed to curb the Kremlin’s energy revenues and restrict fuel supplies to Russian front lines in the war, now nearing its fifth year. The Volgograd refinery, which was attacked several times last year, has a design capacity of about 300,000 barrels of crude a day. It mainly supplies oil products to southern Russia, with some volumes exported. The administration of the Volgograd region said in a Telegram statement that an an industrial plant caught fire after a drone attack but did not name the facility. Lukoil, Russia’s largest private oil producer, did not immediately respond to a request for comment. Satellite images from NASA’s Fire Information for Resource Management System show multiple fires at the refinery that began during the night of Feb. 10-11. The fires were not visible the previous day, according to the data. In January, Ukraine targeted three small independent Russian refineries, which together account for about 7% of Russia’s typical monthly crude throughput. The lull in drone strikes had offered temporary relief for Russia’s downstream sector, allowing refinery runs to gradually increase. Encouraged by the recovery, the government lifted its ban on most gasoline exports, permitting producers to resume shipments in February — a month earlier than planned. While Ukrainian attacks on Russia’s oil industry slowed in January, Moscow continued intense assaults on energy infrastructure

Read More »

TotalEnergies Cuts Buyback to Lower End of Range

(Update) February 11, 2026, 5:10 PM GMT: Article updated with comments on dividend growth, potential investment decisions and acquisitions from 14th paragraph. TotalEnergies SE trimmed its share buybacks to the lower end of its guidance range, aiming to keep debt in check as it adjusts to lower oil prices. The company plans to repurchase $750 million of stock in the first quarter, compared with $1.5 billion in the final three months of 2025, it said in an earnings statement Wednesday. For the year, its buyback target was kept at a range of $3 billion to $6 billion. TotalEnergies is the third and last of Europe’s top oil and gas producers to release earnings after Shell Plc and BP Plc published disappointing quarterly reports. The company has a lower ratio of debt to equity than its European peers and kept quarterly dividend unchanged. “This year we want to balance cash generation with cash expenditure,” Chief Executive Officer Patrick Pouyanne said during a press conference in Paris to discuss earnings. “We don’t know what will happen this year. We want to keep a healthy balance sheet.” Shares of Total closed 2.7% up, at their highest since July 2024. The company has a “solid balance sheet despite uncertain environment,“ Jefferies analysts led by Mark Wilson said in a note after the earnings release. While Big Oil is still churning out hefty profits, cash flows — particularly in Europe — have been undermined by last year’s 18% dive in crude prices. There are also widespread forecasts that the market will remain oversupplied this year as production swells both inside and outside the OPEC+ alliance. “Oil supply remains abundant, so the market is rather trending down,” Pouyanne said, adding that sanctions on Russia are causing a buildup of the nation’s crude at sea. Total’s adjusted

Read More »

EIA Sees Brent Price Dropping in 2026 and 2027

In its latest short term energy outlook (STEO), which was released on February 10, the U.S. Energy Information Administration (EIA) projected that the average Brent spot price will drop in 2026 and 2027. According to this STEO, the EIA sees the Brent spot price coming in at $57.69 per barrel in 2026 and $53.00 per barrel in 2027. The Brent spot price averaged $69.04 per barrel in 2025, the STEO showed. A quarterly breakdown included in the EIA’s latest STEO showed that the organization expects the Brent spot price to come in at $64.44 per barrel in the first quarter of this year, $57.32 per barrel in the second quarter, $55.35 per barrel in the third quarter, $54.00 per barrel in the fourth quarter, and $53.00 per barrel across the first, second, third, and fourth quarters of next year. In the STEO, the EIA highlighted that the Brent crude oil spot price averaged $67 per barrel in January, which it pointed out was $4 per barrel higher than the average in December. The EIA noted that daily Brent crude oil prices increased from an average of $62 per barrel on January 2 to $72 per barrel on January 30. “Crude oil prices rose in response to disruptions to crude oil production in the United States and Kazakhstan,” the EIA highlighted in the STEO. “Despite the near-term increase in prices and short-term disruptions to oil supply, we forecast that strong growth in global oil production will result in high global oil inventory builds over the forecast, causing crude oil prices to fall,” it added. “We forecast that Brent spot prices will average $58 per barrel in 2026 and $53 per barrel in 2027, down from an average of $69 per barrel in 2025,” it continued. In its STEO, the EIA said

Read More »

USA Allows Oilfield Contractors to Go to Work in VEN Fields

The US government issued a general license to allow oilfield-service companies to work in Venezuela as the Trump administration eases sanctions and pushes to rebuild the nation’s crude infrastructure. The license issued by the Treasury Department allows US firms to explore, develop and produce oil and natural gas in Venezuela under certain limited conditions, according to a statement Tuesday. The move is the latest in a series of steps Washington has taken to entice US companies to revive output from Venezuela’s vast crude reserves after last month’s capture of strongman Nicolás Maduro. In January, the US issued a general license that allowed for a wide range of crude operations, including exporting, transporting, refining and buying and selling crude. The general license announced Tuesday involves tasks such as geological mapping, reservoir analysis and related tasks that augment the commencement of oil production.  However, the license does not allow new joint ventures in Venezuela. US people and firms will need to provide detailed plans to the State Department and Department of Energy for any work in the country, according to the statement. The Treasury Department is also preparing to issue a general license allowing companies to pump oil in Venezuela, Bloomberg reported earlier this month.  Oilfield service companies are hired by producers to asses discoveries, drill wells, and enhance output from older assets. SLB Ltd., Halliburton Co. and Baker Hughes Co. dominate the sector. SLB has been working in Venezuela for Chevron Corp., operating under a US license held by the supermajor. The other large contractors scaled back or shut down their primary operations in the country as the previous regime tightened control over the energy industry.  WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate

Read More »

EIA Fuel Update Shows Increasing Price Trend for USA Gasoline

The U.S. Energy Information Administration’s (EIA) latest gasoline fuel update, which was released on February 10, showed an increasing trend for the U.S. regular gasoline price. According to the update, the U.S. regular gasoline price averaged $2.853 per gallon on January 26, $2.867 per gallon on February 2, and $2.902 per gallon on February 9. Although the February 9 price was up $0.035 from the week ago price, it was down $0.226 from the year ago price, the update outlined. Of the five Petroleum Administration for Defense District (PADD) regions highlighted in the EIA’s latest fuel update, the West Coast was shown to have the highest U.S. regular gasoline price as of February 9, at $3.938 per gallon. The Gulf Coast was shown in the update to have the lowest U.S. regular gasoline price as of February 9, at $2.476 per gallon. A glossary section of the EIA site notes that the 50 U.S. states and the District of Columbia are divided into five districts, with PADD 1 further split into three subdistricts. PADDs 6 and 7 encompass U.S. territories, the site adds. In a blog posted on its website on February 9, GasBuddy noted that, according to its data, the U.S. average price of gasoline “has risen 1.2 cents over the last week and stands at $2.84 per gallon”. “The national average is up 5.4 cents from a month ago and is 24.9 cents per gallon lower than a year ago,” it added. In that blog, Patrick De Haan, head of petroleum analysis at GasBuddy, said, “the national average price of gasoline only edged slightly higher last week, but nine of the ten largest weekly price movements were increases, led by West Coast states as California begins the transition to summer gasoline”. “Most states saw relatively minor fluctuations, but we’re

Read More »

Energy providers seek flexible load strategies for data center operations

“In theory, yes, they’d have to wait a little bit longer while their queries are routed to a data center that has capacity,” said Lawrence. The one thing the industry cannot do is operate like it has in the past, where data center power was tuned and then forgotten for six months. Previously, data centers would test their power sources once or twice a year. They don’t have that luxury anymore. They need to check their power sources and loads far more regularly, according to Lawrence. “I think that for that for the data center industry to continue to survive like we all need it, there’s going to have to be some realignment on the incentives to why somebody would become flexible,” said Lawrence. The survey suggests that utilities and load operators expect to expand their demand response activities and budgets in the near term. Sixty-three percent of respondents anticipate DR program funding to grow by 50% or more over the next three years. While they remain a major source of load growth and system strain, 57% of respondents indicate that onsite power generation from data centers will be most important to improving grid stability over the next five years. One of the proposed fixes to the power shortage has been small modular nuclear reactors. These have gained a lot of traction in the marketplace even if they have nothing to sell yet. But Lawrence said that that’s not an ideal solution for existing power generators, ironically enough.

Read More »

Nokia predicts huge WAN traffic growth, but experts question assumptions

Consumer, which includes both mobile access and fixed access, including fixed wireless access. Enterprise and industrial, which covers wide-area connectivity that supports knowledge work, automation, machine vision, robotics coordination, field support, and industrial IoT. AI, including applications that people directly invoke, such as assistants, copilots, and media generation, as well as autonomous use cases in which AI systems trigger other AI systems to perform functions and move data across networks. The report outlines three scenarios: conservative, moderate, and aggressive. “Our goal is to present scenarios that fall within a realistic range of possible outcomes, encouraging stakeholders to plan across the full spectrum of high-impact demand possibilities,” the report says. Nokia’s prediction for global WAN traffic growth ranges from a 13% CAGR for the conservative scenario to 16% CAGR for moderate and 22% CAGR for aggressive. Looking more closely at the moderate scenario, it’s clear that consumer traffic dominates. Enterprise and industrial traffic make up only about 14% to 17% of overall WAN traffic, although their share is expected to grow during the 10-year forecast period. “On the consumer side, the vast majority of traffic by volume is video,” says William Webb, CEO of the consulting firm Commcisive. Asked whether any of that consumer traffic is at some point served up by enterprises, the answer is a decisive “no.” It’s mostly YouTube and streaming services like Netflix, he says. In short, that doesn’t raise enterprise concerns. Nokia predicts AI traffic boom AI is a different story. “Consumer- and enterprise-generated AI traffic imposes a substantial impact on the wide-area network (WAN) by adding AI workloads processed by data centers across the WAN. AI traffic does not stay inside one data center; it moves across edge, metro, core, and cloud infrastructure, driving dense lateral flows and new capacity demands,” the report says. An

Read More »

Cisco amps up Silicon One line, delivers new systems and optics for AI networking

Those building blocks include the new G300 as well as the G200 51.2 Tbps chip, which is aimed at spine and aggregation applications, and the G100 25.6 Tbps chip, which is aimed at leaf operations. Expanded portfolio of Silicon One P200-powered systems Cisco in October rolled out the P200 Silicon One chip and the high-end, 51.2 Tbps 8223 router aimed at distributed AI workloads. That system supports Octal Small Form-Factor Pluggable (OSFP) and Quad Small Form-Factor Pluggable Double Density (QSFP-DD) optical form factors that help the box support geographically dispersed AI clusters. Cisco grew the G200 family this week with the addition of the 8122X-64EF-O, a 64x800G switch that will run the SONiC OS and includes support for Cisco 800G Linear Pluggable Optics (LPO) connectivity. LPO components typically set up direct links between fiber optic modules, eliminating the need for traditional components such as a digital signal processor. Cisco said its P200 systems running IOS XR software now better support core routing services to allow data-center-to-data-center links and data center interconnect applications. In addition, Cisco introduced a P200-powered 88-LC2-36EF-M line card, which delivers 28.8T of capacity. “Available for both our 8-slot and 18-slot modular systems, this line card enables up to an unprecedented 518.4T of total system bandwidth, the highest in the industry,” wrote Guru Shenoy, senior vice president of the Cisco provider connectivity group, in a blog post about the news. “When paired with Cisco 800G ZR/ZR+ coherent pluggable optics, these systems can easily connect sites over 1,000 kilometers apart, providing the high-density performance needed for modern data center interconnects and core routing.”

Read More »

NetBox Labs ships AI copilot designed for network engineers, not developers

Natural language for network engineers Beevers explained that network operations teams face two fundamental barriers to automation. First, they lack accurate data about their infrastructure. Second, they aren’t software developers and shouldn’t have to become them. “These are not software developers. They are network engineers or IT infrastructure engineers,” Beevers said. “The big realization for us through the copilot journey is they will never be software developers. Let’s stop trying to make them be. Let’s let these computers that are really good at being software developers do that, and let’s let the network engineers or the data center engineers be really good at what they’re really good at.”  That vision drove the development of NetBox Copilot’s natural language interface and its capabilities. Grounding AI in infrastructure reality The challenge with deploying AI  in network operations is trust. Generic large language models hallucinate, produce inconsistent results, and lack the operational context to make reliable decisions. NetBox Copilot addresses this by grounding the AI agent in NetBox’s comprehensive infrastructure data model. NetBox serves as the system of record for network and infrastructure teams, maintaining a semantic map of devices, connections, IP addressing, rack layouts, power distribution and the relationships between these elements. Copilot has native awareness of this data structure and the context it provides. This enables queries that would be difficult or impossible with traditional interfaces. Network engineers can ask “Which devices are missing IP addresses?” to validate data completeness, “Who changed this prefix last week?” for change tracking and compliance, or “What depends on this switch?” for impact analysis before maintenance windows.

Read More »

US pushes voluntary pact to curb AI data center energy impact

Others note that cost pressure isn’t limited to the server rack. Danish Faruqui, CEO of Fab Economics, said the AI ecosystem is layered from silicon to software services, creating multiple points where infrastructure expenses eventually resurface. “Cloud service providers are likely to gradually introduce more granular pricing models across cloud, AI, and SaaS offerings, tailored by customer type, as they work to absorb the costs associated with the White House energy and grid compact,” Faruqui said.   This may not show up as explicit energy surcharges, but instead surface through reduced discounts, higher spending commitments, and premiums for guaranteed capacity or performance. “Smaller enterprises will feel the impact first, while large strategic customers remain insulated longer,” Rawat said. “Ultimately, the compact would delay and redistribute cost pressure; it does not eliminate it.” Implications for data center design The proposal is also likely to accelerate changes in how AI facilities are designed. “Data centers will evolve into localized microgrids that combine utility power with on-site generation and higher-level implementation of battery energy storage systems,” Faruqui said. “Designing for grid interaction will become imperative for AI data centers, requiring intelligent, high-speed switching gear, increased battery energy storage capacity for frequency regulation, and advanced control systems that can manage on-site resources.”

Read More »

Intel teams with SoftBank to develop new memory type

However, don’t expect anything anytime soon. Intel’s Director of Global Strategic Partnerships Sanam Masroor outlined the plans in a blog post. Operations are expected to begin in Q1 2026, with prototypes due in 2027 and commercial products by 2030. While Intel has not come out and said it, that memory design is almost identical to HBM used in GPU accelerators and AI data centers. HBM sits right on the GPU die for immediate access to the GPU, unlike standard DRAM which resides on memory sticks plugged into the motherboard. HBM is much faster than DDR memory but is also much more expensive to produce. It’s also much more profitable than standard DRAM which is why the big three memory makers – Micron, Samsung, and SK Hynix – are favoring production of it.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »