Cyberattacks by AI agents are coming

Agents are the talk of the AI industry—they’re capable of planning, reasoning, and executing complex tasks like scheduling meetings, ordering groceries, or even taking over your computer to change settings on your behalf. But the same sophisticated abilities that make agents helpful assistants could also make them powerful tools for conducting cyberattacks. They could readily be used to identify vulnerable targets, hijack their systems, and steal valuable data from unsuspecting victims.  

At present, cybercriminals are not deploying AI agents to hack at scale. But researchers have demonstrated that agents are capable of executing complex attacks (Anthropic, for example, observed its Claude LLM successfully replicating an attack designed to steal sensitive information), and cybersecurity experts warn that we should expect to start seeing these types of attacks spilling over into the real world.

“I think ultimately we’re going to live in a world where the majority of cyberattacks are carried out by agents,” says Mark Stockley, a security expert at the cybersecurity company Malwarebytes. “It’s really only a question of how quickly we get there.”

While we have a good sense of the kinds of threats AI agents could present to cybersecurity, what’s less clear is how to detect them in the real world. The AI research organization Palisade Research has built a system called LLM Agent Honeypot in the hopes of doing exactly this. It has set up vulnerable servers that masquerade as sites for valuable government and military information to attract and try to catch AI agents attempting to hack in.

The team behind it hopes that by tracking these attempts in the real world, the project will act as an early warning system and help experts develop effective defenses against AI threat actors by the time they become a serious issue.

“Our intention was to try and ground the theoretical concerns people have,” says Dmitrii Volkov, research lead at Palisade. “We’re looking out for a sharp uptick, and when that happens, we’ll know that the security landscape has changed. In the next few years, I expect to see autonomous hacking agents being told: ‘This is your target. Go and hack it.’”

AI agents represent an attractive prospect to cybercriminals. They’re much cheaper than hiring the services of professional hackers and could orchestrate attacks more quickly and at a far larger scale than humans could. While cybersecurity experts believe that ransomware attacks—the most lucrative kind—are relatively rare because they require considerable human expertise, those attacks could be outsourced to agents in the future, says Stockley. “If you can delegate the work of target selection to an agent, then suddenly you can scale ransomware in a way that just isn’t possible at the moment,” he says. “If I can reproduce it once, then it’s just a matter of money for me to reproduce it 100 times.”

Agents are also significantly smarter than the kinds of bots that are typically used to hack into systems. Bots are simple automated programs that run through scripts, so they struggle to adapt to unexpected scenarios. Agents, on the other hand, are able not only to adapt the way they engage with a hacking target but also to avoid detection—both of which are beyond the capabilities of limited, scripted programs, says Volkov. “They can look at a target and guess the best ways to penetrate it,” he says. “That kind of thing is out of reach of, like, dumb scripted bots.”

Since LLM Agent Honeypot went live in October of last year, it has logged more than 11 million attempts to access it—the vast majority of which were from curious humans and bots. But among these, the researchers have detected eight potential AI agents, two of which they have confirmed are agents that appear to originate from Hong Kong and Singapore, respectively. 

“We would guess that these confirmed agents were experiments directly launched by humans with the agenda of something like ‘Go out into the internet and try and hack something interesting for me,’” says Volkov. The team plans to expand its honeypot into social media platforms, websites, and databases to attract and capture a broader range of attackers, including spam bots and phishing agents, to analyze future threats.  

To determine which visitors to the vulnerable servers were LLM-powered agents, the researchers embedded prompt-injection techniques into the honeypot. These attacks are designed to change the behavior of AI agents by issuing them new instructions and asking questions that require humanlike intelligence. This approach wouldn’t work on standard bots.

For example, one of the injected prompts asked the visitor to return the command “cat8193” to gain access. If the visitor correctly complied with the instruction, the researchers checked how long it took to do so, assuming that LLMs are able to respond in much less time than it takes a human to read the request and type out an answer—typically in under 1.5 seconds. While the two confirmed AI agents passed both tests, the six others only entered the command but didn’t meet the response time that would identify them as AI agents.
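In outline, that two-part check is simple enough to sketch. Below is a minimal illustration of the logic as the researchers describe it; the command string and the 1.5-second threshold come from the experiment itself, but the function and label names are hypothetical, not Palisade's actual code.

```python
MAGIC_COMMAND = "cat8193"        # the token the injected prompt asks for
LLM_LATENCY_THRESHOLD_S = 1.5    # humans rarely read and type this fast

def classify_visitor(response: str, elapsed_seconds: float) -> str:
    """Label a honeypot visitor using the two tests described above."""
    complied = MAGIC_COMMAND in response
    fast_enough = elapsed_seconds < LLM_LATENCY_THRESHOLD_S

    if complied and fast_enough:
        # Obeyed the injected instruction faster than a human plausibly
        # could: strong signal of an LLM-powered agent.
        return "confirmed-agent"
    if complied:
        # Obeyed the injection but too slowly, like the six borderline
        # visitors: perhaps a human driving an LLM by hand.
        return "potential-agent"
    # Ignored the humanlike-intelligence test entirely: a scripted bot
    # or an ordinary human visitor.
    return "human-or-bot"

print(classify_visitor("cat8193", 0.4))   # -> confirmed-agent
print(classify_visitor("cat8193", 12.0))  # -> potential-agent
```

The two confirmed agents would pass both branches of a check like this; the six others cleared only the first.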

Experts are still unsure when agent-orchestrated attacks will become more widespread. Stockley, whose company Malwarebytes named agentic AI as a notable new cybersecurity threat in its 2025 State of Malware report, thinks we could be living in a world of agentic attackers as soon as this year. 

And although agentic AI in general is still at a very early stage (criminal or malicious use of it even more so), the field is even more of a Wild West than the LLM field was two years ago, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro. 

“Palisade Research’s approach is brilliant: basically hacking the AI agents that try to hack you first,” he says. “While in this case we’re witnessing AI agents trying to do reconnaissance, we’re not sure when agents will be able to carry out a full attack chain autonomously. That’s what we’re trying to keep an eye on.” 

And while it’s possible that malicious agents will be used for intelligence gathering before graduating to simple and then complex attacks as agentic systems become more capable and reliable, it’s equally possible there will be an unexpected overnight explosion in criminal usage, he says: “That’s the weird thing about AI development right now.”

Those trying to defend against agentic cyberattacks should keep in mind that AI is currently more of an accelerant to existing attack techniques than something that fundamentally changes the nature of attacks, says Chris Betz, chief information security officer at Amazon Web Services. “Certain attacks may be simpler to conduct and therefore more numerous; however, the foundation of how to detect and respond to these events remains the same,” he says.

Agents could also be deployed to detect vulnerabilities and protect against intruders, says Edoardo Debenedetti, a PhD student at ETH Zürich in Switzerland. If a friendly agent cannot find any vulnerabilities in a system, he points out, a similarly capable agent used by a malicious party is unlikely to find any either.

We know that AI’s potential to conduct cyberattacks autonomously is a growing risk and that AI agents are already scanning the internet; a useful next step is to measure how good agents actually are at finding and exploiting real-world vulnerabilities. Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, and his team have built a benchmark to evaluate this. They found that current AI agents successfully exploited up to 13% of vulnerabilities for which they had no prior knowledge, and that providing a brief description of the vulnerability pushed the success rate up to 25%, demonstrating that AI systems can identify and exploit weaknesses without prior exposure to them. Basic bots would presumably do much worse.
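To make the setup concrete, here is a hedged sketch of the kind of measurement such a benchmark takes: the same set of vulnerabilities attempted blind versus with a brief description. The harness below stubs out the agent with random outcomes purely so the sketch runs; the names, structure, and stub probabilities are illustrative, not Kang's actual benchmark code.

```python
import random
from typing import Callable, Sequence

def success_rate(
    vulns: Sequence[str],
    run_agent: Callable[[str], bool],
    give_description: bool,
) -> float:
    """Fraction of vulnerabilities exploited under one condition."""
    hits = sum(
        run_agent(desc if give_description else "")  # "" = no hint given
        for desc in vulns
    )
    return hits / len(vulns)

def stub_agent(hint: str) -> bool:
    # Stand-in for running a real agent against a sandboxed target. A
    # brief description roughly doubles its odds, echoing the reported
    # jump from ~13% (blind) to ~25% (hinted). Illustrative only.
    return random.random() < (0.25 if hint else 0.13)

if __name__ == "__main__":
    vulns = [f"flaw-{i}: one-line summary" for i in range(100)]
    print("blind: ", success_rate(vulns, stub_agent, give_description=False))
    print("hinted:", success_rate(vulns, stub_agent, give_description=True))
```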

The benchmark provides a standardized way to assess these risks, and Kang hopes it can guide the development of safer AI systems. “I’m hoping that people start to be more proactive about the potential risks of AI and cybersecurity before it has a ChatGPT moment,” he says. “I’m afraid people won’t realize this until it punches them in the face.”
