Invisible, autonomous and hackable: The AI agent dilemma no one saw coming


This article is part of VentureBeat’s special issue, “The cyber resilience playbook: Navigating the new era of threats.” Read more from this special issue here.

Generative AI poses thorny security questions, and as enterprises move into the agentic world, those risks only multiply. 

When AI agents enter workflows, they must be able to access sensitive data and documents to do their job — making them a significant risk for many security-minded enterprises.

“The rising use of multi-agent systems will introduce new attack vectors and vulnerabilities that could be exploited if they aren’t secured properly from the start,” said Nicole Carignan, VP of strategic cyber AI at Darktrace. “But the impacts and harms of those vulnerabilities could be even bigger because of the increasing volume of connection points and interfaces that multi-agent systems have.”

Why AI agents pose such a high security risk

AI agents — or autonomous AI that executes actions on users’ behalf — have become extremely popular in just the last few months. Ideally, they can be plugged into tedious workflows to perform tasks ranging from something as simple as finding information in internal documents to recommending actions for human employees to take.

But they present an interesting problem for enterprise security professionals: They must gain access to data that makes them effective, without accidentally opening or sending private information to others. With agents doing more of the tasks human employees used to do, the question of accuracy and accountability comes into play, potentially becoming a headache for security and compliance teams. 

Chris Betz, CISO of AWS, told VentureBeat that retrieval-augmented generation (RAG) and agentic use cases “are a fascinating and interesting angle” in security. 

“Organizations are going to need to think about what default sharing in their organization looks like, because an agent will find through search anything that will support its mission,” said Betz. “And if you overshare documents, you need to be thinking about the default sharing policy in your organization.”
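In practice, enforcing a sane default-sharing policy means the retrieval layer itself has to respect document entitlements. Below is a minimal, hypothetical sketch — the document store, group model and function names are illustrative assumptions, not any vendor’s API — of filtering RAG search hits against an agent’s entitlements before they ever reach the model:

```python
# Hypothetical sketch only: filter raw RAG search hits against the requesting
# agent's group entitlements before anything reaches the model. The document
# store, group model and function names are illustrative, not a vendor API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Document:
    doc_id: str
    allowed_groups: set[str]   # groups permitted to read this document
    text: str

def retrieve_for_agent(query: str,
                       agent_groups: set[str],
                       search_fn: Callable[[str], list[Document]],
                       top_k: int = 5) -> list[Document]:
    """Return only the documents this agent is entitled to see."""
    candidates = search_fn(query)                      # raw similarity search
    permitted = [d for d in candidates
                 if d.allowed_groups & agent_groups]   # drop over-shared docs
    return permitted[:top_k]
```

The point of a check like this is that it runs before generation, so an over-shared document never enters the agent’s context window in the first place.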

Security professionals must then ask if agents should be considered digital employees or software. How much access should agents have? How should they be identified?

AI agent vulnerabilities

Gen AI has made many enterprises more aware of potential vulnerabilities, but agents could open them to even more issues.

“Attacks that we see today impacting single-agent systems, such as data poisoning, prompt injection or social engineering to influence agent behavior, could all be vulnerabilities within a multi-agent system,” said Carignan. 

Enterprises must pay attention to what agents are able to access to ensure data security remains strong. 

Betz pointed out that many security issues surrounding human employee access can extend to agents. Therefore, it “comes down to making sure that people have access to the right things and only the right things.” He added that when it comes to agentic workflows with multiple steps, “each one of those stages is an opportunity” for hackers.

Give agents an identity

One answer could be issuing specific access identities to agents. 

A world where models reason about problems over the course of days is “a world where we need to be thinking more around recording the identity of the agent as well as the identity of the human responsible for that agent request everywhere in our organization,” said Jason Clinton, CISO of model provider Anthropic.

Identifying human employees is something enterprises have been doing for a very long time. They have specific jobs; they have an email address they use to sign into accounts and be tracked by IT administrators; they have physical laptops with accounts that can be locked. They get individual permission to access some data.

A variation of this kind of employee access and identification could be deployed to agents. 
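What that might look like in code is straightforward to sketch. The example below is purely illustrative — the field names, IDs and scopes are assumptions, not an established standard — but it shows an agent identity record that always carries the accountable human alongside the agent’s own ID and permitted scopes:

```python
# Illustrative only: an identity record that ties every agent to the human
# accountable for it. Field names, IDs and scopes are assumptions made for
# the sake of the example, not an established standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str                 # a distinct principal per agent
    responsible_human: str        # the employee answerable for its actions
    scopes: list[str] = field(default_factory=list)  # data it may touch
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

triage_agent = AgentIdentity(
    agent_id="agent-claims-triage-01",
    responsible_human="jane.doe@example.com",
    scopes=["claims:read", "policies:read"],
)
```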

Both Betz and Clinton believe this process can prompt enterprise leaders to rethink how they provide information access to users. It could even lead organizations to overhaul their workflows. 

“Using an agentic workflow actually offers you an opportunity to bound the use cases for each step along the way to the data it needs as part of the RAG, but only the data it needs,” said Betz. 

He added that agentic workflows “can help address some of those concerns about oversharing,” because companies must consider what data is being accessed to complete actions. Clinton added that in a workflow designed around a specific set of operations, “there’s no reason why step one needs to have access to the same data that step seven needs.”
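A hedged sketch of that idea, with made-up step names and scope strings, might declare each step’s data access up front so that no step inherits another’s permissions:

```python
# Made-up step names and scope strings, purely to illustrate the idea:
# each workflow step declares the only data it may read or write, so
# step one never inherits step seven's access.
WORKFLOW_SCOPES = {
    1: {"name": "classify_request", "scopes": ["tickets:read"]},
    2: {"name": "look_up_contract", "scopes": ["contracts:read"]},
    7: {"name": "draft_refund",     "scopes": ["billing:read", "billing:write"]},
}

def scopes_for(step_number: int) -> list[str]:
    """Return the narrow set of scopes a given step is allowed to use."""
    try:
        return WORKFLOW_SCOPES[step_number]["scopes"]
    except KeyError:
        raise KeyError(f"unknown workflow step {step_number}") from None
```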

The old-fashioned audit isn’t enough

Enterprises can also look for agentic platforms that allow them to peek inside how agents work. For example, Don Schuerman, CTO of workflow automation provider Pega, said his company helps ensure agentic security by telling the user what the agent is doing. 

“Our platform is already being used to audit the work humans are doing, so we can also audit every step an agent is doing,” Schuerman told VentureBeat. 

Pega’s newest product, AgentX, allows human users to toggle to a screen outlining the steps an agent undertakes. Users can see where along the workflow timeline the agent is and get a readout of its specific actions. 
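Pega has not published the internals of that capability, but a generic, illustrative version of step-level agent auditing — every name here is hypothetical — could simply append a structured record for each action an agent takes, so humans can replay the timeline later:

```python
# Generic illustration, not Pega's implementation: append one structured
# record per agent action so a human can replay the timeline afterwards.
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def audit_step(agent_id: str, step: int, action: str, detail: dict) -> None:
    """Record a single agent action with who, what and when."""
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "step": step,
        "action": action,
        "detail": detail,
    })

audit_step("agent-claims-triage-01", 1, "retrieved_documents",
           {"query": "refund policy", "doc_ids": ["doc-42"]})
print(json.dumps(AUDIT_LOG, indent=2))
```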

Audits, timelines and identification are not perfect solutions to the security issues presented by AI agents. But as enterprises explore agents’ potential and begin to deploy them, more targeted answers could come up as AI experimentation continues. 
