
The chain analogy is critical here. Realistic uses of AI agents will require core database access; what AI business case could possibly exist that isn't tied to a company's critical data? The four critical elements of these applications (the agent, the MCP server, the tools, and the data) are all dragged along with each other, and traffic on the network is the linkage in the chain.
How much traffic is generated? Here, enterprises had another surprise. Enterprises told me that their initial view of their AI hosting was an "AI cluster" with a casual data link to their main data center network. With AI agents, they now see smaller AI servers actually installed within their primary data centers, and all the traffic AI creates, both within the model and to and from it, now flows on the data center network.
Vendors who told enterprises that AI networking would have a profound impact are proving correct. You can run a query or perform a task with an agent and have that task parse an entire database of thousands or millions of records. Someone unaware of what an agent application implies in terms of data usage can easily generate as much traffic as a whole week's normal access-and-update activity. Enough, enterprises say, to impact network capacity and the QoE of other applications. And, enterprises remind us, if that traffic crosses into or out of the cloud, cloud costs could skyrocket. About a third of the enterprises said that issues with AI agents generated enough traffic to create local congestion on the network, or a blip in cloud costs large enough to trigger a financial review.
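To make that concrete, here's a back-of-envelope sketch of how one full-table agent scan can stack up against a week of routine traffic. The record counts, record sizes, and weekly baseline below are illustrative assumptions, not figures reported by any enterprise:

```python
# Back-of-envelope estimate: traffic from one agent task that scans a full
# table, versus a week of routine access-and-update traffic.
# All figures below are illustrative assumptions.

RECORDS_IN_TABLE = 5_000_000   # assumed table size
BYTES_PER_RECORD = 2_000       # assumed average record size (~2 KB)

# Assumed weekly baseline: 200,000 transactions at ~10 KB of traffic each.
WEEKLY_TRANSACTIONS = 200_000
BYTES_PER_TRANSACTION = 10_000

def gb(n_bytes: int) -> float:
    """Convert bytes to gigabytes."""
    return n_bytes / 1e9

agent_scan_bytes = RECORDS_IN_TABLE * BYTES_PER_RECORD
weekly_baseline_bytes = WEEKLY_TRANSACTIONS * BYTES_PER_TRANSACTION

print(f"One full-table agent scan: {gb(agent_scan_bytes):.1f} GB")
print(f"One week of normal access: {gb(weekly_baseline_bytes):.1f} GB")
print(f"Ratio: {agent_scan_bytes / weekly_baseline_bytes:.1f}x")
```

Plug in your own numbers; the point is that under even modest assumptions, a single naive full-table agent query can rival or exceed a week of routine traffic, and if that data crosses a cloud boundary, per-gigabyte egress charges compound the problem.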
MCP tool use by agents is also a major security and governance headache. Enterprises point out that MCP standards haven't always required strong authentication, and because a tool can actually update things, a malformed (or hacked) tool could contaminate, fabricate, or delete data. To avoid this, enterprises recommend that AI agents not have access to tools that can update data or take action in the real world unless there's considerable oversight of tool and agent design to ensure the agents don't go rogue.
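One way to enforce that recommendation is a policy layer between the agent and its tools that exposes read-only tools freely and gates anything that mutates data behind an approval hook. The sketch below is illustrative only, not the actual MCP SDK API; the ToolPolicy class, the tool names, and the approver hook are all assumptions made for the example:

```python
# Illustrative policy gate between an agent and its tools.
# This is a sketch, not the real MCP SDK API; all names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Tool:
    name: str
    handler: Callable[..., object]
    mutates: bool  # True if the tool can update data or act in the world

class ToolPolicy:
    """Expose read-only tools freely; gate mutating tools behind approval."""

    def __init__(self, tools: list[Tool], approver: Callable[[str], bool]):
        self._tools = {t.name: t for t in tools}
        self._approver = approver  # e.g., a human-in-the-loop check

    def invoke(self, name: str, **kwargs) -> object:
        tool = self._tools.get(name)
        if tool is None:
            raise PermissionError(f"Tool {name!r} is not exposed to agents")
        if tool.mutates and not self._approver(name):
            raise PermissionError(f"Mutating tool {name!r} requires approval")
        return tool.handler(**kwargs)

# Usage: a read-only lookup passes; an update is denied unless approved.
policy = ToolPolicy(
    tools=[
        Tool("lookup_order", lambda order_id: {"id": order_id}, mutates=False),
        Tool("delete_order", lambda order_id: None, mutates=True),
    ],
    approver=lambda name: False,  # deny all mutations by default
)
policy.invoke("lookup_order", order_id=42)    # allowed
# policy.invoke("delete_order", order_id=42)  # raises PermissionError
```

The design choice here is deny-by-default: a mutating tool is unreachable until someone with authority explicitly approves it, which is the oversight enterprises are asking for.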
Review and design are the keys to controlling the other issues, too. Traffic issues can be mitigated by careful placement of AI agent models. Because AI agents are less demanding than the huge LLMs used for online generative AI, you can distribute the agent hosts, even rack them with traditional servers, including the servers that control the databases the agents will use. It does mean, say a majority of enterprises, that the data center network topology and capacity should be reviewed to ensure it can handle the additional traffic AI will generate. None of the enterprises thought AI agents would require InfiniBand rather than Ethernet, though, which is good news for enterprise data center network planners, and for vendors.
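A capacity review of that kind can start as simple arithmetic. The sketch below checks whether an assumed uplink can absorb the added agent traffic while staying under a planning threshold; the link speed, traffic figures, and the 70% threshold are all illustrative assumptions:

```python
# Illustrative capacity check for a data center link absorbing agent traffic.
# Link speed, loads, and the planning threshold are assumed, not measured.

LINK_GBPS = 100.0           # assumed leaf-spine uplink speed
PLANNING_THRESHOLD = 0.70   # keep peak utilization below 70%

current_peak_gbps = 55.0    # assumed existing peak load on the link
agent_added_gbps = 18.0     # assumed added agent traffic at peak

projected = current_peak_gbps + agent_added_gbps
utilization = projected / LINK_GBPS

print(f"Projected peak: {projected:.0f} Gbps "
      f"({utilization:.0%} of a {LINK_GBPS:.0f} Gbps link)")
if utilization > PLANNING_THRESHOLD:
    print("Over the planning threshold: add uplinks or move agent hosts "
          "closer to the data they scan.")
else:
    print("Within the planning threshold.")
```

When the check fails, placement is often the cheaper fix: racking agent hosts alongside the database servers they query keeps the heaviest flows off the shared fabric entirely.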