
Fetcher bots, which retrieve content in real time when users query AI assistants, show a different concentration pattern. OpenAI’s ChatGPT and related bots generated 68% of fetcher bot requests, and in some cases fetcher traffic exceeded 39,000 requests per minute to a single site.
AI agents consult multiple websites when processing a query, generating more traffic per interaction than a browser-based user. For customers that use Fastly for content delivery, that traffic runs through its network.
Beyond traffic volume, Fastly is seeing AI workloads running directly on its edge compute platform. Compton cited examples including customers storing large training datasets and using edge compute for inference and other AI-related processing tasks.
Customer strategy shifts from blocking to optimization
The approach large media companies are taking toward AI agent traffic has evolved over the past six months. Compton said the discussion with media customers has shifted from “‘how do you block it?’ to a much more nuanced and sophisticated conversation now, about ‘how do you optimize for it?’”
Media companies want their content to remain relevant to AI models and chatbots but need tools to manage how that access happens and enforce existing content licensing agreements. Fastly developed AI bot mitigation technology to address this requirement, allowing customers to permit beneficial bots while blocking harmful ones.
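The permit-or-block decision at the heart of that kind of mitigation can be sketched in a few lines. The TypeScript below is illustrative only: the bot categories, user-agent tokens, and policy names are assumptions made for the sake of example, not Fastly's actual bot-mitigation API or rule set.

```typescript
// Illustrative sketch: an allow/block decision keyed off the User-Agent header.
// The rules and tokens below are assumptions for illustration, not a vendor API.

type BotPolicy = "allow" | "block" | "challenge";

// Hypothetical policy: permit real-time fetchers that can drive referrals,
// block bulk training crawlers, challenge anything unidentified.
const BOT_RULES: Array<{ pattern: RegExp; policy: BotPolicy }> = [
  { pattern: /ChatGPT-User|OAI-SearchBot|PerplexityBot/i, policy: "allow" }, // fetchers
  { pattern: /GPTBot|CCBot|ClaudeBot/i, policy: "block" },                   // trainers
];

export function classifyRequest(userAgent: string | null): BotPolicy {
  if (!userAgent) return "challenge";
  for (const rule of BOT_RULES) {
    if (rule.pattern.test(userAgent)) return rule.policy;
  }
  return "allow"; // ordinary browser traffic passes through untouched
}

// An edge handler would call classifyRequest(request.headers.get("user-agent"))
// and return a 403 for "block", forward to origin for "allow", and so on.
```

In practice the rule list would be maintained and updated by the provider rather than hard-coded, but the shape of the decision is the same: identify the agent, then apply the publisher's policy per category.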
The company also became one of the first edge providers to support the Really Simple Licensing (RSL) protocol. “It was an industry-developed protocol to essentially enforce content rights agreements related to AI models,” Compton said. Fastly is working with large media customers as design partners to refine how the protocol gets implemented at the edge.
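The enforcement idea can be illustrated with a similarly small sketch. Everything below, the license registry, field names, and purpose labels, is a hypothetical stand-in; it is not the published RSL document format or Fastly's design-partner implementation.

```typescript
// Minimal sketch of edge-side licensing enforcement. The registry, fields,
// and response behavior are illustrative assumptions, not the RSL spec.

interface LicenseRecord {
  agent: string;       // user-agent token the agreement covers
  expires: Date;       // end of the licensing term
  purposes: string[];  // e.g. ["train", "infer", "search"]
}

// Hypothetical registry of content licensing agreements a publisher has signed.
const LICENSES: LicenseRecord[] = [
  { agent: "ExampleFetcher", expires: new Date("2026-12-31"), purposes: ["infer", "search"] },
];

export function isLicensed(userAgent: string, purpose: string, now = new Date()): boolean {
  return LICENSES.some(
    (l) =>
      userAgent.toLowerCase().includes(l.agent.toLowerCase()) &&
      l.purposes.includes(purpose) &&
      l.expires > now
  );
}

// An edge handler could serve licensed agents normally and answer unlicensed
// AI crawlers with an error response that points to the publisher's terms.
```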