
Threat actors may not be limited to stealing AI access from fully developed applications, the researchers added. A developer prototyping an app who carelessly leaves a server unsecured could fall victim to credential theft as well.
Joseph Steinberg, a US-based AI and cybersecurity expert, said the report is another illustration of how new technology like artificial intelligence creates new risks, and with them the need for new security solutions beyond traditional IT controls.
CSOs need to ask whether their organization has the skills needed to safely deploy and protect an AI project, or whether the work should be outsourced to a provider with the needed expertise.
Mitigation
Pillar Security said CSOs with externally facing LLMs and MCP servers should:
- Enable authentication on all LLM endpoints. Requiring authentication eliminates opportunistic attacks. Organizations should verify that Ollama, vLLM, and similar services require valid credentials for all requests.
- Audit MCP server exposure. MCP servers must never be directly accessible from the internet. Verify firewall rules, review cloud security groups, and confirm authentication requirements.
- Block known malicious infrastructure. Add the 204.76.203.0/24 subnet to deny lists. For the MCP reconnaissance campaign, block AS135377 ranges.
- Implement rate limiting. Stop burst exploitation attempts. Deploy WAF/CDN rules for AI-specific traffic patterns.
- Audit production chatbot exposure. Every customer-facing chatbot, sales assistant, and internal AI agent must implement security controls to prevent abuse.
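Two of the steps above, deny-listing the reported subnet and rate limiting, can be sketched in application-level code. The following is a minimal illustration, not a substitute for firewall or WAF rules: the 204.76.203.0/24 subnet comes from the report, while the request threshold and window size are arbitrary assumptions chosen for the example.

```python
import ipaddress
import time
from collections import defaultdict, deque

# Deny list seeded with the subnet named in the Pillar Security report.
# AS135377 ranges would be appended here the same way once enumerated.
BLOCKED_NETWORKS = [ipaddress.ip_network("204.76.203.0/24")]

def is_blocked(source_ip: str) -> bool:
    """Return True if the source address falls inside a deny-listed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

class RateLimiter:
    """Sliding-window limiter: at most max_requests per window_seconds, per IP.

    The limits below are illustrative; tune them to the traffic profile
    of the AI endpoint being protected.
    """
    def __init__(self, max_requests: int = 20, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # source IP -> timestamps of recent requests

    def allow(self, source_ip: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[source_ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # burst detected; reject the request
        q.append(now)
        return True
```

In practice these checks belong at the network edge (firewall, security group, WAF/CDN rule) rather than inside the application, but the logic is the same: reject traffic from known-bad ranges outright, and throttle any single source that bursts past a sane request rate.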
Don’t give up
Despite the number of news stories in the past year about AI vulnerabilities, Meghu said the answer is not to give up on AI, but to keep strict controls on its usage. “Do not just ban it, bring it into the light and help your users understand the risk, as well as work on ways for them to use AI/LLM in a safe way that benefits the business,” he advised.
“It is probably time to have dedicated training on AI use and risk,” he added. “Make sure you take feedback from users on how they want to interact with an AI service and make sure you support and get ahead of it. Just banning it sends users into a shadow IT realm, and the impact from this is too frightening to risk people hiding it. Embrace and make it part of your communications and planning with your employees.”