
- ClamAV can now detect malicious code in AI models: “We are releasing this capability to the world. For free. In addition to its coverage of traditional malware, ClamAV can now detect deserialization risks in common model file formats such as .pt and .pkl (in milliseconds, not minutes). This enhanced functionality is available today for everyone using ClamAV,” Anderson and Fordyce wrote. (A sketch of the deserialization risk in question follows this list.)
- ClamAV’s model scanning extends to VirusTotal: “ClamAV is the only antivirus engine to detect malicious models in both Hugging Face and VirusTotal – a popular threat intelligence platform that will scan uploaded models.”
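For context on what those scans are looking for: .pkl files are raw Python pickles, and .pt checkpoints are archives that embed pickle data, so merely loading an untrusted model can execute attacker-supplied code. The sketch below is a generic illustration of that deserialization risk, not ClamAV’s detection logic.

```python
import pickle

# Generic illustration of the deserialization risk in pickle-based model
# formats: pickle lets an object define __reduce__, which tells the
# unpickler to call an arbitrary callable at load time.
class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, Python calls print(...) here; real malware
        # would invoke os.system, subprocess, etc.
        return (print, ("arbitrary code executed during model load",))

blob = pickle.dumps(MaliciousPayload())

# Simply *loading* the "model" triggers the payload -- the victim never
# has to call anything on the deserialized object.
pickle.loads(blob)
```

Because the payload fires during deserialization itself, a scanner has to inspect the file before it is ever loaded, which is the gap ClamAV’s new capability is meant to close.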
Prior Cisco-Hugging Face collaborations
An earlier tie-in between Cisco’s Foundation AI and Hugging Face helped produce Cerberus, an AI supply chain security analysis model. Cerberus analyzes models as they enter Hugging Face and shares the results in standardized threat feeds that Cisco Security products can use to build and enforce access policies for the AI supply chain, according to a blog from Nathan Chang, product manager with the Foundation AI team.
Cerberus technology is also integrated with Cisco Secure Endpoint and Secure Email, enabling automatic blocking of known malicious files during read/write/modify operations as well as of email attachments containing malicious AI supply chain artifacts. Integration with Cisco Secure Access Secure Web Gateway lets Cerberus block downloads of potentially compromised AI models and of models from non-approved sources, according to Chang.
“Users of Cisco Secure Access can configure how to provide access to Hugging Face repositories, block access to potential threats in AI models, block AI models with risky licenses, and enforce compliance policies on AI models that originate from sensitive organizations or politically sensitive regions,” Anderson and Fordyce wrote.
Cisco Foundation AI
When Cisco introduced Foundation AI back in April, Jeetu Patel, executive vice president and chief product officer for Cisco, described it as “a new team of top AI and security experts focused on accelerating innovation for cyber security teams.” Patel highlighted the release of the industry’s first open weight reasoning model built specifically for security:
“The Foundation AI Security model is an 8-billion parameter, open weight LLM that’s designed from the ground up for cybersecurity. The model was pre-trained on carefully curated data sets that capture the language, logic, and real-world knowledge and workflows that security professionals work with every day,” Patel wrote in a blog post at the group’s introduction.
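For teams evaluating the model, pulling an open-weight checkpoint like this from Hugging Face typically looks like the following sketch using the transformers library. The repository id shown is an assumption for illustration; substitute whatever identifier Cisco actually publishes.

```python
# Minimal sketch: loading an open-weight security LLM from Hugging Face
# with the transformers library and running a single prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for illustration only -- verify the real identifier.
repo_id = "fdtn-ai/Foundation-Sec-8B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Classify the MITRE ATT&CK technique for: process hollowing"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```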
Customers can use the model as their own AI security base or integrate it with their own closed-source models, depending on their needs, Patel stated at the time, pointing to an accompanying reasoning framework: “And that reasoning framework basically enables you to take any base model, then make that into an AI reasoning model.”