HiddenLayer also said it found six further Hugging Face repositories containing virtually identical loader logic that shared infrastructure with the cited attack.
The case follows other warnings about malicious AI models on Hugging Face, including poisoned AI SDKs and fake OpenClaw installers. The common thread is that attackers are treating AI development workflows as a route into normally secure environments. AI repositories often contain executable code, setup instructions, dependency files, notebooks, and scripts, and it's these peripheral elements that cause the problems, rather than the models themselves.
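To make the point concrete, here is a minimal sketch of what inventorying those peripheral, potentially executable files in a model repository might look like. The suffix and filename lists are illustrative assumptions, not an exhaustive or recommended rule set, and the function name is hypothetical.

```python
from pathlib import Path

# Illustrative assumption: file types that can run code when a repo is
# installed, imported, or its notebooks are opened. Real scanners use far
# richer heuristics (e.g. inspecting pickle opcodes inside model weights).
EXECUTABLE_SUFFIXES = {".py", ".ipynb", ".sh"}
EXECUTABLE_NAMES = {"setup.py", "Makefile"}

def find_executable_artefacts(repo_root: str) -> list[str]:
    """Return repo-relative paths of files that may execute code on use."""
    root = Path(repo_root)
    hits = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        if path.suffix in EXECUTABLE_SUFFIXES or path.name in EXECUTABLE_NAMES:
            hits.append(str(path.relative_to(root)))
    return sorted(hits)
```

A scan like this would flag a `loader.py` shipped alongside model weights, which is exactly the kind of artefact the HiddenLayer findings concern.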
Sakshi Grover, senior research manager for cybersecurity services at IDC, said traditional software composition analysis (SCA) was designed to inspect dependency manifests, libraries, and container images, making it less effective at identifying malicious loader logic in AI repositories. Grover also cited IDC's November 2025 FutureScape report, which calls for 60% of agentic AI systems to carry a bill of materials by 2027. Such a record would help companies track which AI artefacts they use, where they came from, which versions were approved, and whether they contain executable components.
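As a rough illustration of what one entry in such an AI bill of materials could record, here is a sketch using a Python dataclass. The field names, the example repository URL, and the `AIBomEntry` type are all hypothetical, chosen only to mirror the attributes mentioned above (source, approved version, executable components); they are not drawn from any real AIBOM standard.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIBomEntry:
    """One artefact record in a hypothetical AI bill of materials."""
    name: str
    source: str        # where the artefact was obtained (placeholder URL below)
    version: str       # the version in use
    approved: bool     # whether this version passed internal review
    executable_components: list[str] = field(default_factory=list)

entry = AIBomEntry(
    name="sentiment-model",
    source="https://huggingface.co/example/sentiment-model",  # placeholder
    version="1.2.0",
    approved=True,
    executable_components=["loader.py", "setup.py"],
)
```

In practice, an organisation would more likely adopt an existing SBOM format with machine-learning extensions than invent its own schema, but the fields to capture are the same.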