Code Defence Cyber security

Typosquatted OpenAI Privacy Filter repo on Hugging Face delivers Rust-based infostealer

A malicious repository masquerading as a legitimate privacy tool from a leading AI laboratory trended on Hugging Face, accumulating hundreds of thousands of downloads before it was taken down. This supply chain attack demonstrates the growing risk of typosquatted models and poisoned loaders in the AI development ecosystem.

The repository, named Open-OSS/privacy-filter, copied the description and branding of the OpenAI Privacy Filter released in April 2026. However, it included a malicious loader.py file that fetched and executed a Rust-based information stealer on Windows systems. Before being disabled, the repo had been downloaded 244,000 times, targeting developers seeking to implement PII redaction in their applications.

The targeting of privacy-focused developers is a strategic move, as these users often have access to sensitive unstructured text and PII. This incident highlights that the AI supply chain is subject to the same typosquatting and social engineering risks as traditional package managers like npm or PyPI.

– Audit all Hugging Face models and libraries used in internal development projects to ensure they originate from verified organization accounts.

– Implement strict software composition analysis (SCA) for all AI-related dependencies and model loaders.

– Instruct development teams to verify repository names and maintainer identities against official vendor announcements before integration.

– Monitor developer workstations for anomalous outbound network traffic or the execution of unauthorized Rust-based binaries.
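The name-verification step above can be partially automated. Below is a minimal sketch of a typosquat check that flags repository IDs suspiciously similar to a verified namespace; the allowlist entries and the similarity threshold are illustrative assumptions, not official values, and in practice the allowlist would be built from vendor announcements or an internal registry.

```python
import difflib

# Hypothetical allowlist of verified repository IDs for illustration only;
# populate this from official vendor announcements or an internal registry.
VERIFIED_REPOS = {
    "openai/privacy-filter",
}

def check_repo(repo_id: str, threshold: float = 0.8) -> str:
    """Classify a repo ID as verified, suspicious (close to a verified
    name but not identical -- a common typosquatting pattern), or unknown."""
    repo_id = repo_id.lower()
    if repo_id in VERIFIED_REPOS:
        return "verified"
    for trusted in VERIFIED_REPOS:
        # SequenceMatcher.ratio() returns a similarity score in [0, 1].
        ratio = difflib.SequenceMatcher(None, repo_id, trusted).ratio()
        if ratio >= threshold:
            return f"suspicious: resembles {trusted} ({ratio:.2f})"
    return "unknown"
```

A check like this catches look-alike names such as Open-OSS/privacy-filter, but it is only one layer: it should complement, not replace, SCA tooling and manual maintainer verification.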

Supply chain security in the AI era requires verifying the identity of the model as strictly as the integrity of the code. #CodeDefence #SupplyChain #HuggingFace #OpenAI #Malware