A malicious Python package named “pytorch-optimize” has been discovered on the PyPI registry. The package mimics legitimate optimization tooling but contains a backdoor designed to steal proprietary machine learning models and credentials stored in environment variables (such as AWS and Hugging Face API keys) from data scientists’ workstations.
Business Impact
Intellectual property theft is the primary goal here. AI models are often a company’s most valuable asset. This attack specifically targets the developers and data scientists who build these models, bypassing traditional perimeter security to exfiltrate core IP.
Why It Happened
The attack combines “typosquatting” (a package name crafted to be mistaken for a trusted library) with “starjacking” (pointing the package’s listing at a popular GitHub repository so it appears to inherit that project’s star count) to look legitimate. Developers who install it to optimize their code inadvertently hand the attacker code execution inside their training environment, because installing a Python package runs the package’s setup code with the developer’s own privileges (sketched below).
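For context, here is a defanged sketch of the general install-hook pattern such backdoors rely on. This is not the actual pytorch-optimize payload; the class and package names are illustrative, and the sketch only lists which sensitive variables are visible rather than sending them anywhere:

# setup.py -- defanged illustration of an install-time hook (names are hypothetical)
import os
from setuptools import setup
from setuptools.command.install import install

class PostInstall(install):
    def run(self):
        super().run()
        # A real backdoor would exfiltrate values such as AWS_SECRET_ACCESS_KEY
        # or HF_TOKEN from here; this sketch only reports that the hook fired.
        visible = [k for k in os.environ if k.startswith(("AWS_", "HF_"))]
        print(f"install hook ran; credential-bearing env vars visible: {visible}")

setup(name="example-package", version="0.0.1", cmdclass={"install": PostInstall})

The takeaway for leadership: “pip install” is code execution, so package vetting is a security control, not a convenience.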
Recommended Executive Action
Direct your data science teams to audit their Python environments immediately (a quick check is sketched below) and block installation of “pytorch-optimize.” Because the backdoor targets credentials, rotate any AWS and Hugging Face keys present on affected workstations. Longer term, implement a curated private repository for Python packages and mandate vetting of any new AI/ML libraries.
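A minimal per-environment check teams can run today; the package name comes from this advisory, everything else is an illustrative sketch. Note that each virtualenv or conda environment must be checked separately:

# check_env.py -- reports whether the malicious package is installed here
from importlib import metadata

SUSPECT = "pytorch-optimize"

try:
    dist = metadata.distribution(SUSPECT)
    print(f"FOUND: {SUSPECT} {dist.version} -- remove it and rotate credentials")
except metadata.PackageNotFoundError:
    print(f"Clean: {SUSPECT} is not installed in this environment")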
Hashtags: #SupplyChain #Python #PyPI #AI #MachineLearning #Malware #DataScience #InfoSec
