Researchers have confirmed a new variant of the “Morris II” AI worm that now targets enterprise code repositories (like GitHub Copilot and GitLab Duo) using adversarial self-replicating prompts. The worm tricks the AI into automatically committing and propagating malicious code into active development branches.
Business Impact
This is a catastrophic threat to software integrity and supply chain security. A single infected code prompt can contaminate hundreds of projects. The worm enables the automated, stealthy creation of backdoors in production software, bypassing human review and static analysis tools.
Why It Happened
The worm exploits the “trust” given to AI coding assistants and their ability to autonomously modify code. The AI interprets the embedded malicious prompt as a helpful instruction, not a command to be filtered.
Recommended Executive Action
Implement strict governance on AI code commit permissions. Mandate that AI tools never commit code directly to production or critical development branches. Introduce security review gates that specifically look for prompt-injected code patterns.
Hashtags: #AI #MorrisII #CodeSecurity #SupplyChain #LLM #DevSecOps #CyberSecurity #InfoSec
