AI hallucinates fake software dependencies enabling new supply chain attacks
Hallucinated package names fuel slopsquatting as AI coding tools invent non-existent dependencies
The rise of AI-powered code generation tools is introducing a dangerous new risk to software development: hallucinated dependencies.
The Slopsquatting Threat
Security researchers have discovered that AI coding assistants frequently invent non-existent software packages in their suggestions:
- 5.2% of packages suggested by commercial models do not exist
- 21.7% of packages suggested by open source models do not exist
Malicious actors have begun exploiting this by:
- Creating malware under hallucinated package names
- Uploading them to registries like PyPI or npm
- Waiting for AI tools to recommend their fake packages
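The simplest defense against this attack chain is to confirm that a suggested name actually exists in the registry before installing it. The sketch below, a minimal illustration rather than a vetted tool, queries PyPI's real JSON API (`https://pypi.org/pypi/<name>/json`) and partitions AI-suggested names into known and unknown; the function and variable names are hypothetical.

```python
import urllib.error
import urllib.request


def exists_on_pypi(name):
    """Return True if `name` is a published PyPI package, via the PyPI JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 means the package does not exist


def vet_suggestions(suggested, package_exists=exists_on_pypi):
    """Partition AI-suggested dependency names into known and unknown lists."""
    known, unknown = [], []
    for name in suggested:
        (known if package_exists(name) else unknown).append(name)
    return known, unknown
```

Note that existence alone is not proof of safety: once attackers publish malware under a hallucinated name, the package *will* exist, so this check only flags names that no one has registered yet.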
Attack Patterns Emerging
Research shows hallucinated names follow a bimodal pattern:
- 43% reappear in every re-run of the same prompt
- 39% never reappear at all
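That persistence is what makes the attack practical: a name that reappears on every run is a reliable target to squat. Measuring it only requires re-running a prompt and tallying which hallucinated names recur. The helper below is a hypothetical sketch of that tally, assuming you have already collected the hallucinated names from each run.

```python
from collections import Counter


def classify_hallucinations(runs):
    """Classify hallucinated package names by recurrence across repeated runs.

    `runs` is a list of sets, one per re-run of the same prompt, each holding
    the hallucinated names observed in that run. Returns the names that
    appeared in every run (persistent) and those seen only once (one-off).
    """
    counts = Counter(name for run in runs for name in run)
    total = len(runs)
    persistent = {n for n, c in counts.items() if c == total}
    one_off = {n for n, c in counts.items() if c == 1}
    return persistent, one_off
```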
This phenomenon has been dubbed "slopsquatting" - a play on typosquatting and the "slop" pejorative for AI output.
Real-World Consequences
Recent incidents include:
- Google's AI Overview recommending a malicious @async-mutex/mutex npm package
- Threat actor "_Iain" automating typo-squatted package creation at scale
Industry Response
The Python Software Foundation is:
- Implementing malware reporting APIs
- Improving typo-squatting detection
- Partnering with security teams
Security experts warn developers must:
- Verify all AI-suggested packages
- Check for typos in names
- Review package contents before installation
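The typo check in particular can be partly automated. As a rough sketch, not a substitute for the registries' own detection, the standard library's `difflib.get_close_matches` can flag names suspiciously similar to well-known packages; the popular-package shortlist here is illustrative only.

```python
import difflib

# Illustrative shortlist; a real check would use a much larger list of
# high-download packages for the relevant registry.
POPULAR = ["requests", "numpy", "pandas", "flask", "django"]


def typo_suspects(name, popular=POPULAR, cutoff=0.8):
    """Return popular package names that `name` closely resembles.

    A non-empty result suggests `name` may be a typosquat of a real package.
    """
    matches = difflib.get_close_matches(name, popular, n=3, cutoff=cutoff)
    return [m for m in matches if m != name]  # an exact match is not a typo
```

For example, a suggestion of `reqeusts` would be flagged as close to `requests`, prompting a manual look before installation.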
As Socket CEO Feross Aboukhadijeh notes: "What a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful."
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.