AI hallucinates fake software dependencies, enabling new supply chain attacks
Hallucinated package names fuel slopsquatting as AI coding tools invent non-existent dependencies
The rise of AI-powered code generation tools is introducing a dangerous new risk to software development: hallucinated dependencies that attackers can register and weaponize.
The Slopsquatting Threat
Security researchers have discovered that AI coding assistants frequently invent non-existent software packages in their suggestions:
- 5.2% of package suggestions from commercial models referred to packages that do not exist
- 21.7% of suggestions from open source models did the same
Malicious actors have begun exploiting this by:
- Creating malware under hallucinated package names
- Uploading them to registries like PyPI or npm
- Waiting for AI tools to recommend their fake packages
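As a first line of defense, a developer can at least confirm that a suggested name is registered at all before installing it. Below is a minimal sketch against PyPI's public JSON API; the endpoint is real, but the sample package names are purely illustrative:

```python
# Minimal sketch: check whether an AI-suggested package is registered on PyPI
# before installing it. Standard library only; sample names are illustrative.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has a project registered under this name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no such project: possibly a hallucinated name
            return False
        raise

for suggested in ["requests", "definitely-not-a-real-pkg-12345"]:
    print(suggested, "->", "registered" if exists_on_pypi(suggested) else "not on PyPI")
```

Note that a hit on the registry is not proof of safety: the attack described above works precisely because the squatter registers the hallucinated name before anyone checks.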
Attack Patterns Emerging
Research shows hallucinated names follow a bimodal pattern when the same prompt is rerun:
- 43% of hallucinated names reappeared in every rerun
- 39% never reappeared at all
This phenomenon has been dubbed "slopsquatting" - a play on typosquatting and the "slop" pejorative for AI output.
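The persistent 43% is what makes the attack practical: an attacker only has to rerun popular prompts and register the names that keep coming back. Below is a rough sketch of how such reappearance rates could be measured; suggest_packages() is a hypothetical stand-in for a real model call, and the names it returns are fabricated demo data:

```python
# Rough sketch of measuring name reappearance across reruns. suggest_packages()
# is a hypothetical stand-in for calling a code model and extracting the package
# names its answer imports; the names returned here are fabricated demo data.
import random
from collections import Counter

def suggest_packages(prompt: str, rng: random.Random) -> set[str]:
    # Hypothetical: a real implementation would send `prompt` to an LLM and
    # parse the imports out of the generated code.
    persistent = {"fastjsonx"}  # fabricated name that recurs every run
    one_off = {f"tmp-pkg-{rng.randint(0, 10**6)}"}  # fabricated, unlikely to repeat
    return persistent | one_off

def reappearance_stats(prompt: str, runs: int = 10) -> tuple[float, float]:
    """Return (fraction of names seen in every run, fraction seen exactly once)."""
    rng = random.Random(0)
    counts = Counter()
    for _ in range(runs):
        counts.update(suggest_packages(prompt, rng))
    total = len(counts)
    always = sum(1 for c in counts.values() if c == runs) / total
    once = sum(1 for c in counts.values() if c == 1) / total
    return always, once

print(reappearance_stats("write a Flask rate limiter"))
```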
Real-World Consequences
Recent incidents include:
- Google's AI Overview recommending a malicious @async-mutex/mutex npm package, a squat on the legitimate async-mutex library
- Threat actor "_Iain" automating the creation of typosquatted packages at scale
Industry Response
The Python Software Foundation is:
- Implementing malware reporting APIs
- Improving typo-squatting detection
- Partnering with security teams
Security experts warn that developers must:
- Verify that every AI-suggested package actually exists and is the one they intend
- Check names for subtle typos and lookalike scopes
- Review a package's contents and history before installing it (see the vetting sketch below)
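Some of these checks are easy to automate. The sketch below queries PyPI's public JSON API and flags packages that are very new or have almost no release history, two traits common to freshly squatted names; the thresholds are illustrative, not established cutoffs:

```python
# Minimal vetting sketch, assuming PyPI's public JSON API. Flags packages that
# are very new or have almost no release history. Thresholds are illustrative.
import json
import urllib.request
from datetime import datetime, timezone

def vet_pypi_package(name: str, min_age_days: int = 90, min_releases: int = 3) -> list[str]:
    """Return a list of human-readable warnings for a package, empty if none."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    warnings = []
    releases = data.get("releases", {})
    if len(releases) < min_releases:
        warnings.append(f"only {len(releases)} release(s)")
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if upload_times:
        age_days = (datetime.now(timezone.utc) - min(upload_times)).days
        if age_days < min_age_days:
            warnings.append(f"first release was only {age_days} days ago")
    return warnings

print(vet_pypi_package("requests") or "no warnings")  # established package
```

None of this replaces reading the code, but it cheaply filters out the most obvious squats before pip install ever runs.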
As Socket CEO Feross Aboukhadijeh notes: "What a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful."