AI hallucinates fake software dependencies enabling new supply chain attacks
Hallucinated package names fuel slopsquatting as AI coding tools invent non-existent dependencies
The rise of AI-powered code generation tools is introducing a dangerous new supply chain risk to software development: hallucinated dependencies.
The Slopsquatting Threat
Security researchers have discovered that AI coding assistants frequently invent non-existent software packages in their suggestions:
- 5.2% of packages suggested by commercial models do not exist
- 21.7% of packages suggested by open-source models do not exist
Malicious actors have begun exploiting this by:
- Creating malware under hallucinated package names
- Uploading them to registries like PyPI or npm
- Waiting for AI tools to recommend their fake packages
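One practical defense against this attack chain is to confirm that a suggested name is actually registered before installing it. A minimal sketch using PyPI's JSON API (the name normalization follows PEP 503; this is an illustrative check, not a complete vetting tool):

```python
import re
import urllib.error
import urllib.request


def build_pypi_url(name: str) -> str:
    """Normalize a project name per PEP 503 and build its PyPI JSON API URL."""
    normalized = re.sub(r"[-_.]+", "-", name).lower()
    return f"https://pypi.org/pypi/{normalized}/json"


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has a project registered under this name.

    A 404 means the name is unregistered: either a hallucination,
    or a name still free for a slopsquatter to claim.
    """
    try:
        with urllib.request.urlopen(build_pypi_url(name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
```

Note that existence alone proves little: once a squatter uploads malware under the hallucinated name, this check passes, so it should be combined with metadata review.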
Attack Patterns Emerging
Research shows hallucinated names follow a bimodal pattern:
- 43% reappear consistently when the same prompt is rerun
- 39% never reappear at all
This phenomenon has been dubbed "slopsquatting," a play on "typosquatting" and "slop," the pejorative for low-quality AI output.
Real-World Consequences
Recent incidents include:
- Google's AI Overview recommending a malicious @async-mutex/mutex npm package
- Threat actor "_Iain" automating typo-squatted package creation at scale
Industry Response
The Python Software Foundation is:
- Implementing malware reporting APIs
- Improving typo-squatting detection
- Partnering with security teams
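Typo-squat detection of the kind registries are adding can be approximated with an edit-distance check against popular package names. A minimal sketch of that heuristic (the POPULAR set is a tiny illustrative list, not the PSF's actual implementation, which compares against full download rankings):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


# Illustrative subset of high-download packages.
POPULAR = {"requests", "numpy", "pandas", "django", "flask"}


def looks_like_typosquat(name: str, popular=POPULAR, max_dist: int = 1) -> bool:
    """Flag names within edit distance 1 of a popular package (exact matches excluded)."""
    return any(0 < levenshtein(name, p) <= max_dist for p in popular)
```

For example, `looks_like_typosquat("reqests")` flags the name, while the legitimate `"requests"` passes.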
Security experts warn developers must:
- Verify all AI-suggested packages
- Check for typos in names
- Review package contents before installation
As Socket CEO Feross Aboukhadijeh notes: "What a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful."
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.