AI hallucinates fake software dependencies enabling new supply chain attacks
Hallucinated package names fuel slopsquatting as AI coding tools invent non-existent dependencies
The rise of AI-powered code generation tools is introducing dangerous new risks to software development through hallucinated dependencies.
The Slopsquatting Threat
Security researchers have discovered that AI coding assistants frequently invent non-existent software packages in their suggestions:
- 5.2% of packages suggested by commercial models did not exist
- 21.7% of packages suggested by open source models did not exist
Malicious actors have begun exploiting this by:
- Creating malware under hallucinated package names
- Uploading them to registries like PyPI or npm
- Waiting for AI tools to recommend the fake packages to unsuspecting developers (one recency-based detection heuristic is sketched below)
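Because these squatted packages are typically registered shortly before an attack, flagging dependencies whose first release is very recent is one cheap screening heuristic. The sketch below uses PyPI's public JSON API; the 90-day cutoff is an arbitrary illustrative choice, and a young package is not automatically malicious.

```python
# Heuristic screen: how old is the earliest release of a PyPI package?
# A very young package is not proof of malice, but it warrants extra scrutiny.
import json
import urllib.request
from datetime import datetime, timezone

def first_release_age_days(name: str) -> float:
    """Return days since the earliest upload of any release of `name`."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not upload_times:  # no uploaded files at all: treat as brand new
        return 0.0
    oldest = min(upload_times)
    return (datetime.now(timezone.utc) - oldest).total_seconds() / 86400

if __name__ == "__main__":
    AGE_THRESHOLD_DAYS = 90  # arbitrary cutoff for illustration only
    age = first_release_age_days("requests")
    verdict = "suspiciously new" if age < AGE_THRESHOLD_DAYS else "established"
    print(f"requests: {verdict} ({age:.0f} days since first release)")
```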
Attack Patterns Emerging
Research shows hallucinated names follow a bimodal pattern when the same prompts are re-run (a rough measurement sketch closes this section):
- 43% of hallucinated names reappeared consistently on every repeat of the same prompt
- 39% never reappeared at all
This phenomenon has been dubbed "slopsquatting" - a play on typosquatting and the "slop" pejorative for AI output.
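To make that repeatability finding concrete, here is a minimal sketch of how such an experiment could be run. The suggest_packages() argument is a placeholder for whatever model call produces the dependency suggestions (not a real library function), existence is checked against PyPI's public JSON API, and the exact methodology of the cited research may differ.

```python
# Sketch: re-run one prompt many times and see which hallucinated package
# names recur on every run versus appear only once.
import urllib.error
import urllib.request
from collections import Counter
from functools import lru_cache
from typing import Callable, List

@lru_cache(maxsize=None)
def exists_on_pypi(name: str) -> bool:
    """True if the name resolves on PyPI's JSON API, False on a 404."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

def measure_persistence(prompt: str,
                        suggest_packages: Callable[[str], List[str]],
                        runs: int = 10):
    """Count how often each hallucinated (non-existent) name recurs."""
    counts = Counter()
    for _ in range(runs):
        for name in set(suggest_packages(prompt)):
            if not exists_on_pypi(name):
                counts[name] += 1
    persistent = [n for n, c in counts.items() if c == runs]  # recur every run
    one_off = [n for n, c in counts.items() if c == 1]        # effectively vanish
    return persistent, one_off
```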
Real-World Consequences
Recent incidents include:
- Google's AI Overview recommending the malicious @async-mutex/mutex npm package, a slopsquat of the legitimate async-mutex library
- Threat actor "_Iain" automating typo-squatted package creation at scale
Industry Response
The Python Software Foundation is:
- Implementing malware reporting APIs
- Improving typo-squatting detection
- Partnering with security teams
Security experts warn developers must:
- Verify that every AI-suggested package actually exists and is the intended project (a basic registry check is sketched after this list)
- Check for typos in names
- Review package contents before installation
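A first, mechanical step toward that advice is confirming the suggested name actually resolves on the relevant registry before running an install command. The sketch below queries PyPI's JSON API and the public npm registry endpoint; existence alone proves nothing about trustworthiness, so reviewing the package contents is still necessary.

```python
# Pre-install sanity check: does the suggested package exist on the registry?
# Existence does not imply safety; it only rules out names that are pure
# hallucinations (which an attacker may still register later).
import urllib.error
import urllib.parse
import urllib.request

REGISTRY_URLS = {
    "pypi": "https://pypi.org/pypi/{name}/json",
    "npm": "https://registry.npmjs.org/{name}",
}

def package_exists(name: str, registry: str = "pypi") -> bool:
    # Scoped npm names like @scope/pkg need the slash percent-encoded.
    encoded = urllib.parse.quote(name, safe="@")
    url = REGISTRY_URLS[registry].format(name=encoded)
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    # "requests" is real; the second name is a deliberately made-up example.
    for suggestion in ["requests", "definitely-not-a-real-package-xyz"]:
        status = "exists" if package_exists(suggestion) else "NOT FOUND"
        print(f"{suggestion}: {status}")
```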
As Socket CEO Feross Aboukhadijeh notes: "What a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful."