AI hallucinates fake software dependencies enabling new supply chain attacks
Hallucinated package names fuel slopsquatting as AI coding tools invent non-existent dependencies
The rise of AI-powered code generation tools is introducing dangerous new risks to software development through hallucinated dependencies.
The Slopsquatting Threat
Security researchers have found that AI coding assistants frequently invent non-existent software packages in their suggestions:
- 5.2% of package suggestions from commercial models were hallucinated
- 21.7% of suggestions from open-source models were hallucinated
Malicious actors have begun exploiting this by:
- Creating malware under hallucinated package names
- Uploading them to registries like PyPI or npm
- Waiting for AI tools to recommend their fake packages to unsuspecting developers (a quick registry check, sketched below, can flag such names)
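The exposure window exists because a hallucinated name is usually unregistered at the moment the model suggests it, and only becomes dangerous once someone claims it. Below is a minimal sketch, assuming the public PyPI (`https://pypi.org/pypi/<name>/json`) and npm (`https://registry.npmjs.org/<name>`) metadata endpoints, of checking whether a suggested name is registered at all; the package names in the example are purely illustrative.

```python
import json
import urllib.error
import urllib.request

REGISTRY_URLS = {
    # Public metadata endpoints for the two registries named in the article.
    "pypi": "https://pypi.org/pypi/{name}/json",
    "npm": "https://registry.npmjs.org/{name}",
}

def is_registered(registry: str, name: str) -> bool:
    """Return True if `name` exists on the given registry, False if it is unclaimed."""
    url = REGISTRY_URLS[registry].format(name=name)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)          # parse to confirm we got real metadata back
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:          # unclaimed name: exactly what a slopsquatter wants
            return False
        raise

if __name__ == "__main__":
    # Illustrative names only; substitute whatever a coding assistant suggested.
    for registry, name in [("pypi", "requests"), ("npm", "async-mutex")]:
        status = "registered" if is_registered(registry, name) else "UNCLAIMED"
        print(f"{registry}:{name} -> {status}")
```

Note that an unclaimed name is only safe until someone registers it, which is why existence checks alone are not a complete defense.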
Attack Patterns Emerging
Research shows hallucinated names follow a bimodal pattern:
- 43% reappear every time the same prompt is re-run
- 39% never reappear at all
The repeatable names are the dangerous ones: an attacker who notices a model hallucinating the same package name again and again can register it and simply wait for developers to install it.
This phenomenon has been dubbed "slopsquatting", a play on "typosquatting" and "slop", the pejorative for low-quality AI output.
Real-World Consequences
Recent incidents include:
- Google's AI Overview recommending @async-mutex/mutex, a malicious npm package impersonating the legitimate async-mutex library
- A threat actor known as "_Iain" automating the creation of typo-squatted packages at scale
Industry Response
The Python Software Foundation is:
- Implementing malware reporting APIs
- Improving typo-squatting detection
- Partnering with security teams
Security experts warn that developers must:
- Verify that every AI-suggested package actually exists and is well established
- Check for typos and near-miss names
- Review package contents before installation (a metadata-check sketch follows this list)
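Existence alone proves little, since attackers register hallucinated names precisely so that a lookup succeeds. The sketch below, again against the public PyPI JSON API, pulls a few signals worth eyeballing before running pip install; the 30-day age threshold and the package name are illustrative assumptions, not a vetting standard.

```python
import json
import urllib.request
from datetime import datetime, timezone

def pypi_signals(name: str) -> dict:
    """Fetch a PyPI project's metadata and derive a few pre-install sanity signals."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        meta = json.load(resp)

    # Earliest upload across all releases approximates the project's age.
    uploads = [
        f["upload_time_iso_8601"]
        for files in meta["releases"].values()
        for f in files
    ]
    age_days = None
    if uploads:
        created = datetime.fromisoformat(min(uploads).replace("Z", "+00:00"))
        age_days = (datetime.now(timezone.utc) - created).days

    return {
        "name": meta["info"]["name"],
        "latest_version": meta["info"]["version"],
        "release_count": len(meta["releases"]),
        "age_days": age_days,
    }

if __name__ == "__main__":
    signals = pypi_signals("requests")   # illustrative package name
    print(signals)
    # Illustrative threshold: a package only days old deserves extra scrutiny.
    if signals["age_days"] is not None and signals["age_days"] < 30:
        print("WARNING: very new package -- review its contents before installing")
```

A very young package with a single release and a name close to a popular library is the classic slopsquatting profile, and none of those signals are visible from the install command alone.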
As Socket CEO Feross Aboukhadijeh notes: "What a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful."