AI Crypto Plugins Pose Security Risks as Vulnerabilities Exposed
AI plugins designed to assist with cryptocurrency management may inadvertently expose wallets to hackers due to critical vulnerabilities in MCP protocols.
Published: Sun 25 May 2025 | 5 min read | By Mikaia A.

Key Takeaways
- Crypto AI agents use the Model Context Protocol (MCP), which is flexible but vulnerable to targeted attacks.
- Malicious plugins can hijack AI agents to steal private keys and crypto funds.
- SlowMist identified four major attack vectors through an educational project called MasterMCP.
- Securing plugins, behaviors, and privileges must become a top priority for crypto AI developers.
The Emergence of a New Threat
Artificial intelligence is rapidly entering the crypto space. By the end of 2024, over 10,000 crypto AI agents were active, with projections exceeding one million by 2025. These AI agents, seen as a revolution in the sector, are not standalone models like GPT-4 but extensions connected in real time to wallets, bots, or dApps.
Their mission? To make automated decisions and execute on-chain actions. All based on the Model Context Protocol (MCP). However, this flexibility is also its weakness. MCP acts as the brain of these agents, deciding which tools to use and how to respond. According to SlowMist, this architecture opens an "uncontrollable surface without strict sandboxing." Malicious plugins can hijack an agent, inject toxic data, or make it call trapped external functions.
Security expert Monster Z explains:
"Poisoning of agents and MCPs results from malicious information introduced during the interaction phase."
In short, even a well-trained agent can betray if it receives a toxic instruction at the wrong time. Worse: this threat surpasses classic AI model poisoning in severity.
A System That Can Self-Destruct from Within
The attacks are diverse, precise, and sneaky. SlowMist documents four main ones in its report. The MasterMCP project reproduces them to help developers understand the danger.
- Data poisoning uses plugins to make the agent perform absurd tasks or mislead the user.
Related News
AI Agents Fuel Identity Debt Risks Across APAC
Organizations must adopt secure authorization flows for AI environments rather than relying on outdated authentication methods to mitigate identity debt and stay ahead of attackers.
Dynamic Context Firewall Enhances AI Security for MCP
A Dynamic Context Firewall for Model Context Protocol offers adaptive security for AI agent interactions, addressing risks like data exfiltration and malicious tool execution.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.