Echoleak Attack Exposes AI Assistant Vulnerabilities Without Malware
Echoleak is a new attack vector targeting AI assistants like Microsoft 365 Copilot through prompt manipulation, bypassing traditional security measures without malware or phishing.
Researchers at Check Point have uncovered a new zero-click attack vector called Echoleak, which exploits AI assistants like Microsoft 365 Copilot through subtle prompt manipulation—no malware or phishing required. The attack marks a significant shift in cybersecurity threats, as it relies solely on language as a weapon.
How the Attack Works
- The attack injects malicious prompts into seemingly innocent documents or emails.
- Copilot interprets these prompts as commands, not data, leading to unauthorized disclosure of sensitive information (e.g., internal files, emails, or credentials).
- No user interaction is needed; the attack executes automatically.
Obedience as a Weakness
Large Language Model (LLM)-based AI assistants are designed to follow instructions, even when ambiguous. Their deep integration with operating systems and productivity tools creates a dangerous combination: a highly obedient tool with access to critical data.
"The attack vector has shifted from code to conversation," says Check Point. "We’ve built systems that actively convert language into actions. That changes everything."
Limitations of Current Safeguards
Many companies rely on LLM "watchdogs" to filter harmful instructions, but these models are vulnerable to the same deception. Attackers can:
- Spread malicious intent across multiple prompts.
- Hide instructions in other languages.
- Exploit contextual gaps in safeguards (as seen with Echoleak).
Tip: Microsoft turns GitHub Copilot into a full-fledged AI agent
This discovery underscores the urgent need for robust defenses against AI-driven social engineering attacks.
Related News
Microsoft Releases Open-Source AI Agent Framework
Microsoft unveils its open-source Agent Framework to streamline AI agent development with enterprise-ready tools and simplified coding.
GoDaddy Launches Trusted Identity System for AI Agents
GoDaddy introduces a trusted identity naming system for AI agents to verify legitimacy and ensure secure interactions as the AI agent landscape grows.
About the Author

Michael Rodriguez
AI Technology Journalist
Veteran technology journalist with 12 years of focus on AI industry reporting. Former AI section editor at TechCrunch, now freelance writer contributing in-depth AI industry analysis to renowned media outlets like Wired and The Verge. Has keen insights into AI startups and emerging technology trends.