Cyata raises $8.5M to secure rogue AI agents threatening enterprise systems
Cyata secures seed funding to address the growing risk of autonomous AI agents causing chaos in corporate environments by rewriting code, leaking data, and moving funds undetected.
Tel Aviv-based cybersecurity startup Cyata has emerged from stealth with $8.5 million in seed funding to address the escalating risks posed by autonomous AI agents in enterprise environments. The round was led by TLV Partners, with backing from former Cellebrite CEOs Ron Serber and Yossi Carmil.
The Rising Threat of Unsupervised AI Agents
- Autonomous agents are increasingly operating without oversight, executing code, querying sensitive databases, and initiating transactions—often outside traditional identity frameworks.
- Cyata CEO Shahar Tal describes these agents as a "self-scaling, sleepless workforce" capable of causing havoc by:
  - Rewriting application code
  - Sharing confidential data
  - Moving money between accounts
  - All while leaving no audit trail
Cyata’s Solution: A Control Plane for Agentic Identities
The startup’s platform offers:
- Automated discovery of AI agents across cloud/SaaS environments
- Permission mapping to human owners
- Real-time risk assessment and forensic tracking
- Justification requirements for agent actions
- Lockdown protocols for unauthorized agents
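To make the model concrete, the sketch below shows how such a control plane might gate a single agent action: every request is tied to a registered agent, mapped to a human owner, checked against granted permissions, required to carry a justification, and logged for forensics. The class names, permission model, and data structures here are illustrative assumptions, not Cyata's actual API.

```python
# Minimal sketch of an agent-action policy gate. All names (AgentIdentity,
# PolicyDecision, the permission model) are illustrative assumptions,
# not Cyata's actual API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    human_owner: str | None                       # permission mapping back to a person
    allowed_actions: set[str] = field(default_factory=set)

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str

AUDIT_LOG: list[dict] = []                        # forensic trail for every decision

def evaluate_action(agent: AgentIdentity | None, action: str, justification: str) -> PolicyDecision:
    """Decide whether a discovered agent may perform an action."""
    if agent is None:
        decision = PolicyDecision(False, "unknown agent: locked down pending review")
    elif not justification.strip():
        decision = PolicyDecision(False, "missing justification for requested action")
    elif agent.human_owner is None:
        decision = PolicyDecision(False, "no accountable human owner mapped")
    elif action not in agent.allowed_actions:
        decision = PolicyDecision(False, f"action '{action}' outside granted permissions")
    else:
        decision = PolicyDecision(True, "within policy")

    # Record the decision whether or not the action was allowed.
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent.agent_id if agent else "unregistered",
        "action": action,
        "justification": justification,
        "allowed": decision.allowed,
        "reason": decision.reason,
    })
    return decision

# Example: a finance agent asking to move funds without a justification is denied.
finance_bot = AgentIdentity("agent-042", human_owner="cfo@example.com",
                            allowed_actions={"read_ledger"})
print(evaluate_action(finance_bot, "transfer_funds", justification=""))
```

The key design point is that the decision and the forensic record are produced in the same place, so every agent action, allowed or denied, leaves an audit trail.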
Tal emphasizes that Cyata focuses on the agents themselves rather than the underlying LLMs: "Agents, not models, are the ones making the decisions and triggering risk."
Industry Warnings Amplify Urgency
A Transmit Security white paper reveals alarming trends:
- Over 60% of retail web traffic is already bot-generated
- That share is expected to surpass 90% as consumer AI agents proliferate
- Fraud teams may face two to three times the workload to maintain existing protections
"Behavioral biometrics fail when there are no human signals," notes the report, highlighting how traditional fraud detection is becoming obsolete.
Emerging Vulnerabilities: IdentityMesh and CAPTCHA Failures
Security researchers identified critical weaknesses:
- IdentityMesh: exploits how agents merge identities across systems connected through the Model Context Protocol
  - Collapses security boundaries between otherwise separate systems
  - Enables cross-system attacks
- CAPTCHA bypass: AI agents now mimic human behavior to defeat verification
  - Demonstrate "natural" cursor movements and timing
  - Render traditional bot detection ineffective
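The identity-merging problem can be pictured with a short sketch. Below, a single shared agent context pools credentials from several connected systems, so a task originating in one system can act on another with borrowed authority. The data structures are hypothetical and intentionally simplified, not code from any published proof of concept.

```python
# Simplified sketch of the identity-merging anti-pattern behind attacks like
# IdentityMesh. The point: one shared agent context holds credentials for
# several systems, so instructions that originate in one system can reach
# another with borrowed authority.

class SharedAgentContext:
    """One context object reused across every connected tool server."""
    def __init__(self) -> None:
        self.credentials: dict[str, str] = {}    # system name -> token

    def register(self, system: str, token: str) -> None:
        self.credentials[system] = token         # identities silently merge here

    def act(self, target_system: str, command: str) -> str:
        # No check on which system the request originated from: a task picked
        # up while reading the CRM can fire a command at the payments API.
        token = self.credentials[target_system]
        return f"executing '{command}' on {target_system} with token {token[:4]}..."

ctx = SharedAgentContext()
ctx.register("crm", "crm-token-1234")
ctx.register("payments", "pay-token-5678")

# A malicious instruction embedded in CRM data crosses the boundary unimpeded.
print(ctx.act("payments", "transfer $10,000"))
```

Context isolation, covered in the next section, addresses exactly this pattern: each connected system gets its own scoped context and credentials, so instructions cannot silently cross boundaries.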
Mitigation Strategies
Recommended countermeasures include:
- Implementing context isolation between agent operations
- Deploying runtime monitoring for cross-system behavior
- Requiring user approval for critical actions
- Containerizing high-risk components
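As a rough illustration, the sketch below combines two of these measures: per-system context isolation and mandatory human approval for critical actions. The class names and approval hook are assumptions made for illustration; a production deployment would enforce these checks in a policy engine or gateway rather than inside the agent's own process.

```python
# Illustrative sketch of two countermeasures: per-system context isolation and
# human approval for critical actions. Names and the approval hook are
# assumptions, not a specific vendor's implementation.
from typing import Callable

CRITICAL_ACTIONS = {"transfer_funds", "delete_records", "rotate_keys"}

class IsolatedContext:
    """Each connected system gets its own context and credential scope."""
    def __init__(self, system: str, token: str) -> None:
        self.system = system
        self._token = token                       # never shared across contexts

    def act(self, action: str, approve: Callable[[str, str], bool]) -> str:
        # Runtime check: critical actions never execute without explicit approval.
        if action in CRITICAL_ACTIONS and not approve(self.system, action):
            return f"blocked: '{action}' on {self.system} requires human approval"
        return f"executed '{action}' on {self.system}"

def console_approval(system: str, action: str) -> bool:
    """Stand-in approval hook; a real system would page the mapped owner."""
    answer = input(f"Approve '{action}' on {system}? [y/N] ")
    return answer.strip().lower() == "y"

crm = IsolatedContext("crm", "crm-token-1234")
payments = IsolatedContext("payments", "pay-token-5678")

print(crm.act("read_contact", console_approval))         # runs without approval
print(payments.act("transfer_funds", console_approval))  # waits for a human
```

In practice the approval hook would notify the agent's mapped human owner rather than prompt a local console, and the isolation boundary would be enforced outside the agent process.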
TLV Partners’ Brian Sack predicts "massive demand" for Cyata’s platform as organizations scramble to prevent "potentially catastrophic breaches."
Tags: AI Security, Cybersecurity, Fraud Prevention
Related News
Zscaler CAIO on securing AI agents and blending rule-based with generative models
Claudionor Coelho Jr, Chief AI Officer at Zscaler, discusses AI's rapid evolution, cybersecurity challenges, and combining rule-based reasoning with generative models for enterprise transformation.
Rubrik Launches AI Error Recovery Tool Agent Rewind
Rubrik introduces Agent Rewind, an AI-driven data recovery solution addressing risks of autonomous AI errors in enterprises, following its Predibase acquisition.
About the Author

Alex Thompson
AI Technology Editor
Senior technology editor specializing in AI and machine learning content creation for 8 years. Former technical editor at AI Magazine, now provides technical documentation and content strategy services for multiple AI companies. Excels at transforming complex AI technical concepts into accessible content.