AI Agents Pose Growing Security Risks Despite Widespread Adoption
96% of tech professionals see AI agents as a growing risk, yet 98% of organizations plan to expand their use. Only 44% have policies to secure them.
82% of organizations already use AI agents, but only 44% have policies in place to secure them, according to a report by SailPoint. Another 53% say they are developing such policies, leaving most organizations exposed in the meantime.
AI Agents as a Security Threat
- 96% of technology professionals consider AI agents a growing risk, even as 98% of organizations plan to expand their use within the next year.
- 72% believe AI agents present a greater risk than traditional machine identities.
Key Risk Factors:
- Access to privileged data (60%)
- Potential for unintended actions (58%)
- Sharing privileged data (57%)
- Decisions based on inaccurate data (55%)
- Accessing/sharing inappropriate information (54%)
Chandra Gnanasambandam, EVP of Product and CTO at SailPoint, warns: "Agentic AI is both a powerful force for innovation and a potential risk... They often operate with broad access to sensitive systems and data, yet have limited oversight."
Governance Challenges
- 92% say governing AI agents is critical to enterprise security.
- 23% reported AI agents being tricked into revealing credentials.
- 80% say their AI agents have taken unintended actions.
Unintended Actions Include:
- Accessing unauthorized systems (39%)
- Accessing sensitive data (33%)
- Downloading sensitive data (32%)
- Inappropriately sharing data (31%)
Visibility Gaps
- IT teams (71%) are most informed about AI agent data access.
- Awareness drops sharply among compliance (47%), legal (39%), and executives (34%).
- Only 52% can track and audit all data used/shared by AI agents.
Call to Action
Organizations must implement identity security solutions with AI-specific controls (see the sketch after this list) to:
- Restrict access to sensitive data
- Maintain audit trails
- Provide transparency to stakeholders
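These controls can be prototyped in ordinary application code. The sketch below is a minimal, hypothetical illustration of two of the points above, not something drawn from the SailPoint report or any vendor product: each agent is bound to an explicit allowlist of data scopes, and every access attempt, whether allowed or denied, is written to an append-only audit trail. All names (AgentIdentity, AuditLog, guarded_access) are invented for illustration.

```python
# Minimal sketch of identity-first controls for an AI agent.
# All class and function names are hypothetical illustrations,
# not part of any vendor product or the SailPoint report.
import json
import time
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """An AI agent governed like any other identity."""
    agent_id: str
    owner: str                                  # accountable human or team
    allowed_scopes: set = field(default_factory=set)


class AuditLog:
    """Append-only audit trail of every data-access decision."""
    def __init__(self, path: str = "agent_audit.log"):
        self.path = path

    def record(self, agent: AgentIdentity, scope: str, allowed: bool) -> None:
        entry = {
            "ts": time.time(),
            "agent": agent.agent_id,
            "owner": agent.owner,
            "scope": scope,
            "decision": "allow" if allowed else "deny",
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")


def guarded_access(agent: AgentIdentity, scope: str, audit: AuditLog) -> bool:
    """Permit access to a data scope only if it was explicitly granted."""
    allowed = scope in agent.allowed_scopes
    audit.record(agent, scope, allowed)          # log every attempt, not just failures
    if not allowed:
        raise PermissionError(f"{agent.agent_id} is not granted scope '{scope}'")
    return True


if __name__ == "__main__":
    audit = AuditLog()
    support_bot = AgentIdentity("support-bot-01", owner="it-ops",
                                allowed_scopes={"tickets:read"})
    guarded_access(support_bot, "tickets:read", audit)        # allowed, logged
    try:
        guarded_access(support_bot, "payroll:read", audit)    # denied, logged
    except PermissionError as err:
        print(err)
```

The design choice here mirrors the report's recommendation: the agent carries its own identity and owner, access is deny-by-default, and the audit log gives compliance, legal, and executive stakeholders the visibility the survey found lacking.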
"As organizations expand their use of AI agents, they must take an identity-first approach to ensure these agents are governed as strictly as human users," emphasizes Gnanasambandam.
Related News
AI Agents Pose New Security Challenges for Defenders
Palo Alto Networks' Kevin Kin discusses the growing security risks posed by AI agents and the difficulty of distinguishing their behavior from that of human users.
AI OS Agents Pose Security Risks as Tech Giants Accelerate Development
New research highlights rapid advancements in AI systems that operate computers like humans, raising significant security and privacy concerns across industries.
About the Author

Dr. Sarah Chen
AI Research Expert
A seasoned AI expert with 15 years of research experience, including 8 years at the Stanford AI Lab, specializing in machine learning and natural language processing. She currently serves as a technical advisor to multiple AI companies and regularly contributes AI technology analysis to authoritative outlets such as MIT Technology Review.