Securing AI Agents: A New Identity Challenge
Rotem Zach, VP of Innovation at Silverfort, explains why AI agents differ from non-human identities and how to secure them in digital environments.
AI agents are transforming enterprise operations by autonomously executing tasks and making decisions. However, their unique nature poses significant security challenges, as explained by Rotem Zach, VP of Innovation at Silverfort. Unlike traditional non-human identities (NHIs), AI agents are dynamic, autonomous, and capable of learning, making them a new category of identity with distinct risks.
What Sets AI Agents Apart
AI agents differ from NHIs in several key ways:
- Autonomy: They can reason, adapt, and make decisions independently.
- Dynamic Behavior: Their actions are context-driven and unpredictable.
- Risk Profile: Their autonomy introduces risks akin to human users, such as unintended actions or misinterpretations.
Traditional NHIs, like service accounts or OAuth tokens, are static and predictable, designed for repetitive tasks. In contrast, AI agents can interact with critical systems (e.g., email, CRM, customer data) in ways that are hard to foresee.
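The contrast can be sketched in code: a traditional NHI carries a fixed credential with a scope that is fully enumerable at provisioning time, while an agent chooses actions at runtime from goals and context, so its effective scope cannot be listed up front. This is an illustrative sketch only; all class and action names are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceAccount:
    """Traditional NHI: a static credential with a fixed, enumerable scope."""
    name: str
    allowed_actions: frozenset  # known at provisioning time, rarely changes

    def can(self, action: str) -> bool:
        return action in self.allowed_actions

@dataclass
class AIAgent:
    """AI agent: actions are chosen at runtime from a goal and context."""
    name: str
    goal: str

    def plan_actions(self, context: dict) -> list:
        # A real agent would derive this from a model; here we fake
        # context-driven behavior to show it is not enumerable up front.
        actions = ["read:crm"]
        if context.get("customer_unhappy"):
            actions.append("send:email")  # a side effect nobody provisioned for
        return actions

backup_job = ServiceAccount("nightly-backup", frozenset({"read:db", "write:s3"}))
assert backup_job.can("read:db") and not backup_job.can("send:email")

agent = AIAgent("support-agent", goal="resolve ticket")
print(agent.plan_actions({"customer_unhappy": True}))  # output depends on context
```

The point of the sketch: a security review can fully audit `allowed_actions` once, but `plan_actions` must be evaluated every time it runs.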
Key Differences Between NHIs and AI Agents
- Purpose and Behavior: AI agents interpret goals and take actions, while NHIs follow rigid instructions.
- Risk Profile: AI agents can cause harm through unintended actions, whereas NHIs pose risks primarily through misuse or overprivileged access.
- Lifecycle Management: AI agents evolve over time, requiring dynamic security measures, while NHIs follow a more straightforward lifecycle.
The Path Forward
To harness the potential of AI agents, organizations must adopt new security frameworks that address their unique challenges. Silverfort is developing solutions to monitor and control AI agent access, ensuring secure adoption.
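One way to make "monitor and control AI agent access" concrete is a runtime policy gate that evaluates and logs every action an agent attempts, instead of issuing a broad static token up front. The sketch below is a minimal illustration of that pattern under assumed names; it is not Silverfort's implementation, and the policy table and action strings are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Hypothetical per-agent policy: routine reads are allowed,
# sensitive actions require step-up approval, everything else is denied.
POLICY = {
    "support-agent": {
        "allow": {"read:crm", "read:email"},
        "require_approval": {"send:email", "update:crm"},
    }
}

def gate(agent: str, action: str) -> str:
    """Decide allow / approval / deny for one attempted action, and log it."""
    rules = POLICY.get(agent, {})
    if action in rules.get("allow", set()):
        decision = "allow"
    elif action in rules.get("require_approval", set()):
        decision = "approval"  # pause and escalate to a human or policy engine
    else:
        decision = "deny"      # default-deny anything not explicitly provisioned
    logging.info("agent=%s action=%s decision=%s", agent, action, decision)
    return decision

assert gate("support-agent", "read:crm") == "allow"
assert gate("support-agent", "send:email") == "approval"
assert gate("support-agent", "delete:db") == "deny"
```

The default-deny branch matters most here: because an agent's behavior is not enumerable in advance, anything the policy has not explicitly considered should stop rather than proceed.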
Rotem Zach leads Silverfort’s research team and brings expertise from elite cybersecurity roles in the Israel Defense Forces.
For more on AI agent security, visit Silverfort’s platform.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.