Securing AI Agents: A New Identity Challenge
Rotem Zach, VP of Innovation at Silverfort, explains why AI agents differ from non-human identities and how to secure them in digital environments.
AI agents are transforming enterprise operations by autonomously executing tasks and making decisions. However, their unique nature poses significant security challenges, as explained by Rotem Zach, VP of Innovation at Silverfort. Unlike traditional non-human identities (NHIs), AI agents are dynamic, autonomous, and capable of learning, making them a new category of identity with distinct risks.
What Sets AI Agents Apart
AI agents differ from NHIs in several key ways:
- Autonomy: They can reason, adapt, and make decisions independently.
- Dynamic Behavior: Their actions are context-driven and unpredictable.
- Risk Profile: Their autonomy introduces risks akin to human users, such as unintended actions or misinterpretations.
Traditional NHIs, like service accounts or OAuth tokens, are static and predictable, designed for repetitive tasks. In contrast, AI agents can interact with critical systems (e.g., email, CRM, customer data) in ways that are hard to foresee.
Key Differences Between NHIs and AI Agents
- Purpose and Behavior: AI agents interpret goals and take actions, while NHIs follow rigid instructions.
- Risk Profile: AI agents can cause harm through unintended actions, whereas NHIs pose risks primarily through misuse or overprivileged access.
- Lifecycle Management: AI agents evolve over time, requiring dynamic security measures, while NHIs follow a more straightforward lifecycle.
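The contrast above can be sketched as a toy access-control check. Everything in this snippet (the function names, the policy shape, the context fields) is a hypothetical illustration of the static-versus-dynamic distinction, not Silverfort's implementation:

```python
# Hypothetical sketch: static NHI authorization vs. context-aware agent authorization.

# A traditional NHI (e.g., a service account) can be checked against a fixed
# allowlist, because its behavior is predictable and enumerable in advance:
NHI_ALLOWED_ACTIONS = {
    "backup-svc": {"read:database", "write:backup-store"},
}

def authorize_nhi(identity: str, action: str) -> bool:
    """Static check: the permitted actions never change at runtime."""
    return action in NHI_ALLOWED_ACTIONS.get(identity, set())

# An AI agent's request must be evaluated in context, because its actions are
# goal-driven and cannot be fully enumerated ahead of time:
def authorize_agent(identity: str, action: str, context: dict) -> bool:
    """Dynamic check: the decision depends on runtime context, not just identity."""
    if context.get("resource_sensitivity") == "high" and not context.get("human_approved"):
        return False  # require human sign-off before touching sensitive resources
    if context.get("deviates_from_baseline"):
        return False  # block actions outside the agent's observed behavior profile
    return True

print(authorize_nhi("backup-svc", "read:database"))        # allowed: on the allowlist
print(authorize_agent("sales-agent", "export:customer-data",
                      {"resource_sensitivity": "high"}))   # denied: no human approval
```

The point of the sketch is the shape of the decision, not the specific rules: an NHI policy is a lookup table, while an agent policy has to weigh runtime signals such as data sensitivity or deviation from expected behavior.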
The Path Forward
To harness the potential of AI agents, organizations must adopt new security frameworks that address their unique challenges. Silverfort is developing solutions to monitor and control AI agent access, ensuring secure adoption.
Rotem Zach leads Silverfort’s research team and brings expertise from elite cybersecurity roles in the Israel Defense Forces.
For more on AI agent security, visit Silverfort’s platform.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. A former member of the Microsoft AI Ethics Committee, she now consults for multiple international AI governance organizations and regularly contributes AI ethics articles to top-tier journals such as Nature and Science.