Okta Unveils Identity Security Fabric to Protect AI Agents
Okta's new Identity Security Fabric integrates lifecycle management, cross-app access, and verifiable credentials to secure AI agents and reduce enterprise attack surfaces.
Identity management vendor Okta has introduced its Identity Security Fabric, a unified platform designed to secure AI agents and replace fragmented security solutions.
Key Announcement
- Unveiled at Okta’s annual Oktane conference in Las Vegas, the fabric aims to address the growing risks posed by AI agents operating with elevated privileges.
- The platform integrates user management, application security, and AI oversight into a single system.
Why It Matters
- Research shows: 91% of organizations use AI agents, yet only 10% have a strategy for managing these non-human identities (Gartner).
- Security risks: Incidents such as an AI hiring bot exposing data because of weak passwords underscore the urgency.
Three Core Components
- AI Agent Lifecycle Management ("Okta for AI Agents"):
  - Discovers AI agents, enforces access controls, and monitors their activity.
  - Planned for early access in Q1 2027.
- Cross App Access:
  - Extends OAuth to secure AI-agent-to-application communications; see the token-exchange sketch after this list.
  - Backed by AWS, Google Cloud, Salesforce, and others.
  - Shifts security control to centralized identity systems.
- Verifiable Digital Credentials (VDC):
  - Issues tamper-proof credentials for IDs, employment records, and certifications; see the signing sketch after this list.
  - Scheduled for fiscal 2027 release, starting with mobile driver’s licenses.
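To make Cross App Access concrete, here is a minimal sketch of the general pattern it builds on: routing an AI agent's access to downstream applications through the enterprise identity provider, in the style of OAuth 2.0 Token Exchange (RFC 8693). The endpoint URL, tokens, audience, and scope below are illustrative assumptions, not Okta's actual API, and the real Cross App Access specification may use different grant types and parameters.

```python
# Hypothetical sketch: an AI agent exchanges its own identity token for a
# scoped, audience-restricted token before calling a downstream application.
# Endpoint, tokens, audience, and scope are illustrative placeholders.
import requests

IDP_TOKEN_ENDPOINT = "https://idp.example.com/oauth2/v1/token"  # assumed enterprise IdP
AGENT_ID_TOKEN = "eyJhbGciOi..."  # token proving the agent's identity (assumed already issued)


def get_cross_app_token(target_audience: str, scope: str) -> str:
    """Exchange the agent's identity token for one the target app will accept.

    Uses the OAuth 2.0 Token Exchange grant (RFC 8693), so the identity
    provider stays in the loop and access can be observed and revoked centrally.
    """
    resp = requests.post(
        IDP_TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": AGENT_ID_TOKEN,
            "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
            "audience": target_audience,  # the downstream app, e.g. a CRM API
            "scope": scope,               # least-privilege scope for this task
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


# The agent calls the downstream app only with the exchanged, scoped token.
token = get_cross_app_token("https://crm.example.com", "contacts.read")
requests.get(
    "https://crm.example.com/api/contacts",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
```

Because every exchange passes through the identity provider, the organization gets one place to grant, observe, and revoke an agent's access instead of scattered per-app API keys, which is what "shifts security control to centralized identity systems" means in practice.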
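The "tamper-proof" property of Verifiable Digital Credentials likewise comes down to digital signatures: an issuer signs the claims, and any verifier can check them offline against the issuer's public key. The snippet below is only a minimal illustration of that signing idea using an Ed25519 key; it is not Okta's VDC format or the mobile driver's license profile.

```python
# Hypothetical sketch: a signed credential is tamper-evident because any
# change to the claims invalidates the issuer's signature.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side (e.g., an employer attesting to an employment record).
issuer_key = Ed25519PrivateKey.generate()
claims = {"subject": "employee-42", "employer": "Example Corp", "role": "Engineer"}
payload = json.dumps(claims, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verifier side: needs only the issuer's public key, not a call back to the issuer.
public_key = issuer_key.public_key()
public_key.verify(signature, payload)  # passes: the credential is intact

# Any edit to the claims breaks verification.
forged = json.dumps({**claims, "role": "Administrator"}, sort_keys=True).encode()
try:
    public_key.verify(signature, forged)
except InvalidSignature:
    print("tampered credential rejected")
```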
Industry Context
- Gartner prediction: By 2027, identity fabric immunity will prevent 85% of new attacks and reduce breach costs by 80%.
- Rubrik’s response: Launched Rubrik Okta Recovery to back up and recover Okta environments, highlighting broader industry concerns.
Expert Insight
"The modern enterprise requires an identity security fabric that can unify silos and reduce the attack surface," said Kristen Swanson, Okta’s SVP of Design and Research.
The fabric also targets challenges such as AI agents operating at machine speed and AI-driven deepfakes that make genuine and synthetic identities harder to distinguish.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.