Microsoft Copilot Agent Security Flaw Exposes Sensitive AI Operations
Microsoft reveals a critical flaw in Copilot agent policies, allowing unauthorized access to sensitive AI operations across organizations.
Microsoft has disclosed a critical flaw in its Copilot agents’ governance framework, allowing any authenticated user to access and interact with AI agents within an organization—bypassing policy controls and exposing sensitive operations to unauthorized actors.
The Flaw Explained
- Policy Enforcement Failure: Copilot Agent Policies are not enforced when users enumerate and invoke AI agents via Graph API endpoints.
- Admin Center vs. Graph API: While the Microsoft 365 admin center correctly hides restricted agents, the Graph API exposes all agents, including those marked "private" or limited to privileged roles.
- Exploit Details: Unauthorized users can retrieve agent identifiers, metadata, and endpoints with a simple GET request to https://graph.microsoft.com/beta/ai/agents/ and invoke agents without policy checks.
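For illustration, here is a minimal sketch of the enumeration step described above. It assumes the caller already holds a standard OAuth 2.0 bearer token for Microsoft Graph; the endpoint is the one named in this report, and the response fields used below are illustrative rather than a documented schema.

```python
import requests

# Minimal sketch of the enumeration step described above.
# Assumes an already-acquired OAuth 2.0 access token for Microsoft Graph.
# The /beta/ai/agents/ endpoint is the one cited in this report; the response
# fields referenced below are illustrative, not a documented schema.

GRAPH_AGENTS_URL = "https://graph.microsoft.com/beta/ai/agents/"


def list_agents(access_token: str) -> list[dict]:
    """Return every AI agent the Graph endpoint exposes to the caller."""
    headers = {"Authorization": f"Bearer {access_token}"}
    resp = requests.get(GRAPH_AGENTS_URL, headers=headers, timeout=30)
    resp.raise_for_status()
    # Graph collection responses conventionally wrap results in "value".
    return resp.json().get("value", [])


if __name__ == "__main__":
    token = "<access-token-of-any-authenticated-user>"
    for agent in list_agents(token):
        # Prior to the August 2025 patch, agents marked "private" or restricted
        # to privileged roles would still appear in this listing.
        print(agent.get("id"), agent.get("displayName"))
```

The core of the issue was not the request itself but that it succeeded for any authenticated user, regardless of the Copilot Agent Policies configured in the Microsoft 365 admin center.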
Impact and Risks
- Zero-Trust Compromised: The flaw undermines Microsoft’s zero-trust posture, exposing sensitive workflows like privileged credential rotation and executive briefing generation to all users.
- Severity: Tracked as CVE-2025-XXXX, the flaw carries a CVSS 3.1 score of 9.1 (Critical).
Microsoft’s Response
- Swift Remediation: Microsoft patched the policy enforcement middleware in August 2025 and notified customers via the Microsoft 365 Message Center.
- Engineer’s Admission: A Microsoft engineer confirmed the oversight, stating, "We thought tenant administrators had exclusive visibility into their AI agents, but the enforcement plane in Graph was wide open."
Recommendations for Organizations
- Audit Graph API Permissions: Restrict unnecessary access to AI-related endpoints.
- Implement Conditional Access: Require multi-factor authentication and device compliance for Graph API usage.
- Monitor Agent Activity: Set up SIEM alerts for unusual agent calls or off-hours access; a minimal detection sketch follows this list.
- Review Agent Catalog: Delete unused agents to reduce the attack surface.
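As a starting point for the monitoring recommendation above, here is a minimal detection sketch. The record shape (timestamp, userPrincipalName, requestUri) is an assumption about what a log pipeline might forward to a SIEM, not a Microsoft schema, and the business-hours window and guest-account heuristic are placeholders to adapt per organization.

```python
from datetime import datetime, timezone

# Illustrative detection helper for the monitoring recommendation.
# The record keys ("timestamp", "userPrincipalName", "requestUri") are an
# assumed log-forwarding shape, not a Microsoft schema.

BUSINESS_HOURS = range(7, 19)        # 07:00-18:59 UTC; adjust per organization
AGENT_PATH_MARKER = "/ai/agents"     # the Graph path named in this report


def flag_suspicious(record: dict) -> list[str]:
    """Return the reasons this audit record should raise a SIEM alert."""
    reasons = []
    ts = datetime.fromisoformat(record["timestamp"]).astimezone(timezone.utc)

    if AGENT_PATH_MARKER in record.get("requestUri", ""):
        if ts.hour not in BUSINESS_HOURS:
            reasons.append("agent endpoint accessed outside business hours")
        if "#EXT#" in record.get("userPrincipalName", ""):
            reasons.append("agent endpoint accessed by guest account")
    return reasons


# Example: a hypothetical forwarded log record
sample = {
    "timestamp": "2025-08-14T02:13:07+00:00",
    "userPrincipalName": "jdoe@contoso.com",
    "requestUri": "https://graph.microsoft.com/beta/ai/agents/42/invoke",
}
print(flag_suspicious(sample))
```

In a production SIEM these checks would typically live as query rules rather than a script, but the logic, watching for agent-endpoint access outside normal hours or by unexpected identities, is the same.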
Broader Implications
This incident highlights the challenges of integrating AI automation into enterprise environments. As AI agents become mission-critical, ensuring airtight governance across all API layers is essential.
Reported by Divya, Senior Journalist at GBHackers.