Preparing for the Compliance Challenges of Agentic AI
Artificial intelligence continues to evolve, and so do the challenges it poses to corporate ethics and compliance programs. Even as many organizations are still taming the risks of generative AI, a more powerful cousin is coming up fast: agentic AI.
What is Agentic AI?
Agentic AI refers to AI "agents" capable of acting independently to achieve goals. These agents can devise strategies, learn from experiences, and even collaborate with other AI systems to solve complex tasks. While this technology offers benefits like automating personal tasks (e.g., booking concert tickets or managing home temperatures), its corporate applications introduce significant risks.
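To ground the definition, here is a minimal sketch of the plan-act-observe loop that underlies most agentic systems. Everything in it is illustrative: the "model" is a scripted stub, and the tools are stand-ins for real integrations.

```python
# Minimal sketch of the plan-act-observe loop behind most agentic AI.
# All names here are illustrative, not any specific vendor's API.

SCRIPTED_PLAN = iter(["check_inventory", "order_supplies", "done"])

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replays a scripted plan."""
    return next(SCRIPTED_PLAN)

TOOLS = {
    "check_inventory": lambda: {"widgets": 3},       # stub: read stock levels
    "order_supplies": lambda: "ordered 20 widgets",  # stub: place an order
}

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        # Plan: ask the model for the next action given the goal and history.
        action = call_model(f"Goal: {goal}\nHistory: {history}\nNext action?")
        if action == "done":
            break
        # Act, then observe: run the chosen tool and feed the result back in.
        history.append((action, TOOLS[action]()))
    return history

print(run_agent("keep widget stock above 10"))
```

In a production system, call_model would be a live model call and the loop would carry richer state, but the autonomy pattern, the agent choosing its own next action, is the same.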
Corporate Risks of Agentic AI
- Supply Chain Management: AI agents could autonomously order supplies based on inventory levels, raising concerns about supplier selection, forced labor, cybersecurity, and sanctions compliance (see the screening sketch after this list).
- Customer Service: AI-powered customer service bots might make unauthorized promises, as in the Air Canada case, where a chatbot gave a customer inaccurate refund-policy information and the airline was held liable for it.
- HR Functions: AI agents screening job applicants could inadvertently introduce bias, leading to discrimination risks and transparency issues.
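To make the supply-chain risk concrete, the sketch below gates an agent's purchase action behind a denied-party screen. The supplier names and the DENIED_PARTIES set are invented for illustration; a real program would screen against official sanctions and restricted-party lists.

```python
# Illustrative guardrail: an autonomous ordering action is blocked unless
# the supplier clears a denied-party screen. Names and data are invented.

DENIED_PARTIES = {"acme-sanctioned-co"}  # stand-in for a real sanctions list

class ComplianceError(Exception):
    pass

def screened_order(supplier: str, item: str, qty: int) -> str:
    if supplier.lower() in DENIED_PARTIES:
        # Fail closed: the agent cannot transact with a flagged supplier.
        raise ComplianceError(f"supplier {supplier!r} failed sanctions screen")
    return f"ordered {qty} x {item} from {supplier}"

print(screened_order("globex", "widgets", 20))
```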
Managing Agentic AI Risks
To mitigate these risks, companies should:
- Establish Governance Frameworks: Ensure AI adoption is centrally managed to prevent uncontrolled experimentation by employees.
- Conduct Risk Assessments: Evaluate AI use cases against AI-specific rules such as the EU AI Act, as well as broader regimes like anti-discrimination and privacy law.
- Implement Controls: Validate input data, audit AI outputs, and train employees on responsible AI usage; two of these controls are sketched after this list.
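As one hedged illustration of the "Implement Controls" item, the sketch below validates what goes into an agent and logs what comes out. The allowed-action set and log format are assumptions, not any specific framework's API.

```python
# Sketch of two controls: validating requests before an agent sees them,
# and keeping an audit trail of what the agent returns. All names invented.

import json
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

ALLOWED_ACTIONS = {"check_inventory", "order_supplies"}

def validate_input(request: dict) -> dict:
    # Reject malformed or out-of-scope requests up front.
    if request.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"action not approved: {request.get('action')!r}")
    return request

def audit_output(request: dict, result: str) -> str:
    # Record every agent decision so it can be reviewed later.
    audit_log.info(json.dumps({"request": request, "result": result}))
    return result

request = validate_input({"action": "order_supplies", "qty": 20})
audit_output(request, "ordered 20 widgets")
```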
Key Challenges
- Task Evaluation: Deciding which tasks to delegate to AI agents—should they handle mission-critical operations like marketing or inventory management?
- Agent Selection: Determining whether to use in-house-developed AI agents or third-party solutions, and how to govern collaborations between AI agents.
- Explainability: Ensuring AI agents can justify their decisions, especially when facing regulatory scrutiny (a decision-trace sketch follows this list).
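One plausible approach to the explainability challenge is to require agents to record a rationale alongside every action, so a decision trace can be produced on demand. The sketch below assumes a simple in-memory trace; the record layout and field names are invented.

```python
# Illustrative decision trace: every agent action is stored with the
# rationale the model gave at the time, so it can be produced on request.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    action: str
    rationale: str  # the model's stated justification for the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

decision_trace: list[DecisionRecord] = []

def record_decision(action: str, rationale: str) -> None:
    decision_trace.append(DecisionRecord(action, rationale))

record_decision(
    "order_supplies",
    "inventory of 3 widgets is below the reorder threshold of 10",
)
for rec in decision_trace:
    print(rec)
```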
Human-Agent Interactions
Treating AI agents like third-party contractors can help. Compliance programs should focus on:
- Employee Training: Educating staff on the risks of agentic AI.
- Policy Development: Defining approved AI agents and usage scenarios (see the allowlist sketch after this list).
- Accountability: Enforcing policies to hold employees responsible for AI agent outcomes.
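Treating agents like vetted third parties can be made operational with an approved-agent registry, as sketched below. The registry contents and function names are hypothetical; the point is that only approved agents run, and each use maps back to an accountable human.

```python
# Sketch of a policy check treating agents like vetted third parties: only
# registered agents with a named human owner may run, and each use is tied
# to the requesting employee. The registry contents are invented.

APPROVED_AGENTS = {
    # agent id         -> accountable human owner
    "supply-bot-v2":     "procurement@example.com",
    "support-triage-v1": "cx-lead@example.com",
}

def authorize_agent(agent_id: str, requested_by: str) -> str:
    owner = APPROVED_AGENTS.get(agent_id)
    if owner is None:
        raise PermissionError(f"agent {agent_id!r} is not on the approved list")
    # The requesting employee, not the agent, stays accountable for outcomes.
    print(f"{requested_by} is accountable for {agent_id} (owner: {owner})")
    return owner

authorize_agent("supply-bot-v2", "jane.doe@example.com")
```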
Ultimately, the success of agentic AI hinges on human oversight and ethical awareness. As AI integration grows, compliance programs must adapt to address these evolving challenges.
About the Author

Alex Thompson
AI Technology Editor
Senior technology editor specializing in AI and machine learning content creation for 8 years. Former technical editor at AI Magazine, now provides technical documentation and content strategy services for multiple AI companies. Excels at transforming complex AI technical concepts into accessible content.