AI Agents Demand Stronger Governance to Prevent Data Risks
This is a guest blog post by Fraser Dear, head of AI and innovation at BCN.
The Rise of AI Agents
AI agents are transforming workplaces by automating tasks like summarizing documents, searching through files, and drafting emails. While these tools boost productivity, they also introduce significant risks—especially when handling outdated, irrelevant, or confidential data. Organizations must balance efficiency with security to avoid unintended data exposure.
AI Agents vs. RPA
Unlike Robotic Process Automation (RPA), which follows strict predefined rules, AI agents powered by generative AI operate autonomously. They interpret context and intent, often pulling data from multiple sources through platforms such as Microsoft Power Platform and Copilot Studio. That flexibility carries risk: many of these agents lack built-in governance controls.
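To make the distinction concrete, here is a minimal Python sketch. It is illustrative only: `interpret_intent` stands in for a generative model that maps a free-form request to a tool choice, and every function and tool name is a hypothetical placeholder, not a real API.

```python
# RPA: a fixed, predefined sequence. The same steps run in the same
# order every time, with no interpretation of intent.
def rpa_expense_flow(invoice: dict) -> str:
    if invoice["amount"] > 1000:
        return "route_to_manager"
    return "auto_approve"

# An AI agent, by contrast, chooses its steps at runtime based on a
# model's reading of the request. Tool names here are hypothetical.
TOOLS = {
    "search_files": lambda query: f"results for {query!r}",
    "summarize": lambda text: text[:80] + "...",
    "draft_email": lambda brief: f"draft based on: {brief}",
}

def agent_loop(request: str, interpret_intent) -> str:
    # interpret_intent is a stub for a generative model; it returns
    # which tool to call and what input to give it.
    tool_name, tool_input = interpret_intent(request, list(TOOLS))
    # The agent can reach any data its tools can reach, which is
    # exactly why governance controls matter.
    return TOOLS[tool_name](tool_input)

# Example run with a trivial stand-in for the model:
print(agent_loop("find last quarter's salary report",
                 lambda req, tools: ("search_files", req)))
```

The point of the contrast: the RPA path can be audited by reading its rules, while the agent's path depends on what the model decides at runtime.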
The Shadow IT Challenge
With tools like Copilot Studio, employees can easily create AI agents without IT oversight. This leads to shadow IT, where unauthorized agents access sensitive data, such as HR files or salary details, without proper permissions. Left unchecked, these agents can spread misinformation or expose confidential information, risking GDPR violations and eroding trust.
Implementing Guardrails
To mitigate these risks, organizations must:
- Apply the principle of least privilege so each agent can read only the data it needs (a minimal sketch follows this list).
- Conduct penetration testing to identify vulnerabilities.
- Monitor agent activity with real-time alerts on sensitive access (see the second sketch below).
- Educate employees on responsible AI use.
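Least privilege can be enforced at the exact point where an agent reads data. The sketch below is a hedged illustration, not any platform's real API: the agent names, scope names, and `AccessDenied` exception are all assumptions made for the example.

```python
# Minimal least-privilege sketch: each agent has an explicit allow-list
# of data scopes, and every read is checked against it.

class AccessDenied(Exception):
    pass

# Hypothetical agents and the scopes they are permitted to read.
AGENT_SCOPES = {
    "meeting-summarizer": {"calendar", "meeting-notes"},
    "hr-assistant": {"hr-policies"},  # deliberately excludes salary data
}

def read_resource(agent_id: str, scope: str, resource: str) -> str:
    allowed = AGENT_SCOPES.get(agent_id, set())  # unknown agents get nothing
    if scope not in allowed:
        raise AccessDenied(f"{agent_id} may not read scope {scope!r}")
    return f"contents of {resource}"
```

Denying by default, so that unknown agents get an empty scope set, is what stops a shadow-IT agent from quietly inheriting broad access.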
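Real-time monitoring can start as an audit trail with alerting on sensitive scopes. Again a sketch under stated assumptions: the scope names are hypothetical, and in production the alert would go to a SIEM, webhook, or on-call channel rather than a logger.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

# Hypothetical scopes that should trigger an immediate alert.
SENSITIVE_SCOPES = {"salary", "hr-files", "customer-pii"}

def record_action(agent_id: str, scope: str, action: str) -> None:
    # Every agent action is logged for later audit...
    audit_log.info("%s performed %s on scope %s", agent_id, action, scope)
    # ...and touching a sensitive scope raises a real-time alert.
    if scope in SENSITIVE_SCOPES:
        audit_log.warning("ALERT: %s accessed sensitive scope %s",
                          agent_id, scope)

record_action("hr-assistant", "salary", "search")  # example: fires an alert
```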
Conclusion
AI agents offer immense productivity gains, but without proper governance, they pose serious security threats. Proactive measures—such as strict access controls and regular audits—are essential to prevent breaches and maintain data integrity.