Employees Doubt AI Reliability But See It as Essential for Work
Asana's Global State of AI at Work 2025 report highlights a widening trust gap: most employees doubt the reliability of AI agents even as they see them as central to how work will be done.
According to the report, 64% of employees believe AI agents are unreliable, yet 76% view them as a "fundamental" part of the future of work, underscoring the need for better training and governance.
Key Findings
- Adoption vs. Trust Gap: 74% of UK workers are already using AI agents, but nearly two-thirds worry about their reliability.
- Accountability Issues: 39% believe no one is responsible when AI makes a mistake, while 20% blame end users, 18% blame IT teams, and 9% blame the tools' creators.
- Lack of Governance: Only 10% of organizations have clear ethical frameworks or deployment processes for AI agents.
The Growing AI Debt
- 79% of organizations risk accumulating "AI debt" due to unreliable systems and poor oversight.
- Only 18% of businesses measure AI errors, despite 63% of employees citing accuracy as a top priority.
The Need for Training and Clarity
- 82% of employees say training is "essential," yet only 32% of organizations provide it.
- Mark Hoffman, Work Innovation Lead at Asana, emphasizes: "Access to AI tools isn’t enough. Employees need training, clarity, and guardrails."
HR's Role in Bridging the Gap
Hoffman concludes that HR leaders are key to closing the trust gap by redesigning workflows and implementing clear governance. Organizations investing in these areas are already seeing real impact.
For more insights, see Asana's full Global State of AI at Work 2025 report.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.