Study reveals AI coding tools slow experienced developers by 19%
Experienced developers take 19% longer to complete tasks when using AI tools, even as they believe the tools are speeding them up, according to a new study.
A new study by Model Evaluation & Threat Research (METR) finds that experienced developers take 19% longer to complete tasks when using AI coding assistants such as Cursor Pro with Claude. The finding challenges the widespread belief that AI tools universally boost productivity in software development.
Key Findings
- Perception vs. Reality: Developers predicted AI would reduce their task completion time by 24%; even after experiencing the slowdown firsthand, they still believed AI had improved their productivity by 20% (a back-of-the-envelope comparison follows this list).
- Controlled Testing: The study tracked 16 seasoned open-source developers working on mature repositories (averaging 1M+ lines of code) in a randomized controlled trial (RCT). Tasks took 19% longer with AI tools.
- Low Acceptance Rate: Developers accepted less than 44% of AI-generated suggestions; 75% of participants reported reading every line of AI output, and 56% reported making major modifications to it.
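To make the perception gap concrete, here is a back-of-the-envelope calculation in Python. The 60-minute baseline task is a made-up figure; only the percentages come from the study:

```python
# Back-of-the-envelope comparison of forecast vs. measured effect.
# The 60-minute baseline is hypothetical; only the percentages are
# from the study.
baseline = 60.0                          # minutes without AI (assumed)

predicted = baseline * (1 - 0.24)        # forecasted 24% speedup
actual = baseline * (1 + 0.19)           # measured 19% slowdown

print(f"predicted: {predicted:.1f} min")                 # 45.6
print(f"actual:    {actual:.1f} min")                    # 71.4
print(f"perception gap: {actual - predicted:.1f} min")   # 25.8
```

On a one-hour task, the gap between what developers expected and what actually happened amounts to roughly 26 minutes, which helps explain why self-reports and measured outcomes diverged so sharply.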
Industry Implications
- Google’s DORA Report: The 2024 report aligns with METR’s findings, associating every 25% increase in AI adoption with a 1.5% dip in delivery throughput and a 7.2% drop in delivery stability (a naive extrapolation follows this list).
- Contradictory Studies: Earlier research involving MIT, Princeton, and UPenn reported developers completing tasks up to 55.8% faster with GitHub Copilot, but those experiments used shorter, self-contained tasks rather than large, mature codebases.
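Read literally, the DORA figures can be extrapolated, though only as a rough illustration: the report quotes effects per 25% increase in adoption, and nothing guarantees they scale linearly beyond that. A minimal sketch:

```python
# Naive linear extrapolation of the DORA 2024 estimates (illustrative
# only; the report states effects per 25% increase in AI adoption, and
# real effects are unlikely to scale linearly).
def dora_effect(adoption_increase_pct: float) -> tuple[float, float]:
    increments = adoption_increase_pct / 25.0
    return -1.5 * increments, -7.2 * increments  # throughput %, stability %

for adoption in (25, 50, 100):
    throughput, stability = dora_effect(adoption)
    print(f"+{adoption:>3}% adoption -> "
          f"throughput {throughput:+.1f}%, stability {stability:+.1f}%")
```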
Why the Slowdown?
- Contextual Challenges: AI tools struggle with large, mature codebases and intricate dependencies.
- Trust Deficit: Developers spend significant time reviewing and modifying AI-generated code; the toy time-accounting model after this list shows how that overhead can outweigh generation savings.
- Cognitive Load: While AI reduces some mental strain, it introduces new overhead, such as writing prompts, waiting on generations, and switching between authoring and reviewing code.
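A toy time-accounting model makes the mechanism visible. All the minute figures below are hypothetical; only the 44% acceptance rate and the 56% major-modification share come from the study, and applying the latter to accepted suggestions is a simplifying assumption:

```python
# Toy time-accounting model for one change (all time constants are
# hypothetical; the 44% and 56% rates are the study's, applied under
# simplifying assumptions).
manual = 10.0    # minutes to write the change by hand
prompt = 1.0     # minutes to prompt the tool and wait for output
review = 3.0     # minutes to read every line of the suggestion
rework = 6.0     # minutes of modification when major changes are needed

p_accept = 0.44  # share of suggestions accepted
p_rework = 0.56  # share of accepted suggestions needing major changes

# Every suggestion pays the prompt + review cost; rejected suggestions
# fall back to writing the change manually.
expected_ai = (prompt + review
               + p_accept * p_rework * rework
               + (1 - p_accept) * manual)

print(f"manual path: {manual:.1f} min")       # 10.0
print(f"AI path:     {expected_ai:.1f} min")  # ~11.1
```

Under these made-up constants, the AI path costs about 11.1 minutes against 10 minutes of manual work: even if generation itself is nearly free, the review and fallback costs can tip the balance.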
Strategic Takeaways
- Developer Satisfaction ≠ Productivity: Organizations must differentiate between improved coding experience and actual output speed.
- Structured Evaluation: Enterprises need rigorous frameworks to measure AI’s impact, including downstream effects like code churn and peer-review cycles (a minimal churn-measurement sketch follows this list).
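As a starting point for that kind of instrumentation, here is a minimal sketch that measures one downstream signal, code churn, from git history. The 50-commit window and the added-plus-deleted-lines definition of churn are illustrative choices, not anything prescribed by the study:

```python
# Minimal sketch of one downstream metric: code churn from git history.
# Churn here = lines added + deleted per commit over a window (a simple
# definition; teams define churn differently). Assumes `git` is on PATH
# and the script runs inside a repository.
import subprocess

def churn_since(rev: str = "HEAD~50") -> int:
    """Sum insertions + deletions across commits since `rev`."""
    out = subprocess.run(
        ["git", "log", f"{rev}..HEAD", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        parts = line.split("\t")
        # numstat lines are "added<TAB>deleted<TAB>path"; binary files
        # report "-" and are skipped by the isdigit checks.
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            total += int(parts[0]) + int(parts[1])
    return total

if __name__ == "__main__":
    print(f"churn over last 50 commits: {churn_since()} lines")
```

Tracking a number like this before and after an AI rollout, alongside review latency and defect rates, is one way to ground the "structured evaluation" the takeaway calls for.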
Despite the slowdown, 69% of participants continued using AI tools, suggesting value beyond pure speed. The study underscores the need for better AI integration and realistic expectations in software development.
For more details, see METR’s full study.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
A leading expert in AI ethics and responsible AI development with 13 years of research experience, Dr. Kim is a former member of Microsoft's AI Ethics Committee and now consults for multiple international AI governance organizations. She regularly contributes articles on AI ethics to top-tier journals such as Nature and Science.