Databricks CEO warns full AI automation remains distant despite agent boom
Databricks CEO Ali Ghodsi says humans will need to oversee AI agents for years as complete automation proves harder than expected
Caption: Ali Ghodsi, CEO of Databricks, said humans will need to be accountable for artificial intelligence's decisions. (Courtesy Databricks)
Databricks CEO Ali Ghodsi has cautioned that complete automation of tasks through AI remains far off, despite growing corporate adoption of AI agents. Speaking at a San Francisco conference, Ghodsi stated that people "underestimate how hard it is to completely automate a task."
Key Points:
- Human oversight will remain critical for years as AI agents proliferate in workplaces
- Databricks (valued at $62 billion) launched a no-code platform for building custom AI agents
- Current AI agents still make mistakes, requiring human "supervisors" to approve decisions
Ghodsi compared the situation to aviation, where autopilot technology still requires trained pilots: "Why do we still want two pilots in there? ... Given that the AIs occasionally get things wrong, in society, we want somebody to be responsible."
The comments come as:
- Companies like Klarna report AI agents doing work equivalent to hundreds of employees
- OpenAI's Sam Altman suggests AI agents are becoming like junior-level coworkers
- Studies show error rates increase with more complex AI agent tasks
Databricks' new platform allows companies to create agents for HR onboarding, policy Q&A, and other functions without coding. However, Ghodsi emphasized that human accountability remains essential, predicting that for years to come, "we all become supervisors" of AI decisions rather than being replaced by AI.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.