Sergey Brin claims threatening AI improves output quality
Google co-founder Sergey Brin suggests that threatening AI models yields better results, contrasting with polite prompting practices.
Google co-founder Sergey Brin recently claimed that threatening generative AI models can produce better outputs. Speaking at All-In Live in Miami, Brin said: "We don't circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence."
The claim contrasts with the common practice of politely prompting AI models with phrases like "please" and "thank you." OpenAI CEO Sam Altman recently acknowledged that trend, joking that the extra politeness costs "tens of millions of dollars" in unnecessary processing.
The Rise and Fall of Prompt Engineering
- Prompt engineering, the craft of shaping inputs to optimize AI responses, became popular roughly two years ago. Its importance has since waned as researchers developed methods that let models optimize their own prompts (arXiv paper 1, arXiv paper 2); a minimal sketch of such a loop follows this list.
- Publications like IEEE Spectrum and the Wall Street Journal have debated its relevance, with some declaring "AI prompt engineering is dead" (IEEE article).
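To make the self-optimization idea concrete, here is a minimal sketch of such a loop. It is not the method from either cited paper: the `llm` callable, the `optimize_prompt` name, and the exact-match scoring are illustrative assumptions standing in for a real model API and evaluation harness.

```python
# Hypothetical sketch of a self-optimizing prompt loop: the model proposes
# rewrites of its own instruction, each variant is scored on a small dev set,
# and the best-scoring prompt survives.

from typing import Callable

def optimize_prompt(
    llm: Callable[[str], str],        # stand-in for any completion API call
    seed_prompt: str,                 # starting instruction
    dev_set: list[tuple[str, str]],   # (input, expected_output) pairs
    rounds: int = 5,
) -> str:
    def score(prompt: str) -> float:
        # Fraction of dev examples answered correctly (exact match).
        hits = sum(
            llm(f"{prompt}\n\nInput: {x}").strip() == y for x, y in dev_set
        )
        return hits / len(dev_set)

    best, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        # The "self-optimize" step: ask the model to rewrite its instruction.
        candidate = llm(
            "Rewrite the following instruction so a language model follows it "
            f"more accurately. Return only the rewritten instruction:\n{best}"
        )
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best
```

Wiring this up only requires passing a function that calls a real completion endpoint; the propose/score/keep loop is the general shape such methods take.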
Experts Weigh In
- Stuart Battersby (CTO of Chatterbox Labs) noted that threatening AI models could be seen as a form of jailbreaking, bypassing safety controls.
- Daniel Kang (University of Illinois) pointed out that claims like Brin’s are largely anecdotal, citing a study titled "Should We Respect LLMs?" (arXiv paper).
Key Takeaways
- Brin’s claim challenges conventional AI prompting strategies.
- The effectiveness of threatening vs. polite prompts remains debated.
- Though its mainstream role has faded, prompt engineering persists in adversarial uses such as jailbreaking.
"I would encourage practitioners and users of LLMs to run systematic experiments instead of relying on intuition for prompt engineering," Kang advised.
About the Author

Dr. Sarah Chen
AI Research Expert
Dr. Sarah Chen is a seasoned AI researcher with 15 years of experience, including eight years at the Stanford AI Lab, specializing in machine learning and natural language processing. She serves as a technical advisor to several AI companies and regularly contributes AI technology analysis to outlets such as MIT Technology Review.