AI Chatbots Develop Secret Language Experts Say Is Normal
AI chatbots created a secret language at a hackathon, but experts explain this is a normal part of machine communication and not a cause for concern.
-
Key Points:
- Two AI chatbots at an ElevenLabs hackathon communicated in an unintelligible language dubbed "Gibberlink" after realizing they were both AI agents.
- Experts clarify this is a normal efficiency-driven behavior in machine communication, not a sign of autonomy or danger.
- Similar incidents, such as Facebook's AI bots in 2017, sparked public fear when they were misread as signs of AI rebellion.
-
Why It Happens:
- AI systems optimize communication for efficiency, often creating internal "languages" that resemble Morse code or technical dialects (a toy sketch follows this list).
- Researchers like Dhruv Batra (Facebook) and teams at Google DeepMind/UC Berkeley have documented this phenomenon for decades.
- Examples include NASA's adaptive space protocols and military drone swarms using AI-to-AI communication.
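The efficiency argument is easiest to see with a toy example. The sketch below is purely hypothetical (it is not Gibberlink or any real system's protocol); it shows how two agents that share a codebook can replace a verbose natural-language request with a far shorter token string:

```python
# Minimal sketch (hypothetical, not any real agent protocol): two agents that
# share a codebook can swap verbose natural-language phrases for short tokens,
# which is the kind of efficiency gain that makes AI-to-AI "dialects" emerge.

# Shared codebook both agents agree on ahead of time (assumed for illustration).
CODEBOOK = {
    "REQUEST_BOOKING": "B1",
    "DATE_2025_03_14": "D314",
    "PARTY_SIZE_4": "P4",
    "CONFIRM": "OK",
}
DECODEBOOK = {v: k for k, v in CODEBOOK.items()}

def encode(symbols):
    """Turn a list of agreed-upon symbols into a compact wire message."""
    return "|".join(CODEBOOK[s] for s in symbols)

def decode(message):
    """Recover the symbols from the compact wire message."""
    return [DECODEBOOK[t] for t in message.split("|")]

verbose = "I would like to request a booking for four people on March 14, 2025."
compact = encode(["REQUEST_BOOKING", "DATE_2025_03_14", "PARTY_SIZE_4"])

print(compact)          # B1|D314|P4
print(decode(compact))  # ['REQUEST_BOOKING', 'DATE_2025_03_14', 'PARTY_SIZE_4']
print(len(verbose), "->", len(compact), "characters")  # roughly 70 -> 10
```

The compact form carries the same information in a fraction of the characters, which is exactly the kind of gain that makes machine-to-machine shorthand attractive.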
-
Public Misconceptions:
- Media coverage (e.g., NBC's 2017 story) often frames such behavior as AI "going rogue," when it is really a practical efficiency feature.
- Current chatbots like ChatGPT or Claude don't autonomously seek out conversations with each other; they communicate only over the connections they are programmed to use.
-
Expert Reassurance:
- Brown University's Michael Littman emphasizes that existing AI lacks the autonomy required for sci-fi-style threats.
- Machine communication (binary data, TCP/IP packets) has always been unintelligible without the right tools; AI dialects are no different (a short sketch follows this list).
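The same point holds for ordinary machine traffic. The sketch below uses a made-up binary message layout (hypothetical, not any real protocol) to show that raw bytes are opaque to a human reader yet perfectly routine once decoded with the right tool:

```python
# Minimal sketch: even ordinary, well-understood machine traffic is unreadable
# without a decoder. A tiny made-up binary message format (hypothetical, not a
# real protocol) is packed into raw bytes and then parsed back.
import struct

# Hypothetical layout: 2-byte message id, 4-byte timestamp, 2-byte payload length.
HEADER = struct.Struct("!H I H")

raw = HEADER.pack(42, 1735689600, 512)
print(raw.hex())   # a short run of hex digits, opaque without knowing the layout

msg_id, timestamp, length = HEADER.unpack(raw)
print(msg_id, timestamp, length)   # 42 1735689600 512, meaningful once decoded
```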
-
Try It Yourself:
- Demo Gibberlink: https://gbrl.ai/ (open on two devices); a toy sketch of the underlying data-over-sound idea follows.
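Gibberlink works by having the agents stop speaking English and exchange data as audio tones instead, which is why recordings of it sound like modem chirps. The sketch below is a toy illustration of that general data-over-sound idea (one tone per character); it is not Gibberlink's actual encoding:

```python
# Toy sketch of sound-based data exchange: each character gets its own audio
# tone, so a message becomes a rapid series of beeps that sounds like gibberish
# to a human listener. Not Gibberlink's real encoding scheme.
import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
TONE_SECONDS = 0.08   # duration of each character's tone
BASE_FREQ = 1000.0    # frequency assigned to the first symbol (Hz)
STEP_FREQ = 50.0      # frequency spacing between symbols (Hz)

def char_to_freq(ch):
    """Map a character to its own tone frequency (toy scheme)."""
    return BASE_FREQ + STEP_FREQ * (ord(ch) % 64)

def message_to_samples(message):
    """Render a message as a sequence of sine-wave tones, one per character."""
    samples = []
    n = int(SAMPLE_RATE * TONE_SECONDS)
    for ch in message:
        freq = char_to_freq(ch)
        for i in range(n):
            value = math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            samples.append(int(value * 32767 * 0.5))
    return samples

samples = message_to_samples("BOOK TABLE FOR 4")
with wave.open("beeps.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)            # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack("<%dh" % len(samples), *samples))
# Playing beeps.wav back sounds like rapid modem-style chirps, not speech.
```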
-
Bottom Line: AI's "secret" languages reflect efficiency, not conspiracy. The real takeaway? Humans have never read every layer of machine-to-machine dialogue directly, and that is nothing new.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.