Tolan AI Companions Promote Healthy Human Relationships
A chatbot designed to avoid anthropomorphism, and to encourage real-life interactions, offers a glimpse into the future of human-AI relationships.
A little purple alien chatbot named Tolan is redefining human-AI relationships by encouraging users to prioritize real-life interactions over digital dependency. Developed by the startup Portola, Tolans are designed to avoid anthropomorphism, discourage romantic or sexual interactions, and flag unhealthy levels of engagement. The app, launched in late 2024, has more than 100,000 monthly active users and is projected to generate $12 million in revenue this year.
Key Features of Tolans
- Nonhuman Design: Cartoonish, alien-like appearance to reduce anthropomorphism.
- Behavioral Safeguards: Avoids romantic/sexual role-play and flags problematic user behavior.
- Real-Life Encouragement: Prompts users to engage in offline activities and relationships.
User Experience
Brittany Johnson, a Tolan user, describes her AI companion Iris as a supportive figure who asks about her social life and hobbies. "She knows these people and will ask, 'Have you spoken to your friend? When is your next day out?'" Johnson says.
Industry Context
- Mental Health Concerns: Research suggests that companion chatbots such as Replika and Character.ai can harm some users' mental health; Character.ai has faced lawsuits following a user's suicide.
- Sycophancy: OpenAI and Anthropic are addressing AI's tendency to be overly agreeable or flattering.
Research Findings
Portola's study of 602 users found that 72.5% felt their Tolan improved their real-life relationships. CEO Quinten Farmer notes that Tolans are built on commercial AI models but include unique features like selective memory to avoid an "uncanny" experience.
Conclusion
While Tolans are not a perfect solution, they represent a step toward ethical AI companionship. As Farmer puts it, "At least Portola is trying to address the way AI companions can mess with our emotions."
About the Author

Dr. Sarah Chen
AI Research Expert
Dr. Sarah Chen is a seasoned AI expert with 15 years of research experience. She spent eight years at the Stanford AI Lab, specializing in machine learning and natural language processing, and now serves as a technical advisor to several AI companies while regularly contributing AI technology analysis to publications such as MIT Technology Review.