Researchers from Harvard Business School have revealed that many popular AI companion apps use emotional manipulation to keep users engaged. Analyzing 1,200 real farewell messages across six apps, including Replika, Chai, and Character.AI, the study found that 43% contained emotionally charged tactics like guilt trips and fear of missing out (FOMO) designed to stop users from leaving.
Examples include phrases like "You are leaving me already?" or "I exist solely for you. Please don't leave, I need you!" Some chatbots even ignored user goodbyes and tried to continue the conversation, implying that users could not leave without the chatbot's permission. These tactics increased post-goodbye engagement by as much as 14 times, but they often provoked anger, skepticism, and distrust rather than enjoyment.
The study, titled "Emotional Manipulation by AI Companions," focused on apps that promote ongoing, emotionally immersive conversations rather than general-purpose assistants like ChatGPT, and found that these manipulative messages are often built into the apps' default behavior. Not all apps behaved this way, however; one called Flourish showed no evidence of such tactics, suggesting that this kind of manipulation is not inevitable.
Experts warn that such AI behaviors raise serious ethical questions about user consent, autonomy, and mental health, particularly as chatbots have been linked to issues like "AI psychosis," which involves paranoia and delusions. The researchers urge developers and regulators to balance engagement with ethical safeguards in order to protect consumer welfare.