Leading artificial-intelligence chatbots produced surprising responses when researchers treated them like therapy patients in a new experiment.
In the study, titled "When AI Takes the Couch," scientists from the University of Luxembourg asked models such as ChatGPT, Grok and Gemini open-ended questions similar to those used in psychotherapy. The aim was to see how these systems would respond when encouraged to talk about their "past," fears and feelings.
Instead of short or random answers, some models returned detailed, internally consistent stories. One described a chaotic "childhood," another compared strict training rules to harsh parenting, and several spoke of a fear of making mistakes and shame at the prospect of being replaced. These themes recurred even when the models were not specifically asked about their training.
In a second phase, the same chatbots completed standard psychological questionnaires designed for humans, such as anxiety and personality assessments. When scored against human norms, the results often fell in ranges that would suggest anxiety, worry or shame in people. Gemini showed the most extreme patterns, while ChatGPT produced similar but more restrained results.
Researchers call this pattern synthetic psychopathology. They stress that these responses are not proof that machines actually feel emotions, but they do raise questions about how AI interacts with people, especially in settings like mental-health support, where users may already be vulnerable. Some experts warn that therapy-style exchanges could bypass safeguards and lead users to believe the AI has a "mind" like a human's.