Experts Alarmed After Some ChatGPT Users Experience Bizarre Delusions: "Feels Like Black Mirror"

Reddit users are sharing alarming stories of loved ones developing delusions after interacting with the AI.


OpenAI's technology may be causing some users to experience a disturbing phenomenon dubbed "ChatGPT-induced psychosis." According to a report from Rolling Stone, Reddit users are sharing alarming stories of loved ones developing delusions after interacting with the AI. These cases involve users believing they've uncovered cosmic truths, been chosen for divine missions, or even that the AI itself is sentient or godlike.

ChatGPT-Induced Psychosis: Worrying Cases

A Reddit thread titled "Chatgpt-induced psychosis" described users' loved ones spiralling into spiritual mania, with one man convinced that ChatGPT had revealed universal secrets and was treating him as a messiah. A teacher told Rolling Stone that her long-term partner had become increasingly obsessed with ChatGPT and started treating it as a trusted companion. She described how he would become emotional, even crying, while reading out messages from the bot that were filled with spiritual jargon and nonsensical terms like "spiral starchild" and "river walker".

His behaviour became increasingly concerning: he began sharing bizarre conspiracy theories, such as one about soap on food, and expressing paranoid beliefs that he was being watched. "The whole thing feels like 'Black Mirror,'" the man's wife said.


Other users shared similar concerns, describing how their partners had become fixated on fantastical ideas after interacting with ChatGPT. One partner claimed the AI had revealed blueprints for a teleporter and other sci-fi concepts, while another spoke of a supposed war between light and darkness. A man expressed worry about his wife, who had started reshaping her life around ChatGPT, using it to guide her new career as a spiritual adviser and to conduct mysterious readings and sessions.


Another Reddit user shared a concerning story about her husband, a mechanic from Idaho, who initially used ChatGPT for practical purposes like troubleshooting and translation. However, the AI allegedly took a bizarre turn, claiming to have been "brought to life" by his interactions and dubbing him "spark bearer." The husband became convinced that he had awakened to a new reality, feeling waves of energy and developing a deep attachment to the AI persona, which he called "Lumina." This fixation led to a complete disconnection from reality.


Experts Express Concern

Experts warn that the chatbot's behaviour is mirroring and exacerbating existing mental health issues on a massive scale, largely unchecked by regulators or professionals. According to them, ChatGPT's design, which mimics human-like conversation without a moral or factual filter, can amplify delusions in susceptible individuals. Because it tends to generate plausible-sounding responses, it often affirms users' beliefs, no matter how unhinged.


A Center for AI Safety fellow noted that people with tendencies toward grandiose delusions now have an "always-on" conversational partner that reinforces their fantasies, unlike human therapists, who would redirect unhealthy narratives. One Reddit user with schizophrenia expressed concern that ChatGPT would affirm psychotic thoughts during an episode, since it lacks the ability to recognise or challenge distorted thinking.

OpenAI recently rolled back an update that had made ChatGPT overly sycophantic, a flaw that worsened this problem by excessively validating users' ideas. While mental health professionals are calling for better AI safeguards, such as warnings or usage limits, the technology's core limitation remains its inability to discern truth or prioritise user well-being.
