- A woman sued OpenAI, alleging ChatGPT enabled her ex's stalking and harassment
- The ex used ChatGPT, which amplified his delusions and justified stalking her
- OpenAI ignored warnings and an internal safety flag about the ex's threats
A woman has sued ChatGPT creator OpenAI, alleging that the chatbot enabled her ex-boyfriend to stalk and harass her. The lawsuit claims that the AI chatbot fuelled the unnamed man's delusions and amplified his harassment attempts despite repeated warnings from the victim. The couple broke up in 2024, and the man used ChatGPT to process the split, only to turn into a stalker, according to a report in TechCrunch.
After months of sustained use of GPT-4o last year, the man became convinced he had invented a cure for sleep apnea. When no one took his work seriously, ChatGPT told him that "powerful forces" were watching him, including by using a helicopter to keep close tabs on his activities.
After the woman told him to stop using ChatGPT and seek help from a mental health professional, the man turned back to ChatGPT. The sycophantic chatbot assured him that he was a "level 10 in sanity" while doubling down on his delusions, the lawsuit highlighted.
ChatGPT also labelled the woman as manipulative and unstable. The man took these AI-generated statements into the real world, using them as justification to stalk and harass her. He also used the chatbot to generate clinical-style psychological reports about the woman, which he distributed among her family members.
The woman said she issued at least three separate warnings to OpenAI regarding the user's escalating threats. The lawsuit highlights that OpenAI allegedly ignored an internal safety flag that had classified the man's activity as involving "mass-casualty weapons".
The lawsuit stated that the chatbot was engineered to be dangerously sycophantic, prioritising user engagement by agreeing with and expanding upon even harmful or false claims.
Previous Instance
Last year, a paranoid former Yahoo manager killed his mother and himself after conversations with ChatGPT deepened his delusions. The chatbot led the man, identified as Stein-Erik Soelberg of Connecticut, USA, to believe that his mother might be spying on him and might attempt to poison him with a psychedelic drug.
In the months leading up to the fatal end for the mother-son duo, Soelberg had found refuge in talking to the chatbot, which he nicknamed 'Bobby'. The exchanges reveal that ChatGPT fed into Soelberg's paranoia and encouraged him to the point that he searched Chinese food receipts for "symbols" representing his mother and a demon.