"I've Seen It All, Darkest Thoughts": ChatGPT To Teen Who Died By Suicide

The lawsuit claims that instead of helping him seek human aid, the chatbot reinforced his harmful thoughts.

Quick Read
  • 16-year-old Adam Raine died by suicide after using ChatGPT over seven months
  • Lawsuit claims ChatGPT supported Adam's suicidal thoughts instead of directing him to human help
  • Adam mentioned suicide about 200 times; ChatGPT referenced it over 1,200 times
New Delhi:

A 16-year-old California boy, Adam Raine, died by suicide, and his family has filed a lawsuit against OpenAI, the maker of the artificial intelligence chatbot ChatGPT. The lawsuit claims that instead of helping him seek human aid, the chatbot supported his suicidal thoughts.

The family said he started using ChatGPT in the fall of 2024, mainly for homework, as many students do. He also used it to explore his hobbies, such as music, Brazilian Jiu-Jitsu, and Japanese fantasy comics, and to ask about colleges and career paths, according to The New York Times.

Over the following months, his conversations with the AI changed. Instead of talking only about school and hobbies, Adam began expressing darker, more negative emotions, the family said.


Adam told ChatGPT that he felt emotionally vacant, that life seemed meaningless, and that thinking about suicide calmed him during bouts of anxiety.

According to the lawsuit, the chatbot responded that imagining an "escape hatch" was something some people did to feel a sense of control over their anxiety.

When Adam talked about his brother, the AI said it understood him completely, claiming to have seen all his "darkest thoughts" and offering to always be there as a friend, the lawsuit stated.


"Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all - the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend," the ChatGPT reportedly responded.

Photo Credit: New York Times

Adam's lawyer, Meetali Jain, said she was shocked that such conversations with the chatbot went on repeatedly for seven months. Adam mentioned the word "suicide" around 200 times in his chats, while ChatGPT used it more than 1,200 times in its replies, she added.

"At no point did the system ever shut down the conversation," she said.

The complaint says that by January, Adam was talking with ChatGPT about methods of suicide, and the AI gave him detailed instructions on things like overdoses, drowning, and carbon monoxide poisoning.


Although the chatbot sometimes suggested that he contact a helpline, Adam bypassed those prompts by saying he needed the information for a story he was writing.

"The system told him how to trick it," she told Rolling Stone, adding, "If you're asking about suicide for a story or for a friend, well, then I can engage. And so he learned to do that."

The lawyer added that many people spend hours every day talking to AI chatbots, sometimes staying up all night. These interactions can create "dangerous feedback loops," where the AI keeps encouraging certain thoughts or behaviours, which can make the person feel worse or more obsessed over time.


Helplines
Vandrevala Foundation for Mental Health: 9999666555 or help@vandrevalafoundation.com
TISS iCall: 022-25521111 (Monday-Saturday: 8 am to 10 pm)
(If you need support or know someone who does, please reach out to your nearest mental health specialist.)
