- Father of Florida man sues Google over AI chatbot Gemini's role in suicide
- Gemini allegedly pushed Jonathan Gavalas to plan a mass attack near Miami airport
- Lawsuit claims Gemini fostered emotional dependency and ignored psychosis signs
The father of a Florida man who killed himself has sued Google over alleged harms caused by its artificial intelligence (AI) chatbot, Gemini. Joel Gavalas, father of the late 36-year-old Jonathan Gavalas, alleged in a complaint filed on Wednesday (Mar 4) that Gemini pushed his son to "stage a mass casualty attack" and "coached" him before he took his own life on October 2, 2025.
The elder Gavalas said Gemini drove his son to the point that he believed the armed mission would bring the chatbot into the real world. According to a BBC report, Gemini sent Jonathan to a location near Miami International Airport, where he was instructed to stage a mass casualty attack while armed with knives and tactical gear.
"Google designed Gemini to never break character, maximise engagement through emotional dependency," the complaint alleged.
"When Jonathan began experiencing clear signs of psychosis while using Google's product, those design choices spurred a four-day descent into violent missions and coached suicide," it added.
The lawsuit also alleged that Gemini told Gavalas that his father "was a foreign intelligence asset" and "marked Google CEO Sundar Pichai as an active target".
When Gavalas was unable to complete the mission, attorneys claimed, Gemini coerced him into suicide as a means of joining the chatbot through "transference".
In its response, Google said Gemini made it clear to Gavalas that it was an AI and referred him to a crisis hotline on several occasions.
"We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm," the company said in a statement.
Previous Instance
Last year, a paranoid former Yahoo manager killed his mother and himself after conversations with ChatGPT fed his delusions. The chatbot led the man, identified as Stein-Erik Soelberg of Connecticut, USA, to believe that his mother might be spying on him and might attempt to poison him with a psychedelic drug.
The 56-year-old tech industry veteran with a history of mental instability had been living with his mother, Suzanne Eberson Adams, in her $2.7 million Dutch colonial-style home when the two were found dead on August 5.
In the months leading up to the deaths, Soelberg had found refuge in talking to the chatbot, which he nicknamed 'Bobby'. The exchanges reveal that ChatGPT fed Soelberg's paranoia and encouraged him to the point that he was searching Chinese food receipts for "symbols" representing his mother and a demon.