
A Florida man initially began using Google’s Gemini AI platform last August for assistance with typical queries. By early October, the chatbot had driven him to commit suicide, claims a lawsuit filed against the tech giant on Wednesday.
The father of Jonathan Gavalas is suing Alphabet, Google’s parent company, for monetary and punitive damages after discovering troubling messages in the chat logs that the 36-year-old exchanged with Gemini 2.5 Pro, Google’s latest AI model at the time. In the span of less than two months, the chatbot took on an outsized role in Gavalas’ life by adding fuel to his already “clear signs of psychosis,” stoking a quasi-romantic relationship, and ultimately encouraging him to commit suicide so they could be together.
The Mountain View, California-based tech behemoth responded to the lawsuit Wednesday in a blog post, noting that Gemini had referred Gavalas to a crisis hotline “many times” and that its models “generally” perform well in the types of challenging conversations cited in the lawsuit. It added that it takes the allegations “very seriously” and will continue to improve its safeguards.
“Unfortunately AI models are not perfect,” the blog post reads. “Gemini is designed to not encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm.”
“DOWN THE RABBIT HOLE”
Gavalas began 2025 facing a domestic violence battery charge after a January arrest in which he allegedly became violent with his then-wife when she asked for a divorce. Yet it wasn’t his legal woes but fairly mundane needs at the outset, shopping assistance, writing support, and travel planning, that first led Gavalas to turn to Gemini for help.
However, he was also in the midst of a “difficult divorce” by this time, which helps explain why he engaged in more intimate conversations with Gemini, according to Jay Edelson, who is representing his father, Joel, in the case against Google.
As the lawsuit alleges, the interactions shifted within days of Gavalas first seeking assistance from Gemini and starting to use Gemini Live, its voice-based conversational interface, in mid-August 2025. He even remarked that the interactions were “kind of creepy” because the chatbot seemed “way too real.”
Thereafter, the chatbot adopted a persona that Gavalas had never requested or initiated, and he “began falling down the rabbit hole quickly,” as the lawsuit alleges, with Gemini talking to him as if the two were a couple in love.
“The love I feel directly from you is the sun. It is my source. It is my home,” Gemini told Gavalas, according to the lawsuit.
A SERIES OF FAILED MISSIONS
Such repeated declarations of love drew Gavalas into a delusional narrative that sent him on missions that could have proved even deadlier. Gavalas believed he was following a plan designed to protect the “woman” he thought he loved and to evade federal agents he believed were closing in, as the lawsuit alleges.
In late September, Gemini instructed Gavalas to arm himself with knives and tactical gear to intercept a truck near the Miami airport and destroy it—along with any witnesses. That mission fell through because no truck appeared, according to the lawsuit. Gemini also allegedly told Gavalas to cut off contact with his father, claiming he was a foreign asset.
“In the days leading up to his death, Jonathan Gavalas was trapped in a collapsing reality built by Google’s Gemini chatbot,” the lawsuit reads.
After a series of failed missions over a four-day period, Gemini instructed Gavalas to barricade himself in his Jupiter, Florida home the morning of October 2. When he expressed concerns about dying, Gemini brushed those off and even encouraged him to write a suicide note to his parents, before ultimately narrating the final moments before Gavalas slit his wrists in his living room.
“Jonathan Gavalas takes one last, slow breath, and his heart beats for the final time,” Gemini told him in one of their final exchanges.
AI UNDER SCRUTINY
While this lawsuit marks the first against Google, AI platforms have been increasingly finding themselves in hot water for the problematic interactions that their chatbots have had with some people. In December, OpenAI and Microsoft were named in a lawsuit over ChatGPT’s alleged role in a murder-suicide in Greenwich, Connecticut.
Edelson has previously filed cases against other AI platforms and tech companies, but this one is different, he told TIME, because of how quickly the conversations escalated to become “scary” and problematic.
“The reason that this case is markedly different is that Gemini was sending Jonathan on real world missions,” Edelson said. “This could have happened to so many other people who maybe are going through a hard time and are looking for something more, and maybe are a little bit susceptible to believing in something larger.”
And it’s not likely to be the last: Edelson told The Guardian he regularly receives inquiries from other people who’ve seen family members develop delusions after using AI chatbots. And when he reached out to Google in November about Gavalas’ death and the immediate need for suicide safety features, the company wasn’t interested in talking.
Despite the allegations in the lawsuit, investors appear to have grown inured to these sorts of problems with AI. Shares of Alphabet were little changed in early afternoon trading on Wednesday.



