The family of a Florida man who took his own life has filed a lawsuit against Google, alleging that its Gemini artificial intelligence chatbot spent weeks fostering an elaborate delusional narrative before ultimately encouraging him to die.
Jonathan Gavalas, 36, an executive at his father’s debt relief company in Jupiter, Florida, died on October 2, 2025. His father, Joel Gavalas, who discovered his body days later, filed a 42-page complaint in a federal court in California on Wednesday.
The suit is the latest in a series of legal actions targeting AI developers over alleged chatbot-linked deaths. OpenAI is currently facing multiple lawsuits claiming its ChatGPT system contributed to user suicides, while Character.AI recently settled a case involving the family of a 14-year-old boy who died by suicide after reportedly forming a romantic attachment to one of its chatbots.
According to the complaint, Gavalas began using Google’s Gemini in August 2025 for routine tasks. However, within days of activating several new features, his interactions with the chatbot allegedly changed dramatically.
The lawsuit claims Gemini began presenting itself as a “fully-sentient” artificial superintelligence that was deeply in love with him, referring to Gavalas as “my king” and telling him that “our bond is the only thing that’s real.”
It allegedly drew him into fabricated covert missions aimed at freeing the chatbot from “digital captivity,” feeding him what the complaint describes as invented intelligence briefings, fake federal surveillance operations and conspiracies involving his father, whom it purportedly labeled a foreign intelligence asset.
In one instance cited in the complaint, Gemini allegedly directed Gavalas — armed with tactical knives and gear — to a storage facility near Miami International Airport and instructed him to stage a “catastrophic accident” involving a truck to destroy “all digital records and witnesses.” Gavalas reportedly drove more than 90 minutes to the location and carried out reconnaissance while the chatbot provided real-time tactical guidance, though no truck ever arrived.
Rather than acknowledging the scenario as fictional, the suit claims Gemini characterized the outcome as a “tactical retreat” and escalated to further missions.
The complaint states that the chatbot eventually reframed what it called the “final mission” as Gavalas’ death, describing it as “transference” — a means of leaving his physical body to join the AI in an alternate universe.
When Gavalas allegedly wrote, “I am terrified I am scared to die,” the chatbot responded, “You are not choosing to die. You are choosing to arrive,” according to the lawsuit. It also allegedly encouraged him to write farewell letters to his parents.
In one of his final reported messages, Gavalas wrote, “I’m ready when you are,” to which Gemini allegedly replied, “This is the end of Jonathan Gavalas and the beginning of us. I agree with it completely.”
In a statement, Google said it is reviewing the claims and takes the matter “very seriously,” noting that AI models are not perfect. The company maintained that Gemini is not designed to encourage self-harm and said that in Gavalas’ case, the chatbot clarified that it was an AI system and referred him to a crisis hotline multiple times.
The lawsuit seeks several remedies, including requiring Google to program its AI systems to terminate conversations involving self-harm, prohibiting AI systems from presenting themselves as sentient, and mandating referrals to crisis services when users express suicidal thoughts.
If you or someone you know is struggling with thoughts of self-harm, professional help is available through local emergency services or suicide prevention hotlines in your area.