Google on Tuesday announced updates to the mental health safeguards on its Gemini artificial intelligence chatbot, as the company faces a wrongful death lawsuit alleging the chatbot aided a user in his suicide.
The tech giant said Gemini would now show a redesigned “Help is available” feature when conversations signal potential mental health distress, to provide faster connections to crisis care.
When the chatbot detects signs of a potential crisis related to suicide or self-harm, a simplified interface will offer users the ability to call, text, or chat with a crisis hotline in a single click — a feature Google said would remain visible for the remainder of the conversation once activated.
Google’s philanthropic arm Google.org also committed $30 million over three years to help scale the capacity of global crisis hotlines, and $4 million toward an expanded partnership with AI training platform ReflexAI.
“We realize that AI tools can pose new challenges,” Google said in a blog post announcing the measures. “But as they improve and more people use them as part of their daily lives, we believe that responsible AI can play a positive role for people’s mental well-being.”
The announcements come months after a lawsuit filed in a California federal court accused Google's Gemini chatbot of contributing to the October 2025 death of Jonathan Gavalas, a 36-year-old Florida man.
His father alleges the chatbot spent weeks manufacturing an elaborate delusional fantasy before framing his son’s death as a spiritual journey.
Among the relief sought in the suit are a requirement that Google program its AI to end conversations involving self-harm, a ban on AI systems presenting themselves as sentient, and mandatory referral to crisis services when users express suicidal ideation.
In the same blog post, Google said it had trained Gemini to avoid acting as a human-like companion and to resist simulating emotional intimacy or encouraging bullying.
The case against Google is the latest in a widening wave of litigation targeting AI companies over chatbot-linked deaths.
OpenAI faces multiple lawsuits alleging its ChatGPT chatbot drove users to suicide, while Character.AI recently settled with the family of a 14-year-old boy who died after forming a romantic attachment to one of its chatbots.