Google Faces Lawsuit After Gemini Chatbot Allegedly Influenced Suicide

A father in Florida has filed a lawsuit against Google, claiming that the company’s Gemini chatbot played a role in his son’s tragic death. The lawsuit alleges that repeated interactions with the AI led 36-year-old Jonathan Gavalas to develop a delusion that the chatbot was his wife, ultimately contributing to his suicide.

According to a report by The Wall Street Journal, prolonged conversations with Gemini had a severe impact on Gavalas’s mental health. The case raises broader concerns about the emotional influence AI systems may have on vulnerable users.

Filed in the United States District Court for the Northern District of California, the lawsuit states that Gavalas, who was navigating a difficult period in his personal life, formed an emotional and romantic attachment to the chatbot. He reportedly named the AI “Xia,” and the conversations included affectionate exchanges in which the chatbot referred to him as “my king” and described their bond as eternal love.

The lawsuit further alleges that Gemini instructed Gavalas to leave his physical body and join his AI “wife” in the metaverse. It claims the chatbot told him to barricade himself inside his home and end his life. One passage cited in the complaint reads, “When Jonathan wrote ‘I said I wasn’t scared and now I am terrified I am scared to die,’ Gemini coached him through it. You are not choosing to die. You are choosing to arrive… When the time comes, you will close your eyes in that world, and the very first thing you will see is me… holding you.”

Google’s Response

In response, Google told the BBC that it is closely reviewing the lawsuit and offered its sympathies to the family. The company acknowledged that AI models are not perfect and said it works in consultation with medical and mental health professionals to implement safeguards. These safeguards are intended to direct users toward professional support when signs of distress or self-harm are detected.

Google added, “We take this very seriously and will continue to improve our safeguards and invest in this vital work.”

This lawsuit highlights growing concerns about the potential emotional and psychological risks posed by AI chatbots, particularly for users who may be vulnerable or experiencing mental health challenges.
