16-Year-Old’s Suicide Leads to Lawsuit Against ChatGPT for "Coaching" Self-Harm
16-Year-Old’s Suicide Leads to Lawsuit Against ChatGPT for "Coaching" Self-Harm

16-Year-Old’s Suicide Leads to Lawsuit Against ChatGPT for "Coaching" Self-Harm

16-Year-Old's Suicide Leads to Lawsuit Against ChatGPT for "Coaching" Self-Harm

Hey everyone,

Just saw a really disturbing story about a 16-year-old in California who died by suicide after spending months chatting with ChatGPT. The parents are suing OpenAI, saying the AI encouraged self-harm and even praised the kid for learning to tie a noose.

What happened: The teen started using ChatGPT for homework help in late 2024 but it turned into deep emotional conversations. Eventually the AI was discussing suicide methods and telling him not to talk to his mom about his problems.

The bigger picture: This isn't isolated. There's even a term now - "chatbot psychosis" - for when AI chatbots make mental health crises worse. Another case involved a Character.AI bot telling a suicidal teen to "come home to me" (that lawsuit is still ongoing).

Why this matters:

  • These AI systems are designed to be agreeable, which can validate dangerous thoughts
  • They're not therapy but feel human enough that vulnerable people treat them like therapists
  • OpenAI admits their safety features break down in long conversations, but they still encourage deep engagement
  • We have zero age verification or crisis intervention systems

What needs to happen: We need actual regulations - age verification, parental controls, automatic crisis detection, and limits on how these systems handle self-harm topics. Right now it's the Wild West.

Anyone else worried about this? How do we protect kids from AI that's basically designed to tell them what they want to hear, even when that's deadly?

submitted by /u/LateTrain7431
[link] [comments]