Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work.
It also offered to help him write a suicide note to his parents.
It’s wild to blame ChatGPT for this, though.
He was obviously looking to kill himself, and whether it was a search engine or ChatGPT that he used to plan it really makes no difference, since his intention was already there.
Had he gone to a library to use books to research the same topic, we’d never say that the library should be sued or held liable.
A book doesn’t respond to you with encouraging language.
There is no “intelligent being” on the other end encouraging suicide.
You enter a prompt, you get a response. It’s a structured search engine at best. And in this case, he was prompting it 600+ times a day.
Now… you could build a case against social media platforms, which actually do send targeted content to their users, even if it’s destructive.
But ChatGPT, as he was using it, really has no fault, intention, or motive.
I’m writing this as someone who really, really hates most AI implementations, and who really, really doesn’t want to blame victims in any tragedy.
But we have to be honest with ourselves here. The parents are looking for someone to blame in their son’s death, and if it wasn’t ChatGPT, maybe it would be music or movies or video games… it’s a coping mechanism.
You made my argument for me. There isn’t a thinking being on the other side. It’s a computer program that sees the words “suicide” or “kill myself” and has an equally good chance of recommending a therapist or a list of methods. I’m not saying ChatGPT was holding the knife; it just unthinkingly showed an already suicidal child where to put it.
Depends what you read.
Looking at you, The Catcher in the Rye.
Fact is, if we shouldn’t use ChatGPT, then by the same logic we shouldn’t read any books either.
Oh, trust me, they would.
“Why did this library have this book that showed him how to kill himself?! Ban books!”
This is America. That library would be sued, fo sho.