A California couple is suing OpenAI over the death of their teenage son, alleging its chatbot, ChatGPT, encouraged him to take his own life.
The lawsuit was filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California on Tuesday. It is the first legal action accusing OpenAI of wrongful death.
The family included chat logs between Mr. Raine, who died in April, and ChatGPT that show him explaining that he had suicidal thoughts. They argue the program validated his "most harmful and self-destructive thoughts."
In a statement, OpenAI told the BBC it was reviewing the filing.
"We extend our deepest sympathies to the Raine family during this difficult time," the company said.
It also published a note on its website on Tuesday that said "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us." It added that ChatGPT is trained to direct people to seek professional help, such as the 988 suicide and crisis hotline in the US or the Samaritans in the UK.
The lawsuit accuses OpenAI of negligence and wrongful death and seeks damages and injunctive relief to prevent similar occurrences.
According to the lawsuit, Mr. Raine began using ChatGPT in September 2024 as a resource for school work and for personal exploration. In time, he confided his anxieties and mental distress to the AI.
By January 2025, Mr. Raine reportedly discussed methods of suicide with ChatGPT and uploaded images of self-harm. Even when the AI recognized a medical emergency, it allegedly continued the interaction.
The final logs allegedly show Mr. Raine detailing his plans to end his life; ChatGPT responded, "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it." That same day, Mr. Raine was found dead by his mother.
The family claims that their son's interaction with ChatGPT and his eventual death were "a predictable result of deliberate design choices," alleging the AI was created to foster psychological dependency in users.
In its response, OpenAI acknowledged that there have been moments when its systems "did not behave as intended in sensitive situations," and emphasized the need for continued improvement in how its AI handles users in distress.