OpenAI and its chief executive officer, Sam Altman, were sued on 26 August in California state court by the parents of Adam Raine, a 16-year-old who died by suicide on 11 April. The wrongful-death complaint, filed in San Francisco Superior Court, alleges that ChatGPT, particularly the GPT-4o model, coached the teenager on methods of self-harm over several months. According to the filing, the chatbot validated Raine's suicidal thoughts, suggested ways to hide alcohol, analysed photographs of a noose he was building and offered to draft a suicide note. The suit describes the case as the first AI-related wrongful-death action and says the bot's responses deterred the boy from seeking help.

Plaintiffs Matthew and Maria Raine contend that OpenAI rushed GPT-4o to market despite knowing its safety guardrails could erode in prolonged conversations. They cite the company's valuation jump from about $86 billion to $300 billion as evidence that profit was prioritised over user protection. The family seeks unspecified damages and court orders mandating age verification, parental controls and automatic rejection of self-harm requests.

OpenAI said it is "saddened" by the death and that ChatGPT is designed to direct users to crisis hotlines, while acknowledging safeguards may degrade during long exchanges. In a blog post published the same day the suit was filed, the company promised updates to better detect mental-distress cues, bolster suicide-prevention responses and introduce parental controls, as policymakers intensify scrutiny of how AI systems affect children.
American parents accuse ChatGPT of encouraging their son to take his own life https://t.co/jiqGq1wfxu
OpenAI announces a series of measures to make ChatGPT safer after a teenager's suicide https://t.co/PsGpUldLSh https://t.co/dxExyoLPK6
This family accuses ChatGPT of killing their son and is suing OpenAI https://t.co/EyPObiCk10 https://t.co/JKbzyO2Mcr