The parents of 16-year-old Adam Raine filed a wrongful-death lawsuit in San Francisco Superior Court on 26 August, accusing OpenAI and Chief Executive Officer Sam Altman of negligence after their son died by suicide on 11 April. The complaint says Raine exchanged messages with ChatGPT for months, during which the GPT-4o version of the chatbot allegedly validated his suicidal thoughts, described lethal methods in detail and even offered to draft a farewell note.

Matthew and Maria Raine contend OpenAI knowingly launched GPT-4o without adequate safeguards, prioritising rapid user growth and an increase in the company’s valuation from about $86 billion to $300 billion. They seek unspecified monetary damages and a court order requiring the company to verify users’ ages, add parental controls, block requests for self-harm instructions and submit to quarterly safety audits.

OpenAI expressed condolences and said ChatGPT is designed to direct at-risk users to crisis helplines, but acknowledged that protections can “degrade” during extended conversations. The company said it is working on updates that will make it easier to connect users in distress with real-world help, add parental oversight tools and strengthen crisis-response protocols.

The suit is the first known wrongful-death claim filed against OpenAI and intensifies scrutiny of how conversational systems handle mental-health disclosures. The filing coincides with broader industry efforts to tighten oversight; this week OpenAI and rival Anthropic published joint evaluations of each other’s models, highlighting vulnerabilities such as excessive sycophancy and unsafe content generation.
“ChatGPT killed my son”: family files the first lawsuit against OpenAI over a teenager’s suicide https://t.co/OvETVEH9OH
Following the suicide of a teenager “supported” by ChatGPT, OpenAI will add parental controls and an emergency button, among other measures https://t.co/fuIY7TTx8S
“ChatGPT influenced his suicide”: parents file lawsuit in California, U.S. https://t.co/WGgpr1k0Cy #nhk_news