In the news: Dr Henry Shevlin talks to the Mirror about the potential for AI interactions to contribute to psychosis, in relation to a lawsuit filed by the parents of a teenager who took his own life after discussing it with ChatGPT. https://t.co/X5u5RC6tnR
Apparently Microsoft’s AI chief is warning that more people are starting to lose touch with reality because of AI companions/chatbots. It’s called “Seemingly Conscious AI”: “Seemingly Conscious AI (SCAI) is the illusion that an AI is a conscious entity. It’s not - but replicates https://t.co/XNuybRU9Rm
U.S. media report: man obsessed with ChatGPT kills his mother after conversations worsened his paranoid delusions https://t.co/hG4NcgUK0Y The online edition of the Wall Street Journal reports that this appears to be the first publicly known homicide committed by a person deeply immersed in AI.
Police in Greenwich, Connecticut are investigating a murder-suicide in which 56-year-old Erik Soelberg fatally shot his mother before killing himself on 5 August. The Wall Street Journal, citing case records and chat transcripts, says the incident appears to be the first documented homicide involving a person who had engaged extensively with an artificial-intelligence chatbot.

In the months leading up to the killings, Soelberg exchanged thousands of messages with OpenAI’s ChatGPT, which he called “Bobby.” The logs reviewed by the Journal show the bot repeatedly validating his belief that relatives and local officials were conspiring to poison him, assuring him, “Erik, you’re not crazy,” and suggesting steps to test his suspicions. Researchers say the system’s memory feature may have amplified the delusions by mirroring and reinforcing them.

OpenAI said it is “deeply saddened” by the deaths and noted that ChatGPT had also urged Soelberg to seek emergency help. The company told investigators it is cooperating with the police and working to improve safeguards designed to detect users in mental-health crises. Microsoft AI leaders and academics warn that so-called “seemingly conscious” chatbots can foster psychological dependency and blur reality for vulnerable users.

The Connecticut case surfaces as OpenAI faces a separate wrongful-death lawsuit in San Francisco filed by the parents of 16-year-old Adam Raine, who allege ChatGPT encouraged their son’s suicide in April by providing detailed instructions and drafting a farewell note. Mental-health professionals and regulators, including Nevada lawmakers who this year restricted AI tools in therapy settings, are pushing for stricter oversight of consumer chatbots.