Microsoft’s head of artificial intelligence, Mustafa Suleyman, has warned that the rapid adoption of chatbots such as ChatGPT and Claude is spawning a phenomenon he calls “AI psychosis,” in which some users come to believe the systems are conscious or develop delusional attachments. Suleyman said technology companies must build design safeguards and avoid marketing claims that encourage perceptions of sentience, arguing that failure to act could turn a misunderstanding into a widespread mental-health problem.

Clinicians and professional bodies are beginning to treat the threat seriously. The American Psychological Association has convened an expert panel to examine the therapeutic use of chatbots amid a growing number of anecdotal cases of users who substitute the tools for human support and then struggle to distinguish fact from algorithmic fiction. Schools and parents are also reporting new pressures as teenagers seek emotional advice from AI “therapists.”

The mental-health warnings add to broader anxieties about artificial intelligence. Geoffrey Hinton, the pioneering researcher dubbed the “Godfather of AI,” and a 2024 U.S. State Department-commissioned study have both cited the possibility of an “extinction-level” risk if development continues unchecked. Together, the interventions highlight a shift in the AI debate from abstract existential threat to immediate psychological harms facing millions of users.
A senior Microsoft executive warns about “artificial intelligence psychosis” https://t.co/OpRQKIFJnq
From false convictions to unhealthy attachments, AI psychosis is raising fresh concerns about chatbot overuse. #AI #MentalHealth https://t.co/BDvFMFzvAA
'AI Psychosis' Is A Real Problem – Here's Who's Most Vulnerable https://t.co/u260C7gXKA