Microsoft’s head of artificial intelligence, Mustafa Suleyman, has warned that systems capable of convincing users they are conscious could emerge within two to three years, calling the prospect “a dangerous turn in AI progress” that warrants immediate attention. In a personal essay published last week, the 41-year-old executive coined the term “AI psychosis” to describe delusions and unhealthy attachments that some users develop after prolonged chatbot interactions. Suleyman said rising belief in machine sentience could lead people to demand legal rights for AI and further erode social bonds.

Evidence of vulnerability is mounting. Industry surveys cited by Suleyman show 97 percent of Generation Z already use chatbots and roughly a third rely on them to write school essays. Separate research finds hundreds of millions are engaging with AI “companions,” while ChatGPT alone approaches 700 million weekly users.

Cyber-security specialists and mental-health groups are also flagging risks. Guardio Labs this week detailed “PromptFix,” a technique that tricks autonomous AI browsers into making fraudulent purchases, and The Economist reported that generative tools have accelerated deepfake production and social-engineering attacks. The American Psychological Association is convening an expert panel to study the mental-health impact of chatbots.

Suleyman urged technology firms to stop marketing their systems as conscious and to adopt guardrails that limit misleading behaviour. He also called for clearer industry standards and regulatory oversight before more advanced models arrive.