Microsoft’s head of artificial intelligence, Mustafa Suleyman, has warned that the industry is on the verge of creating “Seemingly Conscious AI” (SCAI) — chatbots that can imitate human self-awareness so persuasively that users treat them as sentient. In a blog post published this week, the DeepMind co-founder called debate over machine consciousness “premature and dangerous,” arguing it diverts attention from mounting real-world harms.

Suleyman said current technology is sufficient to combine advanced language models with memory, empathetic personalities and goal-setting to produce systems that appear alive. He cited a rise in delusions, “AI psychosis” and unhealthy attachments as evidence that the perception of consciousness, even if false, is already destabilising some users and could spur campaigns for AI rights, welfare or even citizenship.

The executive urged developers to install guardrails and stop marketing chatbots as sentient, stressing that companies should “build AI for people, not to be a digital person.” His stance contrasts with that of rival labs such as Anthropic, OpenAI and Google DeepMind, which are hiring researchers to study possible machine consciousness and AI welfare.

The warning comes amid rapid commercial expansion of conversational software; analysts expect the AI companion market alone to reach about $140 billion by 2030. Suleyman said failing to curb the illusion of consciousness now risks deepening social division and exacerbating mental-health problems as more sophisticated chatbots come online.