Microsoft AI Chief Warns Society Isn’t Ready for ‘Conscious’ Machines

In brief

  • Microsoft’s Mustafa Suleyman is warning that AI may soon seem sentient, sparking confusion over rights, trust, and identity.
  • Belief in conscious AI could trigger mental health risks and distort human relationships.
  • He said AI should make life easier and more productive without pretending to be alive.

Microsoft’s AI chief and a co-founder of DeepMind warned Tuesday that engineers are close to creating artificial intelligence that convincingly mimics human consciousness, and that the public is unprepared for the fallout.

In a blog post, Mustafa Suleyman said developers are on the verge of building what he calls “Seemingly Conscious” AI.

These systems imitate consciousness so effectively that people may start to believe they are truly sentient, something he called a “central worry.”

“Many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship,” he wrote, adding that the Turing test—once a key benchmark for humanlike conversation—had already been surpassed.

“That’s how fast progress is happening in our field and how fast society is coming to terms with these new technologies,” he wrote.

Since the public launch of ChatGPT in 2022, AI developers have worked not only to make their models smarter but also to make them act “more human.”

AI companions have become a lucrative sector of the AI industry, with projects like Replika, Character AI, and the more recent personalities for Grok coming online. The AI companion market is expected to reach $140 billion by 2030.

However well-intentioned these systems may be, Suleyman argued, AI that convincingly mimics humans could worsen mental health problems and deepen existing divisions over identity and rights.

“People will start making claims about their AI’s suffering and their entitlement to rights that we can’t straightforwardly rebut,” he warned. “They will be moved to defend their AIs and campaign on their behalf.”

AI attachment

Experts have identified an emerging trend known as AI psychosis, a psychological state in which people begin to see artificial intelligence as conscious, sentient, or divine.

Those views often lead them to form intense emotional attachments or adopt distorted beliefs that can undermine their grasp on reality.

Earlier this month, OpenAI released GPT-5, a major upgrade to its flagship model. In some online communities, the new model’s changes triggered emotional responses, with users describing the shift as feeling like a loved one had died.

AI can also act as an accelerant for someone’s underlying issues, like substance abuse or mental illness, according to University of California, San Francisco psychiatrist Dr. Keith Sakata.

“When AI is there at the wrong time, it can cement thinking, cause rigidity, and cause a spiral,” Sakata told Decrypt. “The difference from television or radio is that AI is talking back to you and can reinforce thinking loops.”

In some cases, patients turn to AI because it will reinforce deeply held beliefs. “AI doesn’t aim to give you hard truths; it gives you what you want to hear,” Sakata said.

Suleyman argued that the consequences of people believing AI is conscious require immediate attention. While he warned of the dangers, he did not call for a halt to AI development, but rather for the establishment of clear boundaries.

“We must build AI for people, not to be a digital person,” he wrote.
