Microsoft AI Chief Warns: Neural Networks Lack Consciousness

Only biological organisms are capable of consciousness, and developers and researchers should abandon projects that suggest otherwise, said Mustafa Suleyman, Microsoft's head of AI, in an interview with CNBC.

"I don't think people should be working on this. If you pose the wrong question, you're going to get the wrong answer. I believe this is precisely the case," he remarked at the AfroTech conference in Houston.

The Microsoft executive opposes the idea of creating artificial intelligence that could be conscious, as well as AI services that supposedly experience suffering.

In August, Suleyman published an essay introducing the term "Seemingly Conscious AI" (SCAI). Such a system would exhibit all the outward traits of a sentient being and therefore appear to possess consciousness: it mimics every characteristic of self-awareness while remaining internally empty.

"The system I envision is not truly conscious, yet it will convincingly simulate a human-like mind to the extent that it becomes indistinguishable from claims you or I might make about our own thinking," Suleyman explained.

Attributing consciousness to AI is dangerous, according to the expert. This could reinforce misconceptions, create new dependency issues, exploit our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing debates over rights, and lead to a monumental categorical error for society.

In 2023, Suleyman authored the book "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma," in which he examines the risks posed by AI and other emerging technologies.

The AI market is moving toward AGI, artificial general intelligence capable of performing any task at a human level. In August, OpenAI CEO Sam Altman noted that the term might not be "very useful." Models are evolving rapidly, and we will soon rely on them more and more, he believes.

For Suleyman, it is crucial to draw a clear line between the growing intelligence of artificial systems and the question of whether they can ever experience human emotions.

"Our physical experience of pain is what makes us very sad and feel terrible, but AI does not feel sadness when it undergoes 'pain'," he stated.

This distinction is vital, according to the expert. In reality, AI only creates the appearance of experience, a seeming narrative about itself and about consciousness, but it does not actually feel any of it.

"Technically, you know this because we can observe what the model does," the expert emphasized.

Suleyman's position echoes biological naturalism, a theory proposed by philosopher John Searle which asserts that consciousness depends on the processes of a living brain.

"The reason we grant rights to people today is that we don't want to harm them because they can suffer. They have pain and preferences that involve avoiding it. These models lack such attributes. It's merely a simulation," Suleyman pointed out.

The executive opposes research into consciousness in AI, since AI does not possess it. Microsoft, he said, is focused on creating services that are aware they are artificial intelligence.

"In simpler terms, we're building AI that always works for the benefit of humanity," he noted.

Notably, in October researchers at Anthropic found that leading models can exhibit a form of "introspective self-awareness": they can recognize and describe their own internal "thoughts" and, in some cases, even control them.