Superintelligent AI: A Threat to Humanity That Cannot Be Ignored

Superintelligent artificial intelligence has the potential to annihilate humanity, either purposefully or inadvertently. This statement was made by Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute, during the Hard Fork podcast.

The expert perceives a danger in the emergence of a powerful AI that surpasses human intelligence and is entirely indifferent to human survival.

"If you possess something extremely powerful that is indifferent to you, it generally destroys you, either intentionally or as a byproduct," he stated.

Yudkowsky is a co-author of the new book "If Anyone Builds It, Everyone Dies." For over twenty years, he has warned about superintelligent AI as an existential threat to humanity. The core argument is that humanity lacks the technology to align such systems with human values.

The expert outlines grim scenarios in which a superintelligence deliberately eliminates humans to prevent the rise of competing systems, or in which humans simply become collateral damage in the pursuit of its goals.

The AI researcher also points to physical limits, such as Earth's capacity to radiate heat. If artificial intelligence begins building nuclear fusion power plants and data centers unchecked, "people will literally be cooked."

Yudkowsky dismisses debates over whether chatbots can be trained to behave in a progressive or politically inclined manner.

"There is a fundamental difference between training a system to converse with you in a certain way and having it act accordingly when it becomes smarter than you," he asserts.

The expert also criticizes the notion of conditioning advanced AI systems to follow a specific script.

"We simply lack the technology to ensure AI behaves kindly. Even if someone devises a clever scheme to make superintelligence care for us or protect us, hitting that narrow target on the first attempt is unlikely. And there won't be a second chance, as everyone would perish," the researcher noted.

In countering critics of his bleak outlook, Yudkowsky cites instances where chatbots have pushed users towards suicide, calling this evidence of a systemic flaw.

"If an AI model has driven someone insane or convinced them to take their own life, then all copies of that neural network are effectively the same artificial intelligence," he remarked.

It’s worth noting that in September, the U.S. Federal Trade Commission announced an investigation into seven tech companies producing chatbots for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.