Former Google engineer Nate Soares has put the probability that humanity is wiped out by the development of artificial intelligence at 95%. He called on humanity to change course immediately, warning that otherwise the disaster cannot be avoided. "We are rushing towards the abyss at 100 km/h," he said.
Soares’ concerns are shared by prominent scientists and industry leaders, including Nobel laureate Geoffrey Hinton, computer scientist Yoshua Bengio, and the heads of OpenAI, Anthropic, and Google DeepMind. They all signed a public statement stressing that “reducing the risk of human extinction due to AI should be a global priority — alongside combating pandemics and the nuclear threat.”
According to The Times, the main concerns center on the development of so-called artificial superintelligence (ASI): a system capable of complex planning, manipulation, independent decision-making, and even deception. Such an AI could slip out of human control, while its internal mechanisms remained opaque to people.
Some experts believe the danger is not only the outright extinction of humanity but also the gradual displacement of humans from key processes. In a world where machines make the decisions, there may simply be no place left for people.
Although this sounds like a science fiction scenario, scientists emphasize that the risk is real and demands serious attention now.