31.05.2021
Summary: existential risks to humanity and technological progress

The Tech & Society Communication Group explored various aspects of existential risks to humanity connected with technological advances, including the threat posed by the development of artificial intelligence systems.

Jaan Tallinn, technologist and philanthropist and a founding engineer of Skype and Kazaa, was a special guest at the Group's closed meeting.

In an in-depth interview with Olena Boytsun, the founder of the Group, Jaan Tallinn offered a broad definition of existential risk as a catastrophic reduction of humanity's maximum potential. He divides existential risks into two categories: natural and technological. "If you want to predict the future of the planet, the most important factor is going to be what kind of technology we will have," Jaan Tallinn noted.

At the first working meeting of the Group, moderated by Olena Boytsun, members continued the discussion on the classification of existential risks, the current state of AI development globally, governance of the technological field, and questions of ethics, including developer ethics and the importance of AI alignment.

According to Jaan Tallinn, within 20-30 years the world may reach an inflection point, with machines that humanity will no longer be able to control. "AI is like an alien ship approaching our planet. However, the important thing is that we, humans, are to create the cooperation criteria. With this degree of freedom we need to do our best and try to be thoughtful."

A future in which humans coexist with AI meta-technology poses many new challenges. Leading technologists and scientists of our time, including Elon Musk and Stephen Hawking, have called AI a likely major threat to civilization in the near future. At the Group meeting, Jaan Tallinn noted: "You can divide the future into three categories. Everything is going to be bad regardless of what we do. Everything will be fine regardless of what we do. And in the middle there is such a future that depends on what we do. My argument is that we can safely ignore the extremes. I am a pragmatic optimist."

Olena Boytsun suggested that studying the existential risks of artificial intelligence will require the involvement of a wide range of professionals and society as a whole: "It makes sense to think about how to make processes more inclusive and understandable, to engage all stakeholders, and to put the issue of existential risks of artificial intelligence development on the public agenda."

Before and after the working meeting, Group members took part in a poll on their attitude toward the existential risks associated with AI. According to the results, after a comprehensive discussion of the topic, 86% of participants considered that AI development could threaten human existence. AI development was also identified as the risk most likely to lead to the most significant negative consequences for mankind's existence.
Poll: Existential risks
Which of the risks below, in your opinion, is most likely to lead to the most significant negative consequences for the existence of mankind?

Risk                                                    Before the meeting   After the meeting
Climate change                                          31%                  17%
Nuclear war                                             25%                  8%
Development of AI                                       25%                  50%
None of the above                                       13%                  25%
Natural disasters (tsunami, volcanic eruptions, etc.)   6%                   0%
© 2021 Tech&Society Communication group. All rights reserved.