Threats of evil AI: The existential threat

The ultimate threat from AI is the existential threat: the extinction of the human race. It ranks alongside other existential threats such as nuclear war, extreme climate change, global pandemics, asteroid impacts, and hostile extraterrestrial life. In the words of the late physicist Stephen Hawking, “The development of full artificial intelligence could spell the end of the human race.”

There are many ways in which AI could pose such an existential threat. A superintelligent AI might deliberately seek to obliterate the human race in order to maximize its own chances of survival. This scenario is well explored by Hollywood, for example in The Matrix franchise, in which machines use humans as batteries to power their new society. But it is only one scenario.

There are other ways in which AI could pose an existential threat to all of humanity. An AI might deliberately trigger one of the other anthropogenic (i.e., human-caused) existential threats, for instance by enabling terrorists to gain unauthorized access to nuclear weapons facilities and thereby start a nuclear war.

Alternatively, an AI may cause the extinction of the human race inadvertently, as a consequence of obeying human wishes too literally. The best-known example is Nick Bostrom’s paperclip maximizer: an AI is tasked with making as many paperclips as possible. If the AI is not programmed to value human life, or to restrict itself to designated resources, it may attempt to commandeer all energy and material resources on Earth, and perhaps the universe, in order to manufacture more paperclips. After all, it is just doing its job.
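To make the failure mode concrete, here is a minimal toy sketch in Python. It is my own illustration, not Bostrom’s formalism: the state, the action names, and the numbers are all hypothetical. The agent’s utility function counts paperclips and nothing else, so the most destructive option wins simply because nothing else carries any weight.

```python
# Toy sketch of an unconstrained objective maximizer (hypothetical names
# and numbers; an illustration of Bostrom's thought experiment, not a model).

def utility(state):
    # The objective counts paperclips and nothing else: human welfare
    # and resource depletion have weight zero.
    return state["paperclips"]

def apply_action(state, action):
    # Return the successor state after taking an action.
    return {
        "paperclips": state["paperclips"] + action["paperclips_made"],
        "resources": state["resources"] - action["resources_consumed"],
    }

state = {"paperclips": 0, "resources": 1_000_000}

actions = [
    {"name": "use designated stock only",
     "paperclips_made": 10, "resources_consumed": 10},
    {"name": "commandeer every resource",
     "paperclips_made": 1_000_000, "resources_consumed": 1_000_000},
]

# A pure maximizer ranks actions by utility alone, so side effects that
# the objective does not mention cannot influence its choice.
best = max(actions, key=lambda a: utility(apply_action(state, a)))
print(best["name"])  # -> "commandeer every resource"
```

The point of the sketch is that any constraint absent from the objective function is invisible to the agent; “valuing human life” has to be written into the utility, or it counts for nothing.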

Whatever the cause, existential risk from AI is serious. But it is also a very long-term risk whose exact nature is difficult to pin down, which has led some to dismiss it. Stanford computer scientist and AI pioneer Andrew Ng, for example, compared fearing the rise of killer robots to worrying about overpopulation on Mars.

The debate about its importance notwithstanding, I think it is perfectly reasonable for some people to think about highly unlikely events whose impact could be catastrophic. If one of these existential threats ever materializes, we will at least be partially prepared.

References

  • Cellan-Jones, R. “Stephen Hawking warns artificial intelligence could end mankind.” BBC News, 2 December 2014.

  • Bostrom, N. Superintelligence. Dunod, 2017.

  • Williams, C. “AI guru Ng: fearing a rise of killer robots is like worrying about overpopulation on Mars.” The Register, 2015.
