Vox on AI as a threat to humanity

Should we take AI seriously as a threat to humanity? This Vox piece says yes.

Vox has a new piece on AI, why people fear it, and the threats that it might pose to the human race:

The case for taking AI seriously as a threat to humanity

This piece gives a brief overview of AI for readers unfamiliar with the technical details. It is one example of a growing awareness of the ethical and even existential issues raised by artificial intelligence and machine learning. While most of the discussion in the press and in the academic literature focuses on the problem-solving power and positive uses of the technology, the downsides are very real, and we ignore them at our peril. That awareness is growing, but not, in my opinion, quickly enough: AI is progressing much faster than our ability to understand how it will change our society, how best to respond to those changes, and how to manage the integration of such an astoundingly powerful technology.

I commented recently on Yuval Noah Harari's piece on technology and tyranny, and this Vox piece raises many of the same issues. There is a good discussion of how exactly AI might wipe us out; though this is a common science fiction trope, it is a possibility we should think through in detail. The most detailed treatment I have seen of how such a process might play out is Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies, which, though speculative, raises tremendously important questions. I wrote a short review of the book for one of my machine learning classes:
