Artificial intelligence is at the heart of some of the most notable innovations of the past decade, including Google’s self-driving car, IBM’s Watson, and Apple’s Siri. Yet a number of technologists, including luminaries such as Elon Musk and Bill Gates, have spoken publicly about their concern that advances in artificial intelligence may eventually produce supremely intelligent computers that could slip out of human control and threaten the very existence of humankind. These fears have gripped the popular imagination, in no small part because they are widely represented in pop culture: this year alone has seen a parade of digital supervillains in blockbuster films such as Avengers: Age of Ultron, Ex Machina, and Terminator: Genisys. But is the sky really falling? Skeptics counter that these fears are hyperbolic nonsense, ungrounded in reality and detrimental to technological progress.
Listen in for a spirited discussion about the state of artificial intelligence, whether superintelligent computers will someday pose a threat to the human race, and how policymakers should respond to these ideas.