It’s been an annual tradition since 01998: with a new year comes a new Edge question.
Every January, John Brockman presents the members of his online salon with a question that elicits discussion about some of the biggest intellectual and scientific issues of our time. Previous iterations have included prompts such as “What should we be worried about?” or “What scientific concept would improve everybody’s cognitive toolkit?” The essay responses – in excess of a hundred each year – offer a wealth of insight into the direction of today’s cultural forces, scientific innovations, and global trends.
This year, Brockman asks:
What do you think about machines that think?
In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI) – whether computers can “really” think, refer, be conscious, and so on – have led to new conversations about how we should deal with the forms of AI that many argue are actually being implemented. These “AIs,” if they achieve “Superintelligence” (Nick Bostrom), could pose “existential risks” that lead to “Our Final Hour” (Martin Rees). And Stephen Hawking recently made international headlines when he noted that “the development of full artificial intelligence could spell the end of the human race.”
Is AI becoming increasingly real? Are we now in a new era of the “AIs”? To consider this issue, it’s time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, “The Borg.” Also, 80 years after Turing’s invention of his Universal Machine, it’s time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history.
The extensive collection of answers (186 this year!) is sure to prompt debate – and, as usual, includes contributions by several Long Now Board members, fellows, and past (and future!) SALT speakers:
Paul Saffo argues that the real question is not whether AIs will appear, but rather what place humans will occupy in a world increasingly run by machines – and how we will relate to the artificial intelligence around us.
Michael Shermer writes that we should be protopian in our thinking about the future of AI. It’s a fallacy to attribute either utopian goodness or dystopian evil to AIs, because these are emotional states that cannot be programmed.
Danny Hillis argues that AIs most likely will outsmart us, and may not always have our best interests in mind. But if we approach their design in the right way, they may still mostly serve us as we intended.
These are just a few of this year’s thought-provoking answers; you can read the full collection here.