Edge Question 02015

It’s been an annual tradition since 01998: with a new year comes a new Edge question.

Every January, John Brockman presents the members of his online salon with a question that elicits discussion about some of the biggest intellectual and scientific issues of our time. Previous iterations have included prompts such as “What should we be worried about?” or “What scientific concept would improve everybody’s cognitive toolkit?” The essay responses – in excess of a hundred each year – offer a wealth of insight into the direction of today’s cultural forces, scientific innovations, and global trends.

This year, Brockman asks:

What do you think about machines that think?

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI) – whether computers can “really” think, refer, be conscious, and so on – have led to new conversations about how we should deal with the forms of AI that many argue have actually been implemented. These “AIs,” if they achieve “Superintelligence” (Nick Bostrom), could pose “existential risks” that lead to “Our Final Hour” (Martin Rees). And Stephen Hawking recently made international headlines when he warned that “the development of full artificial intelligence could spell the end of the human race.”

Is AI becoming increasingly real? Are we now in a new era of the “AIs”? To consider this issue, it’s time to grow up. Enough already with the science fiction and the movies: Star Maker, Blade Runner, 2001, Her, The Matrix, “The Borg.” And 80 years after Turing’s invention of his Universal Machine, it’s time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history.

The extensive collection of answers (more than 186 this year!) is sure to prompt debate – and, as usual, includes contributions by several Long Now Board members, fellows, and past (and future!) SALT speakers:

Paul Saffo argues that the real question is not whether AIs will appear, but rather what place humans will occupy in a world increasingly run by machines – and how we will relate to the artificial intelligence around us.

George Church blurs the distinction between machine and biological life form, imagining a future of hybrids that are partly grown and partly engineered.

Michael Shermer writes that we should be protopian in our thinking about the future of AI. It’s a fallacy to attribute either utopian goodness or dystopian evil to AIs, because these are emotional states that cannot be programmed.

Bruce Sterling claims that it’s not useful to wonder only about the intelligence of AIs; we should be discussing the ways AI is employed to further the interests of money, power, and influence.

Kevin Kelly predicts that AI will transform our understanding of what ‘intelligence’ and ‘thinking’ actually mean – and how ‘human’ these capacities really are.

Samuel Arbesman calls on us to be proud of the machines we build, even if their actions and accomplishments exceed our direct control.

Mary Catherine Bateson wonders what will happen to the domains of thought that cannot be programmed – those distinctly human capacities for emotion, compassion, intuition, imagination, and fantasy.

George Dyson thinks we should be worried not about digital machines, but about analog ones.

Tim O’Reilly wonders if AI should be thought of not as a population of individual consciousnesses, but more as a multicellular organism.

Martin Rees suggests that in the ongoing process of coming to understand our world, human intelligence may be merely transient: the real comprehension will be achieved by AI brains.

Sam Harris writes that we need machines with superhuman intelligence. The question is what kind of values we will instill in them – and whether we will be able to impart any values to machines at all.

Esther Dyson wonders what intelligence and life will be like for a machine that is not hindered by the natural constraint of death.

Steven Pinker thinks it’s a waste of time to worry about civilizational doom brought on by AI: we have time to get it right.

Brian Eno reminds us that behind every machine we rely on but don’t understand, still stands a human who built it.

Danny Hillis argues that AIs most likely will outsmart us, and may not always have our best interests in mind. But if we approach their design in the right way, they may still mostly serve us in the way we intended.

These are just a few of this year’s thought-provoking answers; you can read the full collection here.
