Steven Johnson takes a Long Now Perspective on the Superintelligence Threat

Posted on Thursday, November 12th, 02015 by Andrew Warner
Categories: Futures, Long Term Thinking


Steven Johnson, former Seminar speaker and author of How We Got to Now, recently wrote about the dangers of A.I. on his blog “How We Got To Next”. He discusses evolutionary software and the existential threat of A.I. before concluding with a meditation on long-term thinking and The Long Now Foundation:

One of the hallmarks of human intelligence is our long-term planning; our ability to make short-term sacrifices in the service of more distant goals. But that planning has almost never extended beyond the range of months or, at best, a few years. Wherever each of us individually happens to reside on Mount Einstein, as a species we are brilliant problem-solvers. But we have never used our intelligence to solve a genuinely novel problem that doesn’t exist yet, a problem we anticipate arising in the distant future based on our examination of current trends.


To be clear, humans have engineered many ingenious projects with the explicit aim of ensuring that they last for centuries: pyramids, dynasties, monuments, democracies. Some of these creations, like democratic governance, have been explicitly designed to solve as-yet-undiscovered problems by engineering resilience and flexibility into their codes and conventions. But mostly those exercises in long-term planning have been about preserving the current order, not making a preemptive move against threats that might erupt three generations later. In a way, the closest analogue to the current interventions on climate (and the growing AI discussion) is eschatological: religious traditions that encourage us to make present-day decisions based on an anticipated Judgement Day that may not arrive for decades, or millennia.

No institution to my knowledge has thought more about the history and future of genuinely long-term planning than the Long Now Foundation, so I sent an email to a few of its founders asking whether there were comparable examples of collective foresight in the historical record. “The Dutch in planning their dykes may have been planning for 100-year flood levels, and the Japanese apparently had generational tsunami levels for village buildings. However, both of these expectations are more cyclical than emergent,” Kevin Kelly wrote back. He went on to say:

I think you are right that this kind of exercise is generally new, because we all now accept that the world of our grandchildren will be markedly different than our world — which was not true before.

I believe this is the function of science fiction. To parse, debate, rehearse, question, and prepare us for the future of new. For at least a century, science fiction has served to anticipate the future. I think you are suggesting that we have gone beyond science fiction by crafting laws, social manners, regulations, etc., that anticipate the future in more concrete ways. In the past there have been many laws prohibiting new inventions as they appeared. But I am unaware of any that prohibited inventions before they appeared.

I read this as a cultural shift from science fiction as entertainment to science fiction as infrastructure — a necessary method of anticipation.

Stewart Brand sounded a note of caution. “Defining potential, long-term problems is a great public service,” he wrote. “Over-defining solutions early on is not. Some problems just go away on their own. For others, eventual solutions that emerge are not at all imaginable from the start.”

Read the full article on Steven Johnson’s blog