Support Long-term Thinking

Essays by George Church and George Dyson from John Brockman’s New Book on A.I.

by Ahmed Kabil on February 19th, 02019

On Monday, February 25th, 02019, John Brockman (Founder of edge.org) will speak at Long Now about his new book on artificial intelligence, Possible Minds. Brockman will interview several of the contributors to the book — Rodney Brooks, Alison Gopnik, and Stuart Russell — on stage. Following the interviews, Kevin Kelly will host a Q&A and discussion with the group.

Medium recently published two essays from Possible Minds by historian George Dyson and Harvard geneticist George Church. Dyson has spoken at Long Now on several occasions (in 02004, 02005, and 02013) and has contributed a book list for The Manual For Civilization. Church is working with Stewart Brand to help bring back the woolly mammoth.

The Third Law: The Future of Computing is Analog by George Dyson

The third law states that any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

The third law offers comfort to those who believe that until we understand intelligence, we need not worry about superhuman intelligence arising among machines. But there is a loophole in the third law. It is entirely possible to build something without understanding it. You don’t need to fully understand how a brain works in order to build one that works. This is a loophole that no amount of supervision over algorithms by programmers and their ethical advisers can ever close. Provably “good” A.I. is a myth. Our relationship with true A.I. will always be a matter of faith, not proof.

A Bill of Rights for the Age of Artificial Intelligence by George Church

Probably we should be less concerned about us versus them and more concerned about the rights of all sentients in the face of an emerging unprecedented diversity of minds. We should be harnessing this diversity to minimize global existential risks, like supervolcanoes and asteroids.

While we may not know what ratio of bio/homo/nano/robo hybrids will be dominant at each step of our accelerating evolution, we can aim for high levels of humane, fair, and safe treatment (“use”) of one another.