Richard Feynman and The Connection Machine

Posted on February 8th, 02017 by Ahmed Kabil
Categories: Long Term Thinking, Technology, The Big Here

One of the most popular pieces of writing on our site is Long Now co-founder Danny Hillis’ remembrance of building an experimental computer with theoretical physicist Richard Feynman. It’s easy to see why: Hillis’ reminiscences about Feynman’s final years as they worked together on the Connection Machine are at once illuminating and poignant, and paint a picture of a man who was beloved as much for his eccentricity as for his genius.

Photo by Faustin Bray

Richard Feynman and The Connection Machine

by W. Daniel Hillis for Physics Today

Reprinted with permission from Phys. Today 42(2), 78 (01989). Copyright 01989, American Institute of Physics.

One day when I was having lunch with Richard Feynman, I mentioned to him that I was planning to start a company to build a parallel computer with a million processors. His reaction was unequivocal, “That is positively the dopiest idea I ever heard.” For Richard a crazy idea was an opportunity to either prove it wrong or prove it right. Either way, he was interested. By the end of lunch he had agreed to spend the summer working at the company.

Richard’s interest in computing went back to his days at Los Alamos, where he supervised the “computers,” that is, the people who operated the mechanical calculators. There he was instrumental in setting up some of the first plug-programmable tabulating machines for physical simulation. His interest in the field was heightened in the late 1970’s when his son, Carl, began studying computers at MIT.

I got to know Richard through his son. I was a graduate student at the MIT Artificial Intelligence Lab and Carl was one of the undergraduates helping me with my thesis project. I was trying to design a computer fast enough to solve common sense reasoning problems. The machine, as we envisioned it, would contain a million tiny computers, all connected by a communications network. We called it a “Connection Machine.” Richard, always interested in his son’s activities, followed the project closely. He was skeptical about the idea, but whenever we met at a conference or I visited CalTech, we would stay up until the early hours of the morning discussing details of the planned machine. The first time he ever seemed to believe that we were really going to try to build it was the lunchtime meeting.

Richard arrived in Boston the day after the company was incorporated. We had been busy raising the money, finding a place to rent, issuing stock, etc. We set up in an old mansion just outside of the city, and when Richard showed up we were still recovering from the shock of having the first few million dollars in the bank. No one had thought about anything technical for several months. We were arguing about what the name of the company should be when Richard walked in, saluted, and said, “Richard Feynman reporting for duty. OK, boss, what’s my assignment?” The assembled group of not-quite-graduated MIT students was astounded.

After a hurried private discussion (“I don’t know, you hired him…”), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems.

“That sounds like a bunch of baloney,” he said. “Give me something real to do.”

So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router.

The Machine

The router of the Connection Machine was the part of the hardware that allowed the processors to communicate. It was a complicated device; by comparison, the processors themselves were simple. Connecting a separate communication wire between each pair of processors was impractical since a million processors would require $10^{12}$ wires. Instead, we planned to connect the processors in a 20-dimensional hypercube so that each processor would only need to talk to 20 others directly. Because many processors had to communicate simultaneously, many messages would contend for the same wires. The router’s job was to find a free path through this 20-dimensional traffic jam or, if it couldn’t, to hold onto the message in a buffer until a path became free. Our question to Richard Feynman was whether we had allowed enough buffers for the router to operate efficiently.
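In a binary hypercube the wiring falls out of the addressing: number the processors 0 through 2^20 − 1 and connect every pair of addresses that differ in exactly one bit, so a message can reach any destination in at most 20 hops by fixing one differing bit per hop. A minimal sketch of that addressing scheme (not the actual router, which also had to arbitrate contention and buffer blocked messages):

```python
def neighbors(addr: int, dims: int = 20) -> list[int]:
    """Addresses reachable in one hop: flip exactly one of the dims bits."""
    return [addr ^ (1 << d) for d in range(dims)]

def route(src: int, dst: int, dims: int = 20) -> list[int]:
    """Greedy hypercube route: correct one differing address bit per hop."""
    path, addr = [src], src
    for d in range(dims):
        if (addr ^ dst) & (1 << d):   # bit d still disagrees with destination
            addr ^= 1 << d
            path.append(addr)
    return path
```

Each processor has exactly 20 neighbors, and the path length equals the number of bits in which source and destination differ.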

During those first few months, Richard began studying the router circuit diagrams as if they were objects of nature. He was willing to listen to explanations of how and why things worked, but fundamentally he preferred to figure out everything himself by simulating the action of each of the circuits with pencil and paper.

In the meantime, the rest of us, happy to have found something to keep Richard occupied, went about the business of ordering the furniture and computers, hiring the first engineers, and arranging for the Defense Advanced Research Projects Agency (DARPA) to pay for the development of the first prototype. Richard did a remarkable job of focusing on his “assignment,” stopping only occasionally to help wire the computer room, set up the machine shop, shake hands with the investors, install the telephones, and cheerfully remind us of how crazy we all were. When we finally picked the name of the company, Thinking Machines Corporation, Richard was delighted. “That’s good. Now I don’t have to explain to people that I work with a bunch of loonies. I can just tell them the name of the company.”

The technical side of the project was definitely stretching our capacities. We had decided to simplify things by starting with only 64,000 processors, but even then the amount of work to do was overwhelming. We had to design our own silicon integrated circuits, with processors and a router. We also had to invent packaging and cooling mechanisms, write compilers and assemblers, devise ways of testing processors simultaneously, and so on. Even simple problems like wiring the boards together took on a whole new meaning when working with tens of thousands of processors. In retrospect, if we had had any understanding of how complicated the project was going to be, we never would have started.

‘Get These Guys Organized’

I had never managed a large group before and I was clearly in over my head. Richard volunteered to help out. “We’ve got to get these guys organized,” he told me. “Let me tell you how we did it at Los Alamos.”

Every great man that I have known has had a certain time and place in their life that they use as a reference point; a time when things worked as they were supposed to and great things were accomplished. For Richard, that time was at Los Alamos during the Manhattan Project. Whenever things got “cockeyed,” Richard would look back and try to understand how now was different than then. Using this approach, Richard decided we should pick an expert in each area of importance in the machine, such as software or packaging or electronics, to become the “group leader” in this area, analogous to the group leaders at Los Alamos.

Part Two of Feynman’s “Let’s Get Organized” campaign was that we should begin a regular seminar series of invited speakers who might have interesting things to do with our machine. Richard’s idea was that we should concentrate on people with new applications, because they would be less conservative about what kind of computer they would use. For our first seminar he invited John Hopfield, a friend of his from CalTech, to give us a talk on his scheme for building neural networks. In 1983, studying neural networks was about as fashionable as studying ESP, so some people considered John Hopfield a little bit crazy. Richard was certain he would fit right in at Thinking Machines Corporation.

What Hopfield had invented was a way of constructing an associative memory, a device for remembering patterns. To use an associative memory, one trains it on a series of patterns, such as pictures of the letters of the alphabet. Later, when the memory is shown a new pattern it is able to recall a similar pattern that it has seen in the past. A new picture of the letter “A” will “remind” the memory of another “A” that it has seen previously. Hopfield had figured out how such a memory could be built from devices that were similar to biological neurons.

Not only did Hopfield’s method seem to work, but it seemed to work well on the Connection Machine. Feynman figured out the details of how to use one processor to simulate each of Hopfield’s neurons, with the strength of the connections represented as numbers in the processors’ memory. Because of the parallel nature of Hopfield’s algorithm, all of the processors could be used concurrently with 100% efficiency, so the Connection Machine would be hundreds of times faster than any conventional computer.
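The mapping Hillis describes is easy to sketch in serial form: each processor holds one neuron’s state and a row of connection strengths, and all neurons update together. A toy Hopfield memory along those lines (illustrative only; the details of Feynman’s actual program are not recorded here):

```python
def train(patterns):
    """Hebbian learning: weight w[i][j] accumulates products of +/-1 bits."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    """Synchronous updates: every 'neuron' (in parallel, on the real machine)
    takes the sign of its weighted input from all the others."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state
```

Shown a corrupted pattern, the network settles back to the stored pattern it most resembles, which is the “reminding” behavior described above.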

An Algorithm For Logarithms

Feynman worked out the program for computing Hopfield’s network on the Connection Machine in some detail. The part that he was proudest of was the subroutine for computing logarithms. I mention it here not only because it is a clever algorithm, but also because it is a specific contribution Richard made to the mainstream of computer science. He invented it at Los Alamos.

Consider the problem of finding the logarithm of a fractional number between 1.0 and 2.0 (the algorithm can be generalized without too much difficulty). Feynman observed that any such number can be uniquely represented as a product of numbers of the form $1 + 2^{-k}$, where $k$ is an integer. Testing each of these factors in a binary number representation is simply a matter of a shift and a subtraction. Once the factors are determined, the logarithm can be computed by adding together the precomputed logarithms of the factors. The algorithm fit especially well on the Connection Machine, since the small table of the logarithms of $1 + 2^{-k}$ could be shared by all the processors. The entire computation took less time than division.
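In floating point the algorithm looks like this (a sketch; the real routine worked in fixed point, where testing a factor of $1 + 2^{-k}$ really is a shift and a subtract):

```python
import math

# Small precomputed table, shared by all processors: log(1 + 2^-k).
TABLE = [math.log(1 + 2.0 ** -k) for k in range(1, 40)]

def feynman_log(x: float) -> float:
    """Logarithm of x in [1, 2): peel off factors of the form 1 + 2^-k,
    then sum the precomputed logarithms of the factors used."""
    assert 1.0 <= x < 2.0
    result = 0.0
    for k in range(1, 40):
        factor = 1 + 2.0 ** -k
        if x >= factor:        # in fixed point: a shift and a subtraction
            x /= factor
            result += TABLE[k - 1]
    return result
```

After testing factors down to $k = 39$, the remaining error is below $2^{-39}$, so the result agrees with the library logarithm to better than nine decimal places.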

Concentrating on the algorithm for a basic arithmetic operation was typical of Richard’s approach. He loved the details. In studying the router, he paid attention to the action of each individual gate and in writing a program he insisted on understanding the implementation of every instruction. He distrusted abstractions that could not be directly related to the facts. When several years later I wrote a general interest article on the Connection Machine for Scientific American, he was disappointed that it left out too many details. He asked, “How is anyone supposed to know that this isn’t just a bunch of crap?”

Feynman’s insistence on looking at the details helped us discover the potential of the machine for numerical computing and physical simulation. We had convinced ourselves at the time that the Connection Machine would not be efficient at “number-crunching,” because the first prototype had no special hardware for vectors or floating point arithmetic. Both of these were “known” to be requirements for number-crunching. Feynman decided to test this assumption on a problem that he was familiar with in detail: quantum chromodynamics.

Quantum chromodynamics is a theory of the internal workings of atomic particles such as protons. Using this theory it is possible, in principle, to compute the values of measurable physical quantities, such as a proton’s mass. In practice, such a computation requires so much arithmetic that it could keep the fastest computers in the world busy for years. One way to do this calculation is to use a discrete four-dimensional lattice to model a section of space-time. Finding the solution involves adding up the contributions of all of the possible configurations of certain matrices on the links of the lattice, or at least some large representative sample. (This is essentially a Feynman path integral.) The thing that makes this so difficult is that calculating the contribution of even a single configuration involves multiplying the matrices around every little loop in the lattice, and the number of loops grows as the fourth power of the lattice size. Since all of these multiplications can take place concurrently, there is plenty of opportunity to keep all 64,000 processors busy.

To find out how well this would work in practice, Feynman had to write a computer program for QCD. Since the only computer language Richard was really familiar with was Basic, he made up a parallel version of Basic in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine.

He was excited by the results. “Hey Danny, you’re not going to believe this, but that machine of yours can actually do something useful!” According to Feynman’s calculations, the Connection Machine, even without any special hardware for floating point arithmetic, would outperform a machine that CalTech was building for doing QCD calculations. From that point on, Richard pushed us more and more toward looking at numerical applications of the machine.

By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman’s router equations were in terms of variables representing continuous quantities such as “the average number of 1 bits in a message address.” I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of “the number of 1’s” with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman’s equations suggested that we only needed five. We decided to play it safe and ignore Feynman.

The decision to ignore Feynman’s analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman’s equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers.

Fortunately, he was right. When we put together the chips the machine worked. The first program run on the machine in April of 1985 was Conway’s game of Life.
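Life fits the machine’s one-processor-per-cell style naturally. A serial sketch in that spirit: every cell reads a frozen snapshot of the grid and all cells update simultaneously (the toroidal wraparound here is my assumption, not a detail from the text):

```python
def life_step(grid):
    """One synchronous Game of Life update: each cell, conceptually in
    parallel, counts live neighbors and applies the same birth/survival rule."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Wraparound (toroidal) neighborhood of the eight surrounding cells.
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    # Build the next generation from the old grid so all updates are simultaneous.
    return [[1 if (n := live_neighbors(r, c)) == 3 or (n == 2 and grid[r][c])
             else 0 for c in range(cols)] for r in range(rows)]
```

A vertical “blinker” of three live cells flips to horizontal and back, the classic first sanity check for any Life implementation.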

Cellular Automata

The game of Life is an example of a class of computations that interested Feynman called cellular automata. Like many physicists who had spent their lives going to successively lower and lower levels of atomic detail, Feynman often wondered what was at the bottom. One possible answer was a cellular automaton. The notion is that the “continuum” might, at its lowest levels, be discrete in both space and time, and that the laws of physics might simply be a macro-consequence of the average behavior of tiny cells. Each cell could be a simple automaton that obeys a small set of rules and communicates only with its nearest neighbors, like the lattice calculation for QCD. If the universe in fact worked this way, then it presumably would have testable consequences, such as an upper limit on the density of information per cubic meter of space.

The notion of cellular automata goes back to von Neumann and Ulam, whom Feynman had known at Los Alamos. Richard’s recent interest in the subject was motivated by his friends Ed Fredkin and Stephen Wolfram, both of whom were fascinated by cellular automata models of physics. Feynman was always quick to point out to them that he considered their specific models “kooky,” but like the Connection Machine, he considered the subject sufficiently crazy to put some energy into.

There are many potential problems with cellular automata as a model of physical space and time; for example, finding a set of rules that obeys special relativity. One of the simplest problems is just making the physics so that things look the same in every direction. The most obvious patterns of cellular automata, such as a fixed three-dimensional grid, have preferred directions along the axes of the grid. Is it possible to implement even Newtonian physics on a fixed lattice of automata?

Feynman had a proposed solution to the anisotropy problem which he attempted (without success) to work out in detail. His notion was that the underlying automata, rather than being connected in a regular lattice like a grid or a pattern of hexagons, might be randomly connected. Waves propagating through this medium would, on the average, propagate at the same rate in every direction.

Cellular automata started getting attention at Thinking Machines when Stephen Wolfram, who was also spending time at the company, suggested that we should use such automata not as a model of physics, but as a practical method of simulating physical systems. Specifically, we could use one processor to simulate each cell and rules that were chosen to model something useful, like fluid dynamics. For two-dimensional problems there was a neat solution to the anisotropy problem since Frisch, Hasslacher and Pomeau had shown that a hexagonal lattice with a simple set of rules produced isotropic behavior at the macro scale. Wolfram used this method on the Connection Machine to produce a beautiful movie of a turbulent fluid flow in two dimensions. Watching the movie got all of us, especially Feynman, excited about physical simulation. We all started planning additions to the hardware, such as support of floating point arithmetic that would make it possible for us to perform and display a variety of simulations in real time.

Feynman the Explainer

In the meantime, we were having a lot of trouble explaining to people what we were doing with cellular automata. Eyes tended to glaze over when we started talking about state transition diagrams and finite state machines. Finally Feynman told us to explain it like this,

“We have noticed in nature that the behavior of a fluid depends very little on the nature of the individual particles in that fluid. For example, the flow of sand is very similar to the flow of water or the flow of a pile of ball bearings. We have therefore taken advantage of this fact to invent a type of imaginary particle that is especially simple for us to simulate. This particle is a perfect ball bearing that can move at a single speed in one of six directions. The flow of these particles on a large enough scale is very similar to the flow of natural fluids.”

This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality.

We tried to take advantage of Richard’s talent for clarity by getting him to critique the technical presentations that we made in our product introductions. Before the commercial announcement of the Connection Machine CM-1 and all of our future products, Richard would give a sentence-by-sentence critique of the planned presentation. “Don’t say ‘reflected acoustic wave.’ Say echo.” Or, “Forget all that ‘local minima’ stuff. Just say there’s a bubble caught in the crystal and you have to shake it out.” Nothing made him angrier than making something simple sound complicated.

Getting Richard to give advice like that was sometimes tricky. He pretended not to like working on any problem that was outside his claimed area of expertise. Often, at Thinking Machines when he was asked for advice he would gruffly refuse with “That’s not my department.” I could never figure out just what his department was, but it did not matter anyway, since he spent most of his time working on those “not-my-department” problems. Sometimes he really would give up, but more often than not he would come back a few days after his refusal and remark, “I’ve been thinking about what you asked the other day and it seems to me…” This worked best if you were careful not to expect it.

I do not mean to imply that Richard was hesitant to do the “dirty work.” In fact, he was always volunteering for it. Many a visitor at Thinking Machines was shocked to see that we had a Nobel Laureate soldering circuit boards or painting walls. But what Richard hated, or at least pretended to hate, was being asked to give advice. So why were people always asking him for it? Because even when Richard didn’t understand, he always seemed to understand better than the rest of us. And whatever he understood, he could make others understand as well. Richard made people feel like a child does, when a grown-up first treats him as an adult. He was never afraid of telling the truth, and however foolish your question was, he never made you feel like a fool.

The charming side of Richard helped people forgive him for his uncharming characteristics. For example, in many ways Richard was a sexist. Whenever it came time for his daily bowl of soup he would look around for the nearest “girl” and ask if she would fetch it to him. It did not matter if she was the cook, an engineer, or the president of the company. I once asked a female engineer who had just been a victim of this if it bothered her. “Yes, it really annoys me,” she said. “On the other hand, he is the only one who ever explained quantum mechanics to me as if I could understand it.” That was the essence of Richard’s charm.

A Kind Of Game

Richard worked at the company on and off for the next five years. Floating point hardware was eventually added to the machine, and as the machine and its successors went into commercial production, they were being used more and more for the kind of numerical simulation problems that Richard had pioneered with his QCD program. Richard’s interest shifted from the construction of the machine to its applications. As it turned out, building a big computer is a good excuse to talk to people who are working on some of the most exciting problems in science. We started working with physicists, astronomers, geologists, biologists, chemists — every one of them trying to solve some problem that it had never been possible to solve before. Figuring out how to do these calculations on a parallel machine requires understanding of the details of the application, which was exactly the kind of thing that Richard loved to do.

For Richard, figuring out these problems was a kind of a game. He always started by asking very basic questions like, “What is the simplest example?” or “How can you tell if the answer is right?” He asked questions until he reduced the problem to some essential puzzle that he thought he would be able to solve. Then he would set to work, scribbling on a pad of paper and staring at the results. While he was in the middle of this kind of puzzle solving he was impossible to interrupt. “Don’t bug me. I’m busy,” he would say without even looking up. Eventually he would either decide the problem was too hard (in which case he lost interest), or he would find a solution (in which case he spent the next day or two explaining it to anyone who listened). In this way he worked on problems in database searches, geophysical modeling, protein folding, analyzing images, and reading insurance forms.

The last project that I worked on with Richard was in simulated evolution. I had written a program that simulated the evolution of populations of sexually reproducing creatures over hundreds of thousands of generations. The results were surprising in that the fitness of the population made progress in sudden leaps rather than by the expected steady improvement. The fossil record shows some evidence that real biological evolution might also exhibit such “punctuated equilibrium,” so Richard and I decided to look more closely at why it happened. He was feeling ill by that time, so I went out and spent the week with him in Pasadena, and we worked out a model of evolution of finite populations based on the Fokker-Planck equations. When I got back to Boston I went to the library and discovered a book by Kimura on the subject, and much to my disappointment, all of our “discoveries” were covered in the first few pages. When I called back and told Richard what I had found, he was elated. “Hey, we got it right!” he said. “Not bad for amateurs.”

In retrospect I realize that in almost everything that we worked on together, we were both amateurs. In digital physics, neural networks, even parallel computing, we never really knew what we were doing. But the things that we studied were so new that no one else knew exactly what they were doing either. It was amateurs who made the progress.

Telling The Good Stuff You Know

Actually, I doubt that it was “progress” that most interested Richard. He was always searching for patterns, for connections, for a new way of looking at something, but I suspect his motivation was not so much to understand the world as it was to find new ideas to explain. The act of discovery was not complete for him until he had taught it to someone else.

I remember a conversation we had a year or so before his death, walking in the hills above Pasadena. We were exploring an unfamiliar trail and Richard, recovering from a major operation for the cancer, was walking more slowly than usual. He was telling a long and funny story about how he had been reading up on his disease and surprising his doctors by predicting their diagnosis and his chances of survival. I was hearing for the first time how far his cancer had progressed, so the jokes did not seem so funny. He must have noticed my mood, because he suddenly stopped the story and asked, “Hey, what’s the matter?”

I hesitated. “I’m sad because you’re going to die.”

“Yeah,” he sighed, “that bugs me sometimes too. But not so much as you think.” And after a few more steps, “When you get as old as I am, you start to realize that you’ve told most of the good stuff you know to other people anyway.”

We walked along in silence for a few minutes. Then we came to a place where another trail crossed and Richard stopped to look around at the surroundings. Suddenly a grin lit up his face. “Hey,” he said, all trace of sadness forgotten, “I bet I can show you a better way home.”

And so he did.

The 10,000-Year Genealogy of Myths

Posted on February 8th, 02017 by Ahmed Kabil
Categories: Clock of the Long Now, Long Term Science, Long Term Thinking, Seminars

The “Shaft Scene” from the Paleolithic cave paintings in Lascaux, France.

ONE OF THE MOST FAMOUS SCENES in the Paleolithic cave paintings in Lascaux, France depicts a confrontation between a man and a bison. The bison appears fixed in place, stabbed by a spear. The man has a bird’s head and is lying prone on the ground. Scholars have long puzzled over the pictograph’s meaning, as the narrative scene it depicts is one of the most complex yet discovered in Paleolithic art.

To understand what is going on in these scenes, some scholars have started to re-examine myths passed down through oral traditions, which some evidence suggests may be far older than previously thought. Myths still hold relevance today by allowing us to frame our actions at a civilizational level as part of a larger story, something we hope to accomplish with the idea of the “Long Now.”

Historian Julien d’Huy recently proposed an intriguing hypothesis [subscription required]: the cave painting of the man and bison could be telling the tale of the Cosmic Hunt, a myth that has surfaced with the same basic story structure in cultures across the world, from the Chukchi of Siberia to the Iroquois of the Northeastern United States. D’Huy uses comparative mythology combined with new computational modeling technologies to reconstruct a version of the myth that predates humans’ migration across the Bering Strait. If d’Huy is correct, the Lascaux painting would be one of the earliest depictions of the myth, dating back an estimated 20,000 years.

The Greek telling of the Cosmic Hunt is likely most familiar to today’s audiences. It recounts how the Gods transformed the chaste and beautiful Callisto into a bear, and later, into the constellation Ursa Major. D’Huy suggests that in the Lascaux painting, the bison isn’t fixed in place because it has been killed, as many experts have proposed, but because it is a constellation.

Comparative mythologists have spilled much ink over how myths like the Cosmic Hunt can recur in civilizations separated by thousands of miles and thousands of years with many aspects of their stories intact. D’Huy’s analysis is based on the work of anthropologist Claude Levi-Strauss, who posited that these myths are similar because they have a common origin. Levi-Strauss traced the evolution of myths by applying the same techniques that linguists used to trace the evolution of words. D’Huy provides new evidence for this approach by borrowing recently developed computational statistical tools from evolutionary biology. The method, called phylogenetic analysis, constructs a family tree of a myth’s discrete elements, or “mythemes,” and its evolution over time:

Mythical stories are excellent targets for such analysis because, like biological species, they evolve gradually, with new parts of a core story added and others lost over time as it spreads from region to region.  […] Like genes, mythemes are heritable characteristics of “species” of stories, which pass from one generation to the next and change slowly.
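The flavor of the method can be sketched with a toy example: treat each regional version of a myth as a set of mythemes and cluster the versions that share the most. The mytheme sets below are invented for illustration (they are not d’Huy’s data), and real analyses use far more sophisticated tree-building than this single greedy merge step:

```python
def jaccard_distance(a: set, b: set) -> float:
    """Two versions that share fewer mythemes sit farther apart on the tree."""
    return 1 - len(a & b) / len(a | b)

# Hypothetical mytheme sets for four regional versions (illustrative only).
versions = {
    "Greek":    {"hunt", "animal_is_bear", "becomes_constellation"},
    "Iroquois": {"hunt", "animal_is_bear", "becomes_constellation", "three_hunters"},
    "Chukchi":  {"hunt", "animal_is_elk", "becomes_constellation"},
    "Lascaux":  {"hunt", "animal_is_bison", "becomes_constellation"},
}

def closest_pair(groups):
    """One agglomeration step: find the two most similar versions.
    Repeating this, merging as you go, builds the family tree bottom-up."""
    pairs = [(jaccard_distance(groups[x], groups[y]), x, y)
             for x in groups for y in groups if x < y]
    return min(pairs)
```

In this made-up data the Greek and Iroquois versions merge first, and the branch points of the resulting tree are what get read as historical splits in the myth’s transmission.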

A phylogenetic tree of the Cosmic Hunt shows its evolution over time

This new evidence suggests that the Cosmic Hunt has followed the migration of humans across the world. The Cosmic Hunt’s phylogenetic tree shows that the myth arrived in the Americas at different times over the course of several millennia:

One branch of the tree connects Greek and Algonquin versions of the myth. Another branch indicates passage through the Bering Strait, which then continued into Eskimo country and to the northeastern Americas, possibly in two different waves. Other branches suggest that some versions of the myth spread later than the others from Asia toward Africa and the Americas.

Myths may evolve gradually like biological species, but can also be subject to the same sudden bursts of evolutionary change, or punctuated equilibrium. Two structurally similar myths can diverge rapidly, d’Huy found, because of “migration bottlenecks, challenges from rival populations, or new environmental and cultural inputs.”

Neil Gaiman

Neil Gaiman, in his talk “How Stories Last” at Long Now in 02015, imagined stories in similarly biological terms—as living things that evolve over time and across mediums. The ones that persist are the ones that outcompete other stories by changing:

Do stories grow? Pretty obviously — anybody who has ever heard a joke being passed on from one person to another knows that they can grow, they can change. Can stories reproduce? Well, yes. Not spontaneously, obviously — they tend to need people as vectors. We are the media in which they reproduce; we are their petri dishes… Stories grow, sometimes they shrink. And they reproduce — they inspire other stories. And, of course, if they do not change, stories die.

Throughout human history, myths functioned to transmit important cultural information from generation to generation about shared beliefs and knowledge. “They teach us how the world is put together,” said Gaiman, “and the rules of living in the world.” If the information isn’t clothed in a compelling narrative garb—a tale of unrequited love, say, or a cunning escape from powerful monsters— the story won’t last, and the shared knowledge dies along with it. The stories that last “come in an attractive enough package that we take pleasure from them and want them to propagate,” said Gaiman.

Sometimes, these stories serve as warnings to future generations about calamitous events. Along Australia’s south coast, a myth persists in an aboriginal community about an enraged ancestor called Ngurunderi who chased his wives on foot to what is today known as Kangaroo Island. In his anger, Ngurunderi made the sea levels rise and turned his wives into rocks.

Kangaroo Island, Australia

Linguist Nicholas Reid and geologist Patrick Nunn believe this myth refers to a shift in sea levels that occurred thousands of years ago. By reconstructing prehistoric sea levels, Reid and Nunn dated the myth to between 9,800 and 10,650 years ago, when a post-glacial event caused sea levels to rise 100 feet and submerged the land bridge to Kangaroo Island.

“It’s quite gobsmacking to think that a story could be told for 10,000 years,” Reid said. “It’s almost unimaginable that people would transmit stories about things like islands that are currently underwater accurately across 400 generations.”
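Reid's "400 generations" figure is easy to sanity-check. Assuming roughly 25 years per generation (our assumption; the article doesn't state the figure used):

```python
# Sanity check of the "400 generations" figure: a 10,000-year-old story,
# at ~25 years per generation (an assumed average), passes through about
# 400 parent-to-child retellings.
years = 10_000
years_per_generation = 25
print(years // years_per_generation)  # 400
```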

Gaiman thinks that this process of transmitting stories is what fundamentally allows humanity to advance:

Without the mass of human knowledge accumulated over millennia to buoy us up, we are in big trouble; with it, we are warm, fed, we have popcorn, we are sitting in comfortable seats, and we are capable of arguing with each other about really stupid things on the internet.

Atlantic national correspondent James Fallows, in his talk “Civilization’s Infrastructure” at Long Now in 02015, said such stories remain essential today. In Fallows’ view, effective infrastructure is what enables civilizations to thrive. Some of America’s most ambitious infrastructure projects, such as the expansion of railroads across the continent, or landing on the moon, were spurred by stories like Manifest Destiny and the Space Race. Such myths inspired Americans to look past their own immediate financial interests and time horizons to commit to something beyond themselves. They fostered, in short, long-term thinking.

James Fallows, left, speaking with Stewart Brand at Long Now

For Fallows, the reason Americans haven’t taken on grand and necessary projects of infrastructural renewal in recent times is that they struggle to take the long view. In Fallows’ eyes, there’s a lot to be optimistic about, and a great story to be told:

The story is an America that is not in its final throes, but is going through the latest version in its reinvention in which all the things that are dire now can be, if not solved, addressed and buffered by individual talents across the country but also by the exceptional tools that the tech industry is creating. There’s a different story we can tell which includes the bad parts but also —as most of our political discussion does not—includes the promising things that are beginning too.

A view of the underground site of The Clock looking up at the spiral stairs currently being cut

When Danny Hillis proposed building a 10,000-year clock, he wanted to create a myth that would stand the test of time. Writing in 01998, Long Now co-founder Stewart Brand noted the trend of short-term thinking taking hold in civilization, and proposed the myth of the Clock of the Long Now:

Civilization is revving itself into a pathologically short attention span. The trend might be coming from the acceleration of technology, the short-horizon perspective of market-driven economics, the next-election perspective of democracies, or the distractions of personal multi-tasking. All are on the increase. Some sort of balancing corrective to the short-sightedness is needed—some mechanism or myth which encourages the long view and the taking of long-term responsibility, where ‘long-term’ is measured at least in centuries. Long Now proposes both a mechanism and a myth.

Long Business: A Family’s Secret to a Millennium of Sake-Making

Posted on February 7th, 02017 by Ahmed Kabil
link   Categories: Long Term Thinking, The Interval   chat 0 Comments

The Sudo family has been making sake for almost 900 years in Japan’s oldest brewery. Genuemon Sudo, who is the 55th generation of his family to carry on the tradition, said that at the root of Sudo’s longevity is a commitment to protecting the natural environment:

Sake is made from rice. Good rice comes from good soil. Good soil comes from fresh and high-quality water. Such water comes from protecting our trees. Protecting the natural environment makes excellent sake.

The natural environment of the Sudo brewery was tested as never before during the 02011 earthquake and subsequent nuclear meltdown. The ancient trees surrounding the brewery absorbed the quake’s impact, saving it from destruction. The water in the wells, which the Sudo family feared had been poisoned by radiation, was deemed safe after analysis for radioactivity.

Damaged by the quake but not undone, the Sudo brewery continues a family tradition almost a millennium in the making, with the trees, as Genuemon Sudo put it, “supporting us every step of the way.”

In looking at the list of the world’s longest-lived institutions, it is hard to ignore that many of them provide tangible goods to people, such as a room to sleep in or a libation to drink. Studying places like the Sudo brewery was part of the inspiration for creating The Interval, our own space designed to inspire long-term thinking.

Edge Question 02017

Posted on January 20th, 02017 by Ahmed Kabil
link   Categories: Long Term Science, Long Term Thinking, Technology   chat 0 Comments

Spiders 2013 by Katinka Matson

It’s been an annual tradition since 01998: with a new year comes a new Edge question.

Every January, John Brockman presents the members of his online salon with a question that elicits discussion about some of the biggest intellectual and scientific issues of our time. Previous iterations have included prompts such as “What should we be worried about?” or “What do you think about machines that think?” The essay responses – in excess of a hundred each year – offer a wealth of insight into the direction of today’s cultural forces, scientific innovations, and global trends.

This year, Brockman asks:

What scientific term or concept ought to be more widely known?

The extensive collection of answers includes contributions by several Long Now Board members, fellows, and past (and future!) SALT speakers:

George Dyson, who spoke at Long Now in 02013, says the Reynolds Number from fluid dynamics can be applied to non-traditional domains to understand why things might go smoothly for a while, and then all of a sudden don’t.
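The formula behind Dyson's example is simple: the Reynolds number is the ratio of inertial to viscous forces, Re = ρvL/μ. Below Re of a few thousand, flow through a pipe is smooth; past that, it abruptly turns turbulent — the "smooth until suddenly not" behavior Dyson generalizes. A quick sketch, with illustrative fluid values chosen by us rather than taken from the essay:

```python
# Reynolds number Re = rho * v * L / mu: the ratio of inertial to viscous
# forces in a flow. Low Re means smooth (laminar) flow; past a critical
# value (~4000 for pipe flow) it abruptly becomes turbulent.

def reynolds_number(density, velocity, length, viscosity):
    """density [kg/m^3], velocity [m/s], characteristic length [m],
    dynamic viscosity [Pa*s]."""
    return density * velocity * length / viscosity

# Illustrative case: water (rho ~ 1000 kg/m^3, mu ~ 1.0e-3 Pa*s)
# moving at 1 m/s through a 5 cm pipe.
re = reynolds_number(1000.0, 1.0, 0.05, 1.0e-3)
print(f"Re = {re:.0f}")  # well above ~4000, so turbulent
```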

Long Now Board Member Stewart Brand says genetic rescue can help threatened wildlife populations by restoring genetic diversity.

Priyamvada Natarajan, who spoke at Long Now in 02016, describes how the bending of light, or gravitational lensing, is a consequence of Einstein’s re-conceptualization of gravity in his theory of relativity.

Samuel Arbesman, who spoke at the Interval in 02016, says “magical” self-replicating computer programs known as quines underscore the limits of mathematics and computer science while demonstrating that reproduction isn’t limited to the domain of the biological.
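The quines Arbesman describes are easy to demonstrate. Here is a minimal Python quine (a standard construction, not from the Edge essay): a template string formatted with its own repr, so the program's output reproduces its own two code lines exactly.

```python
# A minimal quine: a program whose output is its own source code.
# The %r conversion inserts the string's own repr, and %% yields a
# literal %, so printing the formatted string reproduces both lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Feed the output back into Python and it prints itself again, indefinitely — reproduction without biology, as Arbesman notes.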

Michael Shermer, who spoke at Long Now in 02015, says the very human tendency to be “preternaturally pessimistic” has an evolutionary basis. Negativity bias, which can be observed across all domains of life, is a holdover from an evolutionary past where existence was more dangerous, so over-reacting to threats offered more of a pay-off than under-reacting.

Long Now Board Member Brian Eno sets his sights on confirmation bias after a particularly divisive election season playing out on social media revealed that more information does not necessarily equal better decisions.

George Church of Long Now’s Revive and Restore says that while DNA may be one of the most widely known scientific terms, far too few people understand the DNA in their own bodies. With DNA tests costing as little as $499, Church says there’s no reason not to get your DNA tested, especially when it could enable preventative measures against genetic diseases.

Brian Christian, who spoke at Long Now in 02016, argues that human culture progresses via the retention of youthful traits into adulthood, a process known as neoteny.

Long Now Board Member Kevin Kelly argues that the best way to steer clear of failure is by letting go of success once it is achieved, thereby avoiding premature optimization.

Seth Lloyd, who spoke at Long Now in 02016, explains the accelerating spread of digital information using a centuries-old scientific concept from classical mechanics called the virial theorem.

Long Now Board Member Danny Hillis unpacks impedance matching, or adding elements to a system so that it accepts energy more efficiently. He predicts a future where impedance matching could help cool the earth by adding tiny particles of dust to our stratosphere that would reflect away the sun’s infrared waves.

Steven Pinker, who spoke at Long Now in 02012, argues that the meaning of life and human purpose lies in the second law of thermodynamics. Pinker believes our deeply ingrained habit of under-appreciating the universe’s tendency towards disorder is “a major source of human folly.”

Long Now Board Member Paul Saffo says that at the heart of today’s biggest challenges, from sustaining mega-cities to overpopulation to information overload, are hidden laws of scale described by Haldane’s Rule of the Right Size.

Martin Rees, who spoke at Long Now in 02010, says we may be living in a multiverse.

These are just a few of this year’s thought-provoking answers; you can read the full collection here.

Jennifer Pahlka Seminar Tickets

Posted on January 11th, 02017 by Andrew Warner
link   Categories: Announcements, Seminars   chat 1 Comment

 

The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Jennifer Pahlka presents Fixing Government: Bottom Up and Outside In


TICKETS

Wednesday February 1, 02017 at 7:30pm SFJAZZ Center

Long Now Members can reserve 2 seats; join today! General Tickets $15

 

About this Seminar:

Code for America was founded in 02009 by Jennifer Pahlka “to make government work better for the people and by the people in the 21st century.”  

The organization started a movement to modernize government for a digital age which has now spread from cities to counties to states, and now, most visibly, to the federal government, where Jennifer served at the White House as US Deputy Chief Technology Officer.  There she helped start the United States Digital Service, known as “Obama’s stealth startup.”

Now that thousands of people from “metaphysical Silicon Valley” are working for and with government, what have we learned?  Can government actually be fixed to serve citizens better—especially the neediest?  Why does change in government happen so slowly?

Before founding Code for America, Jennifer Pahlka co-created the Web 2.0 and Gov. 2.0 conferences, building on her prior experience organizing computer game developer conferences. She continues to serve as executive director of Code for America, which is based in San Francisco.

The Future Will Have to Wait

Posted on January 6th, 02017 by Alexander Rose - Twitter: @zander
link   Categories: Clock of the Long Now, Futures   chat 3 Comments

Eleven years ago this month, Pulitzer Prize winning author Michael Chabon published an article in Details Magazine about Long Now and the Clock.  It continues to be one of the best and most poignant pieces written to date…


The Future Will Have to Wait

Written by Michael Chabon for Details in January of 02006

I was reading, in a recent issue of Discover, about the Clock of the Long Now. Have you heard of this thing? It is going to be a kind of gigantic mechanical computer, slow, simple and ingenious, marking the hour, the day, the year, the century, the millennium, and the precession of the equinoxes, with a huge orrery to keep track of the immense ticking of the six naked-eye planets on their great orbital mainspring. The Clock of the Long Now will stand sixty feet tall, cost tens of millions of dollars, and when completed its designers and supporters, among them visionary engineer Danny Hillis, a pioneer in the concept of massively parallel processing; Whole Earth mahatma Stewart Brand; and British composer Brian Eno (one of my household gods), plan to hide it in a cave in the Great Basin National Park in Nevada [now in West Texas], a day’s hard walking from anywhere. Oh, and it’s going to run for ten thousand years. That is about as long a span as separates us from the first makers of pottery, which is among the oldest technologies we have. Ten thousand years is twice as old as the pyramid of Cheops, twice as old as that mummified body found preserved in the Swiss Alps, which is one of the oldest mummies ever discovered. The Clock of the Long Now is being designed to thrive under regular human maintenance along the whole of that long span, though during periods when no one is around to tune it, the giant clock will contrive to adjust itself. But even if the Clock of the Long Now fails to last ten thousand years, even if it breaks down after half or a quarter or a tenth that span, this mad contraption will already have long since fulfilled its purpose. Indeed the Clock may have accomplished its greatest task before it is ever finished, perhaps without ever being built at all. The point of the Clock of the Long Now is not to measure out the passage, into their unknown future, of the race of creatures that built it. 
The point of the Clock is to revive and restore the whole idea of the Future, to get us thinking about the Future again, to the degree if not in quite the same way that we used to do, and to reintroduce the notion that we don’t just bequeath the future—though we do, whether we think about it or not. We also, in the very broadest sense of the first person plural pronoun, inherit it.

The Sex Pistols, strictly speaking, were right: there is no future, for you or for me. The future, by definition, does not exist. “The Future,” whether you capitalize it or not, is always just an idea, a proposal, a scenario, a sketch for a mad contraption that may or may not work. “The Future” is a story we tell, a narrative of hope, dread or wonder. And it’s a story that, for a while now, we’ve been pretty much living without.

Ten thousand years from now: can you imagine that day? Okay, but do you? Do you believe “the Future” is going to happen? If the Clock works the way that it’s supposed to do—if it lasts—do you believe there will be a human being around to witness, let alone mourn its passing, to appreciate its accomplishment, its faithfulness, its immense antiquity? What about five thousand years from now, or even five hundred? Can you extend the horizon of your expectations for our world, for our complex of civilizations and cultures, beyond the lifetime of your own children, of the next two or three generations? Can you even imagine the survival of the world beyond the present presidential administration?

I was surprised, when I read about the Clock of the Long Now, at just how long it had been since I had given any thought to the state of the world ten thousand years hence. At one time I was a frequent visitor to that imaginary mental locale. And I don’t mean merely that I regularly encountered “the Future” in the pages of science fiction novels or comic books, or when watching a TV show like The Jetsons (1962) or a movie like Beneath the Planet of the Apes (1970). The story of the Future was told to me, when I was growing up, not just by popular art and media but by public and domestic architecture, industrial design, school textbooks, theme parks, and by public institutions from museums to government agencies. I heard the story of the Future when I looked at the space-ranger profile of the Studebaker Avanti, at Tomorrowland through the portholes of the Disneyland monorail, in the tumbling plastic counters of my father’s Seth Thomas Speed Read clock. I can remember writing a report in sixth grade on hydroponics; if you had tried to tell me then that by 2005 we would still be growing our vegetables in dirt, you would have broken my heart.

Even thirty years after its purest expression on the covers of pulp magazines like Amazing Stories and, supremely, at the New York World’s Fair of 1939, the collective cultural narrative of the Future remained largely an optimistic one of the impending blessings of technology and the benevolent, computer-assisted meritocracy of Donald Fagen’s “fellows with compassion and vision.” But by the early seventies—indeed from early in the history of the Future—it was not all farms under the sea and family vacations on Titan. Sometimes the Future could be a total downer. If nuclear holocaust didn’t wipe everything out, then humanity would be enslaved to computers, by the ineluctable syllogisms of “the Machine.” My childhood dished up a series of grim cinematic prognostications best exemplified by the Hestonian trilogy that began with the first Planet of the Apes (1968) and continued through The Omega Man (1971) and Soylent Green (1973). Images of future dystopia were rife in rock albums of the day, as on David Bowie’s Diamond Dogs (1974) and Rush’s 2112 (1976), and the futures presented by seventies writers of science fiction such as John Brunner tended to be unremittingly or wryly bleak.

In the aggregate, then, stories of the Future presented an enchanting ambiguity. The other side of the marvelous Jetsons future might be a story of worldwide corporate-authoritarian technotyranny, but the other side of a post-apocalyptic mutational nightmare landscape like that depicted in The Omega Man was a landscape of semi-barbaric splendor and unfettered (if dangerous) freedom to roam, such as I found in the pages of Jack Kirby’s classic adventure comic book Kamandi, The Last Boy on Earth (1972-76). That ambiguity and its enchantment, the shifting tension between the bright promise and the bleak menace of the Future, was in itself a kind of story about the ways, however freakish or tragic, in which humanity (and by implication American culture and its values) would, in spite of it all, continue. Eed plebnista, intoned the devolved Yankees, in the Star Trek episode “The Omega Glory,” who had somehow managed to hold on to and venerate as sacred gobbledygook the Preamble to the Constitution, norkon forden perfectunun. All they needed was a Captain Kirk to come and add a little interpretive water to the freeze-dried document, and the American way of life would flourish again.

I don’t know what happened to the Future. It’s as if we lost our ability, or our will, to envision anything beyond the next hundred years or so, as if we lacked the fundamental faith that there will in fact be any future at all beyond that not-too-distant date. Or maybe we stopped talking about the Future around the time that, with its microchips and its twenty-four-hour news cycles, it arrived. Some days when you pick up the newspaper it seems to have been co-written by J. G. Ballard, Isaac Asimov, and Philip K. Dick. Human sexual reproduction without male genetic material, digital viruses, identity theft, robot firefighters and minesweepers, weather control, pharmaceutical mood engineering, rapid species extinction, US Presidents controlled by little boxes mounted between their shoulder blades, air-conditioned empires in the Arabian desert, transnational corporatocracy, reality television—some days it feels as if the imagined future of the mid-twentieth century was a kind of checklist, one from which we have been too busy ticking off items to bother with extending it. Meanwhile, the dwindling number of items remaining on that list—interplanetary colonization, sentient computers, quasi-immortality of consciousness through brain-download or transplant, a global government (fascist or enlightened)—have been represented and re-represented so many hundreds of times in films, novels and on television that they have come to seem, paradoxically, already attained, already known, lived with, and left behind. Past, in other words.

This is the paradox that lies at the heart of our loss of belief or interest in the Future, which has in turn produced a collective cultural failure to imagine that future, any Future, beyond the rim of a couple of centuries. The Future was represented so often and for so long, in the terms and characteristic styles of so many historical periods from, say, Jules Verne forward, that at some point the idea of the Future—along with the cultural appetite for it—came itself to feel like something historical, outmoded, no longer viable or attainable.

If you ask my eight-year-old about the Future, he pretty much thinks the world is going to end, and that’s it. Most likely global warming, he says—floods, storms, desertification—but the possibility of viral pandemic, meteor impact, or some kind of nuclear exchange is not alien to his view of the days to come. Maybe not tomorrow, or a year from now. The kid is more than capable of generating a full head of optimistic steam about next week, next vacation, his tenth birthday. It’s only the world a hundred years on that leaves his hopes a blank. My son seems to take the end of everything, of all human endeavor and creation, for granted. He sees himself as living on the last page, if not in the last paragraph, of a long, strange and bewildering book. If you had told me, when I was eight, that a little kid of the future would feel that way—and that what’s more, he would see a certain justice in our eventual extinction, would think the world was better off without human beings in it—that would have been even worse than hearing that in 2006 there are no hydroponic megafarms, no human colonies on Mars, no personal jetpacks for everyone. That would truly have broken my heart.

When I told my son about the Clock of the Long Now, he listened very carefully, and we looked at the pictures on the Long Now Foundation’s website. “Will there really be people then, Dad?” he said. “Yes,” I told him without hesitation, “there will.” I don’t know if that’s true, any more than do Danny Hillis and his colleagues, with the beating clocks of their hopefulness and the orreries of their imaginations. But in having children—in engendering them, in loving them, in teaching them to love and care about the world—parents are betting, whether they know it or not, on the Clock of the Long Now. They are betting on their children, and their children after them, and theirs beyond them, all the way down the line from now to 12,006. If you don’t believe in the Future, unreservedly and dreamingly, if you aren’t willing to bet that somebody will be there to cry when the Clock finally, ten thousand years from now, runs down, then I don’t see how you can have children. If you have children, I don’t see how you can fail to do everything in your power to ensure that you win your bet, and that they, and their grandchildren, and their grandchildren’s grandchildren, will inherit a world whose perfection can never be accomplished by creatures whose imagination for perfecting it is limitless and free. And I don’t see how anybody can force me to pay up on my bet if I turn out, in the end, to be wrong.

Steven Johnson Seminar Tickets

Posted on December 12th, 02016 by Andrew Warner
link   Categories: Announcements, Seminars   chat 0 Comments

 

The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Steven Johnson presents Wonderland: How Play Made the Modern World


TICKETS

Wednesday January 4, 02017 at 7:30pm SFJAZZ Center

Long Now Members can reserve 2 seats; join today! General Tickets $15

 

About this Seminar:

Steven Johnson is a writer and co-creator of the PBS series How We Got To Now. His latest project is the book and podcast Wonderland: How Play Made the Modern World.

Lost Landscapes of San Francisco, 11 Seminar Tickets

Posted on November 15th, 02016 by Andrew Warner
link   Categories: Announcements, Seminars   chat 1 Comment

 

The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Rick Prelinger presents Lost Landscapes of San Francisco, 11


TICKETS

Tuesday December 6th & Wednesday December 7, 02016 at 7:30pm Castro Theater

Long Now Members can reserve 2 seats; join today! General Tickets $20

 

About this Seminar:

Our annual Lost Landscapes of San Francisco show with Rick Prelinger will run for two nights this year, and a portion of the proceeds will go directly to support the Prelinger Library!

Members can reserve tickets on either Tuesday December 6 or Wednesday December 7, 02016; the show is at 7:30pm at the Castro Theater (doors are at 6:30pm) on both nights.

Tuesday 12/6/16: On Tuesday night, we’ll be having our Long Now Winter Party for members and their guests after the show on the Mezzanine of the Castro Theater, with wine, beer and holiday snacks. Please join us to celebrate another year of thought-provoking Seminars and other Long Now achievements!

Wednesday 12/7/16: The Wednesday showing will also feature a presentation on the Prelinger Library. On both evenings you can purchase $50 Prelinger Library Patron Tickets, which include reserved seating in the theater. 100% of proceeds from the sale of these tickets go directly to the library!

This is the eleventh year of Lost Landscapes of San Francisco, the annual archival film program that celebrates San Francisco’s past and looks towards its future.

This year’s program features new scenes of San Franciscans working, playing, marching and partying during the Great Depression; unseen footage of Seals Stadium and the Cow Palace in the late 1930s; the reconstruction of Market Street and Embarcadero Plaza in the 1970s; rare footage of southeastern San Francisco and the Hunters Point drydock; the 1975 Gay Freedom Day parade; a 1940s-era ode to our fog; many more newly discovered gems; and greatest hits from past programs.

As always, the audience makes the soundtrack at the glorious Castro Theatre! Come prepared to identify places, people and events; to ask questions; and to engage in spirited real-time repartee with fellow audience members.

Part of your Lost Landscapes ticket price this year benefits Prelinger Library, San Francisco’s famed experimental research library that supports artists, historians, community members, and researchers of all kinds. Your purchase of a Patron Ticket directly benefits the library.

Founded by Megan & Rick Prelinger in 02004, the Library contains over 60,000 books, periodicals, maps and ephemeral print items available for research and reuse. Prelinger Library is a community-supported resource open to the public, keeping regular hours in the South of Market neighborhood; details and hours at http://www.prelingerlibrary.org.

Douglas Coupland Seminar Tickets

Posted on October 19th, 02016 by Andrew Warner
link   Categories: Announcements, Seminars   chat 0 Comments

 

The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Douglas Coupland presents The Extreme Present


TICKETS

Tuesday November 1, 02016 at 7:30pm SFJAZZ Center

Long Now Members can reserve 2 seats; join today! General Tickets $15

 

About this Seminar:

Douglas Coupland has done so much more than name a generation (“Generation X”—post-Boomer, pre-Millennial, from his novel of that name). He is a prolific writer (22 books, including nonfiction such as his biography of Marshall McLuhan) and a brilliant visual artist with installations at a variety of museums and public sites. His 1995 novel Microserfs nailed the contrast between corporate and startup cultures in software and Web design.

Coupland is fascinated by time. For Long Now he plans to deploy ideas and graphics “all dealing on some level with time and how we perceive it, how we used to perceive it, and where our perception of it may be going.” A time series about time.

Long Now Member Discount for “The Next Billion” Conference, Thursday October 13th

Posted on October 11th, 02016 by Andrew Warner
link   Categories: Events   chat 0 Comments

On Thursday October 13th at the SFJAZZ Center, the digital news outlet Quartz is producing a one-day conference called “The Next Billion,” and has offered Long Now Members a 40% discount.

The Next Billion is a metaphor for the future of the internet — mobile, global, exponential growth in emerging markets, as well as the growth of next level tech in more mature markets. At The Next Billion conference, they’ll explore how networked innovation in every sector is transforming business, society and opportunity across the globe.

If you are interested in purchasing a ticket and would like the discount code, please write into membership@longnow.org with your member number and we’ll be happy to help you.

