Blog Archive for the ‘Technology’ Category


The Economist hosts online debate about the future of driverless cars

Posted on Thursday, May 2nd, 02013 by Austin Brown
Categories: Technology


The Economist asks:

Are completely self-driving cars feasible in the foreseeable future?

Debating this proposition are Long Now board member Paul Saffo and automotive R&D executive Andrew Bergbaum. Saffo is “for the proposition” – he argues that self-driving cars will be commercially available, and maybe even common, by 02030 – while Bergbaum is against, arguing that legislation and existing business models will hamper roll-out of the technology.

The debate is ongoing on The Economist’s site: opening arguments were posted on Tuesday 4/30, rebuttals will go up Friday 5/3 and closing remarks are to be made on Wednesday 5/8.

Readers are encouraged to participate as well, in the comments or by voting. Results of the vote are tallied daily, and over the first three days there is a clear trend in the direction of Paul Saffo’s stance. Will it hold once the rebuttals are posted? Check in over the next week to see.

A Voice From the Past

Posted on Monday, April 29th, 02013 by Charlotte Hajer
Categories: Technology


If you’ve ever wondered what the inventor of the telephone might have sounded like from the other end of a landline, you may finally have your answer: researchers have discovered and managed to restore a brief recording of Alexander Graham Bell’s own voice.

Famously – if controversially – credited with patenting the acoustic telegraph, Bell (01847-01922) dedicated his life to the science of creating, recording, and transmitting sound waves. He co-founded the Volta Laboratory in Washington, DC, where his team experimented with magnetic sound recording and worked on improvements to Edison’s phonograph. Smithsonian Magazine writes:

Inside the lab, Bell and his associates bent over their pioneering audio apparatus, testing the potential of a variety of materials, including metal, wax, glass, paper, plaster, foil and cardboard, for recording sound, and then listening to what they had embedded on discs or cylinders.

The laboratory produced a considerable collection of recordings, and kept meticulous records of its proceedings. In the late 19th century, Bell donated a large portion of this collection to the Smithsonian Institution, where it has been carefully preserved. Sadly, however, Bell’s documentation included little information about the instruments he used to play his own recordings, and so the passing of time and the evolution of technology reduced his discs to “mute artifacts.”

So when researchers at the Smithsonian discovered a piece of paper in a collection of the earliest audio recordings ever made that transcribed an 1885 recording ostensibly made by Bell, then matched that to an actual wax-on-cardboard disc sporting the initials “AGB” and the same date, April 15, 1885, they couldn’t just drop it into an old-school player and crank away. (Time)

Now, researchers at the Lawrence Berkeley National Laboratory have managed to bridge this technological divide: they have developed a way to “extract” sound from 19th century recordings. A high-resolution 3D camera allows them to create a topographical map of an audio disc without damaging its surface; a computer can then convert this map into sound waves. Using this technology, the Library of Congress brought the 1885 recording back to life, and found, indeed, a snippet of A.G. Bell’s own voice.

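The core idea of the optical approach can be sketched in a few lines: treat the scanned groove as a height profile and turn displacement into audio samples. The toy version below is an illustration of that principle only, not the Berkeley lab’s actual pipeline – a real system first recovers the virtual stylus path from the full 3D surface map.

```python
import math
import wave
import array

def groove_to_pcm(heights):
    """Turn a groove-depth profile (one height per scan point) into
    normalized 16-bit PCM samples. Here the height profile itself is
    treated directly as the audio signal."""
    mean = sum(heights) / len(heights)
    centered = [h - mean for h in heights]          # remove DC offset
    peak = max(abs(h) for h in centered) or 1.0
    return [int(32767 * h / peak) for h in centered]

# A fake "scan": a 440 Hz undulation etched into the groove depth
RATE = 8000
scan = [math.sin(2 * math.pi * 440 * t / RATE) for t in range(RATE)]
pcm = groove_to_pcm(scan)

# Write one second of reconstructed audio to a WAV file
with wave.open("groove.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(array.array("h", pcm).tobytes())
```

The crucial property, as the article notes, is that the camera never touches the disc: the only physical interaction is light.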

It is “a new invention in the service of invention,” says National Museum of American History curator Carlene Stephens. The Berkeley lab’s new disc-reading technology has succeeded in restoring a piece of its very own origins: it has revived the legacy – and the voice – of a pioneer in the science of audio transmission.

The Digital Public Library of America

Posted on Friday, April 26th, 02013 by Charlotte Hajer
Categories: Long Term Thinking, Technology


A digital library that makes published material available to anyone with an internet connection, free of charge: a realistic possibility, or a utopian fantasy?

Last April, a contributor to the MIT Technology Review questioned whether it could be done: if Google Books had become mired in legal battles over US copyright law, would anyone else be able to figure out how to make published matter publicly available?

But this past week, the Digital Public Library of America celebrated its official launch at a library in Boston. As Harvard University librarian Robert Darnton explains, the Library is a nonprofit “project to make the holdings of America’s research libraries, archives, and museums available to all Americans – and eventually to everyone in the world – online and free of charge.”

Partnering with institutions such as the Smithsonian, the New York Public Library, and ARTstor, the Digital Public Library of America is not a database but a “distributed system of electronic content.” Rather than reinvent the wheel of digitization, it embraces what existing libraries and other organizations have already scanned in, and simply works to bring these resources together on a single, openly accessible, and nation-wide platform.

Unlike Google Books, the DPLA doesn’t hoover up institutions’ documents to be stored on its own servers. Its primary goal is to support coordinated scanning efforts by each of its partner institutions, and to act as a central search engine and metadata repository. Most of these libraries and museums have been slowly scanning and cataloguing their collections for years; the DPLA helps make those materials aggregatable and interoperable. (theverge.com)
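That aggregation model is simple to sketch: partners keep their scans, while the hub normalizes each partner’s metadata into a shared schema and indexes it. The field names and partner formats below are invented for illustration – they are not the DPLA’s actual schema or API.

```python
# Hypothetical partner record formats, normalized into one shared schema.
def normalize(record, source):
    """Map one partner's metadata record onto a common shape."""
    if source == "nypl":
        return {"title": record["title"], "creator": record.get("author"),
                "url": record["digital_url"], "provider": "New York Public Library"}
    if source == "smithsonian":
        return {"title": record["objectTitle"], "creator": record.get("maker"),
                "url": record["onlineMedia"], "provider": "Smithsonian"}
    raise ValueError(f"unknown source: {source}")

# The central repository holds metadata only -- the content stays
# on each institution's own servers, reachable through the URL field.
index = {}

def ingest(records, source):
    for r in records:
        item = normalize(r, source)
        index.setdefault(item["title"].lower(), []).append(item)

ingest([{"title": "Moby-Dick", "author": "Melville",
         "url_note": None, "digital_url": "https://example.org/moby-dick"}],
       "nypl")
```

Searching the shared index then returns pointers back to the partners, which is what makes the DPLA “a distributed system of electronic content” rather than a database.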

In efforts to contribute to a truly universal spread of knowledge, the Digital Public Library of America offers a user-friendly interface and a searchable collection of materials under a Creative Commons license: “We are really fighting for a maximally usable and transferrable knowledge base,” says executive director Dan Cohen. Though the Library will – for the moment – refrain from offering anything that is currently under US copyright protection, part of its mission is to explore alternatives to existing copyright laws. As The Verge explains,

[Cohen] wants to create a platform where academic scholarship, whether in journals or monographs, can be disseminated and preserved in open formats for current and future generations. He wants to find ways for public libraries to engage in collective action with book publishers to make e-books as available as possible to US citizens. He wants the DPLA to explore alternative approaches to copyright that preserve authors’ and publishers’ chief profit window but also maximize a work’s circulation, including the “library license” that would allow public, noncommercial entities (like the DPLA) to have digital access to certain works in copyright after five years, or Knowledge Unlatched, a consortium that purchases in-copyright books for open access. The DPLA also wants to work directly with authors to donate their books to the commons.

Princeton philosopher Peter Singer writes that “scholars have long dreamed of a universal library containing everything that has ever been written.” He calls this a “Library of Utopia” – but agrees with the Digital Public Library of America that a utopia is more than idle fantasy. It is an idea worth striving for; perhaps even a moral imperative.

“If we can put a man on the moon and sequence the human genome, we should be able to devise something close to a universal digital public library. At that point, we will face another moral imperative, one that will be even more difficult to fulfill: expanding internet access beyond the less than 30% of the world’s population that currently has it.”

Our Digital Afterlives

Posted on Monday, April 22nd, 02013 by Charlotte Hajer
Categories: Digital Dark Age, Technology


In 02006, Long Now Board Member David Eagleman wrote in Nature:

There is no afterlife, but a version of us lives on nonetheless.

At the beginning of the computer era, people died with passwords in their heads and no one could access their files. When access to these files was critical, companies could grind to a halt. That’s when programmers invented death switches.

With a death switch, the computer prompts you for your password once a week to make sure you are still alive. When you don’t enter your password for some period of time, the computer deduces you are dead, and your passwords are automatically e-mailed to the second-in-command. Individuals began to use death switches to reveal Swiss bank account numbers to their heirs, to get the last word in an argument, and to confess secrets that were unspeakable during a lifetime.

In other words, a “death switch” is a way for us to pre-program an afterlife for our digital selves. Despite the relatively short lifespan of software platforms, it is likely that the data we post on the internet will live on – somewhere – after we ourselves expire.
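The mechanism Eagleman describes is easy to make concrete. Below is a minimal sketch of a death switch, assuming invented names and intervals; the commented-out step is where a real implementation would actually deliver the message.

```python
import time
from email.message import EmailMessage

CHECK_INTERVAL = 7 * 24 * 3600      # prompt for a check-in once a week
GRACE_PERIOD = 4 * CHECK_INTERVAL   # give up after roughly a month of silence

class DeathSwitch:
    """Toy death switch in the spirit of Eagleman's description."""

    def __init__(self, heir_address, payload):
        self.heir_address = heir_address
        self.payload = payload          # e.g. passwords, a last word, a confession
        self.last_seen = time.time()

    def check_in(self):
        """Owner proves they are still alive (e.g. by entering a password)."""
        self.last_seen = time.time()

    def poll(self, now=None):
        """Run on a schedule; fires the switch once the grace period lapses."""
        now = time.time() if now is None else now
        if now - self.last_seen > GRACE_PERIOD:
            self.fire()
            return True
        return False

    def fire(self):
        msg = EmailMessage()
        msg["To"] = self.heir_address
        msg["Subject"] = "A message from beyond"
        msg.set_content(self.payload)
        # In real use, hand `msg` to an SMTP client here.
        return msg
```

The design question is entirely in the two constants: how often must the living check in, and how long a silence counts as death.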

Eagleman, along with several others, is urging us to think about what will happen to our digital legacy after death: to decide where we want our data to live, and who will have the privilege to engage with it. Do we want to place our legacy in the hands of an heir, or do we want our online presence to be erased? Alternatively, do we perhaps want to designate our own computers as executors of our estate, and have them send out friendly messages to our descendants every once in a while?

Over the past two years, a series of Digital Death Day “unconferences” has brought people together to talk about these kinds of questions. Evan Carroll and John Romano published a book and host an accompanying blog about ways to shape our digital afterlives. And most recently, Google introduced its Inactive Account Manager: a new tool that allows you to decide what will happen to your emails, photo albums, posted videos and personal profiles when your account becomes inactive.

Planning for our digital beyond is a way to save our own lives from receding into a digital dark age – and as such, it may be a way to keep something of ourselves alive after our bodies die. Eagleman muses:

This situation allows us to forever revisit shared jokes, to remedy lost opportunities for a kind word, to recall stories about delightfully earthly experiences that can no longer be felt. Memories now live on their own, and no one forgets them or grows tired of telling them. We are quite satisfied with this arrangement, because reminiscing about our glory days of existence is perhaps all that would have happened in an afterlife anyway.

Danny Hillis: We need a backup internet

Posted on Tuesday, March 19th, 02013 by Austin Brown
Categories: Digital Dark Age, Long Term Thinking, Technology


Speaking at TED earlier this year, Long Now co-founder Danny Hillis described the early days of networked computing – a time when one could register “think.com” on a whim and everyone with an email address or a domain could be listed in a single book.

He explained that the design of the Internet Protocol and the early community using it were infused with communistic values – ironic, he notes, as the tech grew out of Cold War militarism.

Since then, of course, the internet, its users and its uses have expanded far beyond the wildest dreams of its creators. In so doing, it has become an essential societal infrastructure – without having been designed as such. As another Long Now Board Member, David Eagleman, points out, the internet is not invulnerable. Emergency communications and other high-priority services must be possible without the internet, but increasingly depend on it.

Hillis says a separate backup internet would not be hard to build and would dramatically increase our resilience to disaster and malfeasance.

Long Now Board Members at TED 02013

Posted on Friday, March 1st, 02013 by Andrew Warner
Categories: Announcements, Long Term Thinking, Revive & Restore, Technology

This year’s TED conference featured talks by two of Long Now’s board members, Stewart Brand and Danny Hillis. Although the videos will not be published on the TED site until later this year, some attendees graciously summarized and illustrated the talks for the rest of us. The cartoons below come from Fever Pitch, a group of artists that puts information into illustrated form. You can find the rest of their TED illustrations on their Facebook page.

De-Extinction

Stewart’s talk introduces the concept of de-extinction to the TED community. After giving an overview of the technology and previous research, he explains how the newly launched Revive & Restore project is working on bringing back extinct species, starting with the passenger pigeon. Revive & Restore will host TEDxDeExtinction in Washington, DC on March 15th to further explore this project.

[Illustration of Stewart Brand’s talk, by Fever Pitch]

The Internet Needs a Plan B

Danny’s talk calls for the creation of a Plan B in case of internet failure. Michael Copeland of Wired also gives a good summary of the talk’s key points for those who were not physically present.

[Illustration of Danny Hillis’ talk, by Fever Pitch]

Researchers theorize new method of highly precise atomic timekeeping

Posted on Wednesday, February 6th, 02013 by Charlotte Hajer
Categories: Long Term Thinking, Technology

Albert Einstein discovered that time is woven into the fabric of space. Now, Berkeley researcher Holger Müller suggests that time is woven into matter, as well.

Interested in determining the simplest possible way of measuring time, Müller has discovered a way to turn matter into a natural clock.

“When I was very young and reading science books, I always wondered why there was so little explanation of what time is,” said Müller, who is also a guest scientist at Lawrence Berkeley National Laboratory. “Since then, I’ve often asked myself, ‘What is the simplest thing that can measure time, the simplest system that feels the passage of time?’ Now we have an upper limit: one single massive particle is enough.”

This is not an atomic clock in the traditional sense; rather than measuring the energy emissions of electrons, Müller’s timekeeper looks at atoms as a whole by making use of a particular feature of matter: its building blocks behave like both particles and waves.

Quantum mechanics was born when physicists decided that light was neither a wave (as argued by Huygens) nor a beam of particles (as Newton thought), but both. In 1924, de Broglie discovered that this duality holds for all forms of matter, and developed a formula to calculate the wavelength of different particles.

Müller theorized that if you can find a way to measure these wavelengths experimentally – if you can construct a clockwork to count an atom’s oscillations – you have a very fundamental unit of time. Unfortunately, these atomic frequencies (also known as Compton frequencies) still outpace our best instruments of detection. But Müller has found a possible solution in yet another one of Einstein’s discoveries. Because motion slows the passage of time, a moving atom oscillates at a slower pace than a stationary one. The difference between the two frequencies may be measurable, and thereby give us a unit of time.
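The arithmetic behind this makes the problem vivid. The back-of-the-envelope sketch below uses CODATA constants and a cesium-133 atom (the species Müller’s group works with); the 1 m/s velocity is purely illustrative. The de Broglie wavelength is λ = h/(mv), the Compton frequency is f = mc²/h, and the measurable beat between a moving and a stationary atom is approximately f·v²/(2c²).

```python
# Back-of-the-envelope numbers for the Compton-clock idea.
H = 6.62607015e-34        # Planck constant, J*s
C = 299792458.0           # speed of light, m/s
U = 1.66053906660e-27     # atomic mass unit, kg

m_cs = 132.905 * U        # mass of a cesium-133 atom, kg

# Compton frequency: the "tick rate" a single massive particle defines.
f_compton = m_cs * C**2 / H          # ~3e25 Hz, far beyond any detector

# de Broglie wavelength at a lab-scale velocity.
v = 1.0                              # m/s, illustrative
wavelength = H / (m_cs * v)          # ~3 nm

# Time dilation makes the moving atom tick slower; the beat against a
# stationary atom is small enough to measure.
beat = f_compton * v**2 / (2 * C**2)  # ~1.7e8 Hz

print(f"Compton frequency:     {f_compton:.3e} Hz")
print(f"de Broglie wavelength: {wavelength:.3e} m")
print(f"beat frequency:        {beat:.3e} Hz")
```

The contrast between 10²⁵ Hz and 10⁸ Hz is the whole trick: the raw Compton frequency is hopelessly fast, but the relativistic beat note sits comfortably within reach of laboratory electronics.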

Though Müller’s mechanism is not yet as precise as an atomic clock, improvement of the design will increase its accuracy. Either way, the implications are far-reaching. If matter can be used to count time, the reverse is also true: time can become a unit of measurement for matter. In other words: Müller’s work shows that time is woven into the most fundamental building blocks of our world. One might even say it’s what we’re made of.

The Cambridge Project for Existential Risk

Posted on Wednesday, December 5th, 02012 by Alex Mensing
Categories: Futures, Long Term Thinking, Technology

Human technology is undoubtedly getting more powerful every year, and our destructive potential is no exception. The Cold War notion of ‘mutually assured destruction’ was unthinkable for most of human history, as was the ability to fundamentally alter the climate of the planet on which we rely. As the capabilities of our technologies continue to grow, what are the ways in which we become increasingly able to bring about our own demise as a species?

Martin Rees and Huw Price of the University of Cambridge and Skype co-founder Jaan Tallinn teamed up to investigate and mitigate that very possibility. In founding the Centre for the Study of Existential Risk at Cambridge University, they explain their motivation:

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.

Rees, Price and Tallinn agree that scientists need to pay more attention to this issue. The long-term future of humanity is at stake, and we need to understand more clearly the power that we wield in the modern world, and how to avoid using it destructively. The issue can, in fact, be extended beyond our own species. As Stewart Brand concluded in his summary of co-founder Martin Rees’ SALT talk:

Now that we are stewards of this planet, we are responsible for maintaining life’s possibilities in this cosmic neighborhood.

Decoding Long-Term Data Storage

Posted on Friday, October 12th, 02012 by Charlotte Hajer
Categories: Digital Dark Age, Rosetta, Technology

If human societies are founded on the accumulation of knowledge through the ages, then the long-term transmission of information must be the cornerstone of a durable civilization. And as we accelerate ever more rapidly in our expansion of knowledge and technological capability, the development of durable storage methods becomes ever more important.

In the process of brainstorming such methods, two central questions emerge. The first of these concerns the type of storage media you might use: what kind of material is likely to last long enough to convey a message to generations thousands of years into the future? Throughout much of history, people carved important messages into stone, bone, or other hard materials. So far, we don’t seem to have come up with anything better: most of us are familiar with the limited lifespan of CDs, vinyl, and computer hard drives. Faced with this lack of suitable options, several organizations and companies around the world have re-embraced the long-term durability of hard natural substances. The Long Now’s Rosetta disk, for example, is made of nickel. Arnano, a French technology start-up, has developed a disk of sapphire on which to micro-etch information – civic records, perhaps, or important messages about the storage of nuclear waste. And most recently, Japanese electronics giant Hitachi announced a new data storage technology that uses quartz glass.

The second – and perhaps even more intriguing – question concerns the language of your message. What kind of ‘code’ will be most easily accessible to future generations, and what technologies will they have available to help them decrypt a message from the past?

The storage and transmission of data often require multiple levels of encoding. When we think of ‘code’ we often think of computers – but in fact, we routinely pass through two layers of encoding before we can even begin to digitize information. Spoken human language is itself a code, in which sounds are used to signify things or ideas. A writing system adds a further layer: sequences of letters or pictographs signify the sounds that represent things or ideas. Yet another layer can then be applied by translating a writing system into binary numbers (numeric systems are a kind of code, as well!) or perhaps even DNA.
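These stacked layers are easy to demonstrate. The sketch below runs a short message through written language, binary, and a DNA-style alphabet; the two-bits-per-base mapping is an arbitrary choice of the kind DNA-storage experiments make – and that arbitrariness is exactly the decodability problem at issue here.

```python
message = "HELLO"                       # a written-language message

utf8_bytes = message.encode("utf-8")    # one more layer: letters as numbers
bits = "".join(f"{b:08b}" for b in utf8_bytes)  # ...as binary digits

# Another layer: two bits per nucleotide. The mapping itself is an
# arbitrary convention a future reader would have to rediscover.
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
dna = "".join(BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

print(utf8_bytes)  # b'HELLO'
print(bits)        # 0100100001000101010011000100110001001111
print(dna)         # CAGACACCCATACATACATT
```

Each layer roughly doubles the information density of the one before it, but each also adds one more codebook that must survive alongside the message.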

These extra layers of encoding offer the advantage of information density: they can help you pack lots of information into a very small format. However, each layer also further complicates the decodability and readability of a message. Because the Rosetta Disk is itself intended to be a tool for decryption – a primer of human language meant to help future archaeologists unlock entire worlds of culture, just like the Rosetta Stone did in the 19th century – Long Now chose to store its data in the analog form of human alphabets, rather than add an extra layer of encoding in a digital code of 1s and 0s.

Arnano, the makers of the sapphire disk, have made a similar choice. The added advantage is that this analog information is readable by the human eye (aided by a microscope or magnifying glass).

It’s safe to assume that the languages future generations will speak – and the technologies they’ll have available – will most likely be very different from what we use today. This brings up an important third question: how do you include ‘instructions’ for decoding and reading with your message? Following the example of its namesake predecessor, the content on Long Now’s Rosetta Disk is its own primer: if you know at least one of the 1,500 languages included on the disk, all other information can be decoded. Perhaps a similar kind of parallel multiplicity of codes is possible for other storage methods as well.

These questions of language and code are inevitably more difficult to answer than that of the storage medium. You can subject your chosen material to stress tests to make sure that it will stand up to acid, erosion, or any other kind of potential natural disaster. But there’s no similar test for language; it’s impossible to predict what codes will be interpretable by the people of the future, or what technology they’ll have available to decrypt a message. Nevertheless, these conundrums are no less important to grapple with, and any proposal for long-term storage worth its salt must offer some potential answers to these questions.

Cultural Memories in the Digital World

Posted on Wednesday, September 19th, 02012 by Charlotte Hajer
Categories: Digital Dark Age, Long Term Thinking, Technology

A book is much more than a collection of information. It is also a physical object, and this materiality plays an important role in shaping the way we relate to literature. Think of how the pages of your favorite story feel between your fingers, and the way its spine creases as you immerse yourself further and further in the plot. The weight of a thick novel reflects the depth of its story, an illustrated cover helps to seed your imagination, while the font in which the text is printed might convey a certain emotional tone. Perhaps you like to record your own thoughts in the white margins of a book’s pages; perhaps you prefer to leave those edges clean and allow the story to stand on its own. Either way, the materiality of a book shapes our relationship to its content; it becomes a physical souvenir of our engagement with a story.

So what happens, then, as our media and the culture they convey move increasingly into the digital world? How does the emergence of e-books change the way we experience a story?

These questions were the subject of a brief talk by publisher and technologist James Bridle, broadcast recently on BBC Radio’s Four Thought. Bridle suggests that the digitization and globalization of our cultural world not only transforms the nature of our cultural artifacts; it changes us, as well.

People are changed by these encounters with the network as much as our cultural objects are. That’s fundamentally important. Even though we’ve always been connected [to the world] in all these ways, the visibility of that connection that the network brings, is deeply strange. You can reach out across space, and you can reach out across time, as well; the network has this extraordinary flattening effect on time, so things that look distant are just as accessible to us as things that are near. And you can see this process happening in the ways that we write, in the ways that we read, and the things that it’s doing to the texts themselves.

Whereas it was the physicality of a book that brought its narrative to life in our experience, it is now the instantaneousness and interactivity of information that facilitate our connection to a story. But while this necessarily changes the way we engage with cultural artifacts, Bridle suggests that this need not entail a loss of value. Our collective cultural memory is not in the process of disappearing; it is simply being transformed – embodied no longer by physical objects, but rather by the process of sharing.