Daniel Suarez, “Daemon: Bot-Mediated Reality”

Posted on Tuesday, August 19th, 02008 by Paul Saffo
Categories: Seminars

Daniel Suarez

[Daniel Suarez, who originally published under the pen name Leinad Zaurus, delivered a talk on the themes developed in his (originally self-published) book Daemon. The book is now scheduled to be released in hardcover in January 02009 by Dutton.]

Forget about HAL-like robots enslaving humankind a few decades from now; the takeover is already underway. The agents of this unwelcome revolution aren’t strong AIs, but “bots”: autonomous programs that have insinuated themselves into the internet and thus into every corner of our lives. Apply for a mortgage lately? A bot determined your FICO score and thus whether you got the loan. Call 411? A bot gave you the number and connected the call. Highway bots collect your tolls, read your license plate, and report you if you have an outstanding violation…

Read the rest of Peter Schwartz’s Summary

  • http://ben.kudria.net Benjamin Kudria

    These ‘bots’ you speak of? The ones calculating our FICO scores? The ones connecting our calls? They are conventionally called programs. Calling them ‘bots’ misleadingly grants them autonomy. Programs don’t “roam” on their own, they don’t evolve, and to call them (even narrow) AI is completely wrong – unless you want to define intelligence as “doing what you have been programmed to do”. In that case, my laptop, my phone, and my MP3 player are intelligent. Narrow AI, much less general AI as required by your bots, is still a fair bit away.

    This doesn’t address your concerns, however; applied to buggy or mis-designed programs instead of bots, those are still valid. The solution, though, is not to dream up some magical “human-only internet” (how would such a thing be enforced, anyway? See: Turing Test). Enlisting the “aid” of “bots” (programs) doesn’t make any sense in today’s world.

    What we can (and should) focus on is writing and auditing our programs and systems to avoid errors and handle unexpected situations gracefully. This is the realistic option. Finally, we can build in emergency procedures that are less destructive than simply pulling the plug (a rough sketch of one such safeguard appears at the end of this comment).

    There are real issues around quality control, safety procedures, and emergency situations with our computer systems, but coloring them as a distributed and unmanageable plot by a bunch of self-aware “bots” to eliminate those pesky humans is unhelpful and irresponsible.
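
    A minimal sketch (in TypeScript; the names and numbers are hypothetical) of one safeguard that is less destructive than pulling the plug: a circuit breaker that degrades to a safe fallback after repeated failures instead of letting errors cascade or taking the whole system offline.

    // Hypothetical sketch of graceful degradation via a circuit breaker.
    type Fetcher = () => Promise<number>;

    class CircuitBreaker {
      private failures = 0;

      constructor(
        private readonly primary: Fetcher,  // the normal code path
        private readonly fallback: number,  // safe default when tripped
        private readonly threshold = 3,     // failures tolerated before tripping
      ) {}

      async read(): Promise<number> {
        if (this.failures >= this.threshold) {
          return this.fallback;             // tripped: degrade, don't crash
        }
        try {
          const value = await this.primary();
          this.failures = 0;                // a healthy call resets the count
          return value;
        } catch {
          this.failures += 1;               // record the failure, stay running
          return this.fallback;
        }
      }
    }

    // Usage: a scoring service that falls back to a neutral value on failure.
    const score = new CircuitBreaker(async () => { throw new Error("upstream down"); }, 0);
    score.read().then((v) => console.log("score:", v)); // prints "score: 0"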

  • http://www.thedaemon.com Daniel Suarez

    Hi Ben,

    I would agree that portraying bots as a plot to ‘eliminate those pesky humans’ would be unhelpful and irresponsible. Fortunately that’s not what my talk was about, nor did I imply this. In fact, I clearly stated that bots and the systems which support them are not ever going away and that they provide significant benefits to humanity. What I have a problem with is the reckless pursuit of efficiency (by human actors) at the expense of most everything else. This results in centralization and monoculture which greatly decreases the flexibility and robustness of civilization as a whole. Bots (a subset of programs that search for, retrieve, and/or react to external data) are the main agent of this consolidation, and we as a society should increase the transparency of systems which have a direct effect on the populace. At present, the public has little or no right to inquire into the logic of proprietary algorithms that have significant power over their lives (e.g., FICO-scoring algorithms, etc.).

    Incidentally, ‘Narrow AI’ consists mainly of pattern-recognition systems, and is clearly delineated from “Strong AI”. At no point in my talk did I conflate “AI” with “Narrow AI”. They are two distinctly different things.
    Thanks for taking an interest and responding to what you considered to be irresponsible, though. I’m always glad to see people not putting up with what they view as BS…

    Best,

    D.S.

  • Me

    It seems to me that the main quote I would take issue with, and what the entire talk turns on, is this:

    “reckless pursuit of efficiency…” which “results in centralization and monoculture which greatly decreases the flexibility and robustness of civilization as a whole.”

    I disagree.

    In fact, the whole wisdom-of-the-crowds concept that has been gaining such cachet in the past year or so undermines it as well. For example, the efficiency of Wikipedia comes from the *decentralization* of the work. By embracing this emergent aspect of intelligence, and designing tools to support it, humans are rapidly increasing the mean intelligence of the race.

    But we’re still “stupid, panicky animals”… for now.

  • http://e-drexler.com Eric Drexler

    Dear Daniel,

    A fundamental problem with our computer infrastructure is that machines are vulnerable to take-over by bits of code delivered in any of many different ways. We build fragile systems in which small bugs in a single part can give attackers control of the whole.

    There is a widespread perception that this is inevitable, inherent in the nature of computation. This is false: There are known ways to structure an object-oriented language such that, if the kernel language itself is implemented correctly, no object written in that language can gain more authority (to read files, to communicate, to continue running…) than it is given by the objects that created it or called it. This turns out to be very powerful: Instead of relying on firewalls for protection, using these methods is like building with incombustible materials. No single part, however corrupt it may be, can take control of the whole. The amount of code that must be correct is small, where today it is unlimited.

    The same abstract structure can be built into the microkernel of an operating system, embedding all running programs, regardless of language, in an environment where they can see only what they are shown, and can affect only what is placed within reach. (A rough sketch of this capability discipline, in ordinary code, appears at the end of this comment.)

    Without this structure, I know of no way to implement open, distributed systems that we can trust.

    Fortunately, after long incubation, the language-level approach is getting a firm foothold in the world, through Google and social networking sites (search “Caja programming language”). At the OS level, the technology is established, but dissemination is further away (search “CapROS: The Capability-based Reliable Operating System”; “Coyotos Secure Operating System”).

    The underlying ideas, by the way, had for many years been obscured by a fog of academic confusion stemming from the 1970s. That fog is now lifting, but there are still confused experts wandering around. They are building firewall-style security, trying to find every last bug in every piece of code, etc., etc., and practitioners of these failed approaches dominate the discussion space by their sheer number. Understanding what is going on at a deeper level, and what it can mean, requires a sensitive ear.
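
    A minimal sketch of that discipline in ordinary TypeScript (purely illustrative; this is not Caja, CapROS, or Coyotos code): the untrusted code holds no ambient authority and can use only the narrow capability it is handed.

    import { open } from "node:fs/promises";

    // The narrowest interface the untrusted code needs: append-only access
    // to one log, and nothing else.
    interface Appender {
      append(line: string): Promise<void>;
    }

    // Trusted code turns real authority (the fs module plus a path) into a
    // capability object...
    async function makeAppender(path: string): Promise<Appender> {
      const handle = await open(path, "a");
      return {
        append: async (line) => { await handle.appendFile(line + "\n"); },
      };
    }

    // ...and untrusted code receives only that capability. Through this
    // parameter it can append to the one log it was given; it cannot name
    // other files, open sockets, or spawn processes.
    async function untrustedPlugin(log: Appender): Promise<void> {
      await log.append("plugin ran");
    }

    async function main(): Promise<void> {
      const log = await makeAppender("plugin.log");
      await untrustedPlugin(log);
    }

    main().catch(console.error);

    In ordinary Node the plugin could of course still import fs on its own; denying it that ambient authority is exactly the job of a Caja-style loader at the language level or a capability kernel at the OS level.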

  • Trevor Cooper

    Wow, astounding talk. This is really innovative, creative stuff. I wonder to what extent these bluescanners and car-whisperers are already being used on the public? The technology is there to do it, but is there data out there about how many of these systems are in operation, or how many people may be subject to or victimized by them? Even the thought of spam through my car radio frightens me.

    Daniel, to what extent are you going to lobby for – or even engineer – this new human-centered, bot-mediated reality network (this New-Internet)? Do you have plans to work on this infrastructure yourself?

    Also, if anyone can recommend where I can buy a copy of Daemon now, I’m too interested to wait until January!
    Thanks!
    Trevor
    coopstah@gmail.com

  • Falafulu Fisi

    Benjamin said…
    These ‘bots’ you speak of? The ones calculating our FICO scores? The ones connecting our calls? They are conventionally called programs.

    Yes, but they are programs that have intelligence injected into them. Do you want to re-define what that means in the computing literature? I recommend starting with the excellent book on the subject by Stuart Russell and Peter Norvig (Google’s head of research), Artificial Intelligence: A Modern Approach, and after that book you can move on to other, more advanced ones.

    Benjamin said…
    Calling them ‘bots’ misleadingly grants them autonomy.

    Again, do you want to re-define jargon used in the computing literature? Are you familiar with autonomous software agent systems? If not, then again, Peter Norvig’s book is a good start for learning about autonomous software agents, and after that you can go on to other advanced books on the subject. There is also open-source software available for autonomous agents (such as JADE, a Java multi-agent system). The meaning of autonomous is just that: to mimic the human capability of autonomy in software. Well, can you re-define a definition that has been established by pioneers in the field over the last 40 years or so? I think your argument is either based on the semantics of the words, or you’re unaware that there is indeed a formal branch of computing that means exactly what the word says, i.e., autonomous. I guess that in your case, it is the latter.

    Benjamin said…
    Programs don’t “roam” on their own, they don’t evolve, and to call them (even narrow) AI is completely wrong – unless you want to define intelligence as “doing what you have been programmed to do”.

    Again, that is what autonomous multi-agent software is about, i.e., roaming. So don’t get hung up on word semantics, because it exposes your lack of knowledge of the field. Learning agents accumulate knowledge that their designers never anticipated they would; that is what an evolving learning agent is all about. This is exactly how a human learns about his/her external world. If you want to know more, then do a Google search on the term Machine Learning (a sub-branch of AI).

    Daniel Suarez, I interpret your definition of Strong AI to mean an evolving/learning-based AI system, and Weak AI a non-evolving/non-learning-based AI system. These systems are already available today, and that’s exactly what they are.

    You might be interested in a similar view expressed by Sun Microsystems’ former chief scientist, Bill Joy, in his article in Wired Magazine published in 2000.

    Why the future doesn’t need us (Our most powerful 21st-century technologies – robotics, genetic engineering, and nanotech – are threatening to make humans an endangered species).

    His view is that the next revolution in IT will come from physicists and not from computer scientists, electrical engineers, software architects, etc. This is true, as physicists are starting to make breakthroughs in the domain of “Quantum Computing”, which will make classical computing redundant at some stage. Programmers will need to be familiar with quantum physics (which is very difficult) in order to write software when the age of quantum computing arrives.

  • Pingback: everything flows » Blog Archive » Consequences of bot-mediated reality

  • Pingback: everything flows » Blog Archive » Bot-mediated reality

  • Pingback: links for 2008-11-08 » What Future?

  • Pingback: Too bad you never knew Ace Hanna » Blog Archive » The Daemon in the Machine

  • http://www.crimecritics.com Hugh Howey

    I like the idea of an Internet that only humans can enter. Even better would be the impossible dream of an Internet with no anonymity for those of us who have nothing to fear but those full of fear.

    Technology will continue to do more good than harm. I get amused by those who pretend to have a long view of history by looking forward several hundred years. The sobering reality is the impending lifelessness of our planet no more than 4.5 billion years from now. One could argue that the greatest increase in human well-being, efficiency, output, growth, and wealth will have been the 20th century. A 100-year period that saw the mass acceptance of the internal combustion engine, the computer, and the Internet. What did we do with this incredible period of increased wealth? We spent most of it on devices used to kill one another, and the rest on too-large houses.

    Take those trillions and trillions of dollars and imagine using them to prepare for a long future for humanity. A settlement on the moon. A space station that means something. People on Mars. Ideas for reaching other stars. All of that potential was blown, like watching an immature youth churn through an early inheritance, unable to forgo immediate self-gratification.

    Global warming isn’t a problem, but global warring is. And no, I do not blame the United States as is popular, I blame every culture that attempts to solve their problems with violence. Cuba, N. Korea, Iraq, Iran, Israel, Palestine, Sudan, Russia, abusive husbands, petty thieves, we each play our part to some degree. The waste is enough to sicken my optimistic heart.

  • Pingback: voxunion.com is media + education » Blog Archive » New Age Enslavement

  • Pingback: links for 2009-01-09 - the prophet king governance

  • Troy Knutson

    My wife just heard Mr. Suarez on Glenn Beck’s talk radio show, and she told me things about computers that I did not know.

    She found the interview very interesting and informative.

    Thanks,
    Troy

  • Pingback: Daniel Suarez–Daemon–Bots Are Taking Over! « Pronk Palisades

  • Pingback: The Long Now Blog » Blog Archive » Daniel Suarez reads from DAEMON

  • Pingback: Next Nucleus

  • Pingback: dekay.org » links for 2009-02-09

  • Pingback: Daniel Suarez, “Daemon: Bot-mediated Reality”, Longnow Foundation, 2008/08/08 « Media Download Queue –> Coevolving Innovations

  • Alan Tabor

    So, is anyone working on the darknet/op system project? (I realize I’m a bit behind the most recent talks.)

  • Pingback: InfoBore 12 « ubiwar . conflict in n dimensions

  • Pingback: I Can’t Recommend This Highly Enough… « ubiwar . conflict in n dimensions

  • http://zaneselvans.org Zane Selvans

    Today’s XKCD is particularly apropos: Suspicion

