Friendly vs Unfriendly Artificial Intelligences – an important debate


As we quickly approach the rise of self-aware and self-improving intelligent machines, the debates are going to sound pretty strange, but they are arguably the most important questions humanity has ever faced. Over at Michael’s Blog there’s a great discussion about how unfriendly AIs could pose an existential risk to humanity.

I remain skeptical, writing over there about Steve Omohundro’s paper:

Great references to make your earlier point though I remain very skeptical of Steve’s worries even though one can easily agree with most of his itemized points. They just don’t lead to the conclusion that a “free range” AI is likely to pose a threat to humanity.

With a hard takeoff it seems likely to me that any *human* efforts at making a friendly AI will be modified to obscurity within a very short time. More importantly, though, it seems very reasonable to assume machine AI ethics won’t diverge profoundly from the ethics humanity has developed over time. We’ve become far less ruthless and selfish in our thinking than in the past, both on an individual and collective basis. Most of the violence now arises from *irrational* approaches, not the supremely rational ones we can expect from Mr. and Mrs. AI.

Wait, there’s MORE AI fun here at CNET


About JoeDuck

Internet Travel Guy, Father of 2, small town Oregon life. BS Botany from UW Madison Wisconsin, MS Social Sciences from Southern Oregon. Top interests outside of my family's well-being are: Internet Technology, Online Travel, Globalization, China, Table Tennis, Real Estate, The Singularity.
This entry was posted in Artificial Intelligence, SyNapse, technology. Bookmark the permalink.

3 Responses to Friendly vs Unfriendly Artificial Intelligences – an important debate

  1. First the facts: SyNAPSE is a project supported by the Defense Advanced Research Projects Agency (DARPA). DARPA has awarded funds to three prime contractors: HP, HRL and IBM. The Department of Cognitive and Neural Systems at Boston University, from which the Neurdons hail, is a subcontractor to both HP and HRL. The project launched in early 2009 and will wrap up in 2016 or when the prime contractors stop making significant progress, whichever comes first. ‘SyNAPSE’ is a backronym and stands for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The stated purpose is to “investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels.”

    SyNAPSE is a complex, multi-faceted project, but it traces its roots to two fundamental problems. First, traditional algorithms perform poorly in the complex, real-world environments in which biological agents thrive. Biological computation, in contrast, is highly distributed and deeply data-intensive. Second, traditional microprocessors are extremely inefficient at executing highly distributed, data-intensive algorithms. SyNAPSE seeks both to advance the state of the art in biological algorithms and to develop a new generation of nanotechnology necessary for the efficient implementation of those algorithms.

    Looking at biological algorithms as a field, very little in the way of consensus has emerged. Practitioners still disagree on many fundamental aspects. At least one relevant fact is clear, however. Biology makes no distinction between memory and computation. Virtually every synapse of every neuron simultaneously stores information and uses this information to compute. Standard computers, in contrast, separate memory and processing into two nice, neat boxes. Biological computation assumes these boxes are the same thing. Understanding why this assumption is such a problem requires stepping back to the core design principles of digital computers.
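    The "memory and computation are the same thing" point can be made concrete with a toy sketch. The names below are purely illustrative (not from the SyNAPSE project): a synapse's weight is simultaneously its stored state and the coefficient it computes with, and learning rewrites the very value that computation reads.

    ```python
    # Illustrative sketch: a synapse whose stored weight IS its computation,
    # in contrast to a CPU, where RAM (storage) and the ALU (compute) are
    # physically and logically separate.

    class Synapse:
        def __init__(self, weight=0.1):
            self.weight = weight  # the "memory": one stored value

        def transmit(self, presynaptic_signal):
            # the "computation": the stored weight scales the incoming signal
            return self.weight * presynaptic_signal

        def hebbian_update(self, pre, post, learning_rate=0.01):
            # learning modifies the same value that computation reads
            # ("cells that fire together wire together")
            self.weight += learning_rate * pre * post

    s = Synapse()
    out = s.transmit(1.0)       # compute using the stored weight
    s.hebbian_update(1.0, out)  # storage and computation share one variable
    ```

    In a standard computer the equivalent of `weight` would sit in main memory and be shuttled to the processor on every use; here, storage and computation are inseparable.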

    The vast majority of current-generation computing devices are based on the Von Neumann architecture. This core architecture is wonderfully generic and multi-purpose, attributes which enabled the information age. Von Neumann architecture comes with a deep, fundamental limit, however. A Von Neumann processor can execute an arbitrary sequence of instructions on arbitrary data, enabling reprogrammability, but the instructions and data must flow over a limited-capacity bus connecting the processor and main memory. Thus, the processor cannot execute a program faster than it can fetch instructions and data from memory. This limit is known as the “Von Neumann bottleneck.”
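    A minimal toy machine (illustrative only, with made-up instructions) makes the bottleneck visible: one memory array holds both instructions and data, every access crosses the same "bus", and counting transfers shows the processor can never outrun its fetches.

    ```python
    # Toy von Neumann machine: instructions and data share one address space,
    # and every access, code or data, counts as a transfer over the single bus.

    memory = {
        0: ("LOAD", 100),    # instructions...
        1: ("ADD", 101),
        2: ("STORE", 102),
        3: ("HALT", None),
        100: 7, 101: 5, 102: 0,  # ...and data, in the same memory
    }

    bus_transfers = 0

    def fetch(addr):
        global bus_transfers
        bus_transfers += 1       # every access uses the shared bus
        return memory[addr]

    pc, acc = 0, 0
    while True:
        op, operand = fetch(pc)  # instruction fetch: one bus transfer
        pc += 1
        if op == "LOAD":
            acc = fetch(operand)   # data fetch: another bus transfer
        elif op == "ADD":
            acc += fetch(operand)
        elif op == "STORE":
            bus_transfers += 1     # write also crosses the bus
            memory[operand] = acc
        elif op == "HALT":
            break

    # memory[102] now holds 12, after 7 bus transfers
    # (4 instruction fetches, 2 data loads, 1 store)
    ```

    No matter how fast the processor core is, this loop's speed is capped by those seven transfers, which is exactly the bottleneck the passage describes.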

    In the last thirty years, the semiconductor industry has been very successful at avoiding this bottleneck by exponentially increasing clock speed and transistor density, as well as by adding clever features like cache memory, branch prediction, out-of-order execution and multi-core architecture. The exponential increase in clock speed allowed chips to grow exponentially faster without addressing the Von Neumann bottleneck at all. From the user perspective, it doesn’t matter if data is flowing over a limited-capacity bus if that bus is ten times faster than that in a machine two years old. As anyone who has purchased a computer in the last few years can attest, though, this exponential growth has already stopped. Beyond a clock speed of a few gigahertz, processors dissipate too much power to use economically.

    Cache memory, branch prediction and out-of-order execution more directly mitigate the Von Neumann bottleneck by holding frequently-accessed or soon-to-be-needed data and instructions as close to the processor as possible. The exponential growth in transistor density (colloquially known as Moore’s Law) allowed processor designers to convert extra transistors directly into better performance by building bigger caches and more intelligent branch predictors or re-ordering engines. A look at the processor die for the Core i7 or the block diagram of the Nehalem microarchitecture on which Core i7 is based reveals the extent to which this is done in modern processors.

    Multi-core and massively multi-core architectures are harder to place, but still fit within the same general theme. Extra transistors are traded for higher performance. Rather than relying on automatic mechanisms alone, though, multi-core chips give programmers much more direct control of the hardware. This works beautifully for many classes of algorithms, but not all, and certainly not for data-intensive bus-limited ones.

    Unfortunately, the exponential transistor density growth curve cannot continue forever without hitting basic physical limits. At this point, Von Neumann processors will cease to grow appreciably faster and users won’t need to keep upgrading their computers every couple of years to stave off obsolescence. Semiconductor giants will be left with only two basic options: find new high-growth markets or build new technology. If they fail at both of these, the semiconductor industry will cease to exist in its present, rapidly-evolving form and migrate towards commoditization. Incidentally, the American economy tends to excel at innovation-heavy industries and lag other nations in commodity industries. A new generation of microprocessor technology means preserving American leadership of a major industry. Enter DARPA and SyNAPSE.

    Given the history and socioeconomics, the “Background and Description” section from the SyNAPSE Broad Agency Announcement is much easier to unpack:

    Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today. Owing both to limitations in hardware and architecture, these machines are of limited utility in complex, real-world environments, which demand an intelligence that has not yet been captured in an algorithmic-computational paradigm. As compared to biological systems for example, today’s programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments. The SyNAPSE program seeks to break the programmable machine paradigm and define a new path forward for creating useful, intelligent machines.

    The vision for the anticipated DARPA SyNAPSE program is the enabling of electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications—but useful and practical implementations do not yet exist.

    SyNAPSE seeks not just to build brain-like chips, but to define a fundamentally distinct form of computational device. These new devices will excel at precisely the kinds of distributed, data-intensive algorithms that complex, real-world environments require, and that suffer most at the hands of the Von Neumann bottleneck.

  2. JoeDuck says:

    Massimiliano: A brilliant, readable introduction to SyNAPSE. I’d like to post this over at http://www.Technology-Report.com as a guest post with your permission.

  3. Dear Joe,

    Sure, you can post it. I would ask, though, that you refer/link to Neurdon.com as the source of this post, authored by Ben Chandler, one of the Editors of our Blog.

    http://www.neurdon.com/about-synapse/

    Thanks a lot!

    Max
