Alan Turing Google Doodle Honors Computing Pioneer


Check out the most complicated Google Doodle of all time here: today’s Doodle celebrates the birthday of computer science pioneer Alan Turing. Turing is reasonably considered a founder of computer science, even though he never lived to see anything like the current crop of machines we now find in our homes, businesses, and mobile devices.

The Google Doodle represents a ‘codebreaker’ sequence. Turing’s brilliance in cracking encoded Nazi communications led to a major strategic breakthrough when he helped break the “Enigma” cipher, giving the Allies access to a treasure trove of information about Nazi war plans.

The “Turing Test” remains an intriguing part of the quest for general artificial intelligence. Turing suggested that a major step in the development of mechanical intelligence would be a human’s inability to distinguish a machine’s responses from those of another human. Much current thinking suggests that a machine could pass the “Turing Test” and still NOT be considered genuinely intelligent, but Turing’s speculations remain some of the most important computing insights of all time.
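For fun, here’s roughly what Turing’s “imitation game” looks like as a program: a judge fires questions at two hidden respondents, then guesses which one is the machine. Both responder functions below are made-up stand-ins – the point is just the structure of the test, not the chatbot:

```python
import random

# A minimal sketch of Turing's imitation game. Both responders are
# hypothetical stand-ins; a real test would relay the questions to an
# actual person and an actual program.
def ask_human(question):
    return input(f"(answer as the human) {question}\n> ")

def ask_machine(question):
    return "Hmm, I'd have to think about that one."  # toy canned reply

def run_turing_test(questions):
    responders = [("human", ask_human), ("machine", ask_machine)]
    random.shuffle(responders)  # hide which is which behind labels A and B
    for question in questions:
        print(f"Judge asks: {question}")
        for label, (_, respond) in zip("AB", responders):
            print(f"  {label}: {respond(question)}")
    guess = input("Which respondent was the machine, A or B?\n> ").strip().upper()
    actual = "A" if responders[0][0] == "machine" else "B"
    # One trial proves nothing, of course; Turing imagined repeated trials.
    print("The machine passed." if guess != actual else "The judge spotted the machine.")

run_turing_test(["What did you have for breakfast?"])
```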

Turing’s life was tragic in many ways. He was gay at a time when the British government prosecuted people for “indecency”, and his life was cut short by cyanide poisoning – most likely suicide, possibly an accident – at the age of only 41.

Artificial Intelligence Global Luminaries


This great list is from Michael Anissimov’s Accelerating Future website.

Artificial Intelligence

Aubrey de Grey
Chief Science Officer, Methuselah Foundation
Barney Pell
Search Strategist, Microsoft
Ben Goertzel
CSO, Novamente & Director of Research, SIAI
Bill Hibbard
Emeritus Senior Scientist, Space Science and Engineering Center
Bruce Klein
President, Novamente & Director of Outreach, SIAI
Christine Peterson
Co-Founder, Foresight Nanotech Institute
David Hart
Director of Open Source Projects, SIAI
Eliezer Yudkowsky
Co-Founder and Research Fellow, SIAI
Eric Baum
Founder, Baum Research
Hans Moravec
Chief Scientist, Seegrid Corporation
Helen Greiner
Co-Founder, iRobot Corporation
Hugo de Garis
Professor, Wuhan University
J Storrs Hall
President, Foresight Nanotech Institute
John Laird
Tishman Professor of Engineering, University of Michigan
Jonas Lamis
Executive Director, SciVestor Corporation
Jonathan Connell
Staff Member, T.J. Watson Research Center, IBM
Joscha Bach
Author, Principles of Synthetic Intelligence
Jurgen Schmidhuber
Professor of Cognitive Robotics and Computer Science, TU Munich
Marcus Hutter
Associate Professor, Australian National University
Marvin Minsky
Toshiba Professor of Media Arts and Sciences, MIT
Matt Bamberger
Founder, Intelligent Artifice
Monica Anderson
Founder, Syntience Inc.
Moshe Looks
AI Researcher, Google Research
Neil Jacobstein
Chairman and CEO, Teknowledge
Pei Wang
Lecturer, Department of Computer and Information Science, Temple University
Peter Cheeseman
Advisor, Singularity Institute
Peter Norvig
Director of Research, Google
Peter Thiel
Founder, Clarium Capital
Ray Kurzweil
Chairman and CEO, Kurzweil Technologies, Inc.
Rodney Brooks
Chief Technical Officer, iRobot Corp
Ronald Arkin
Regents’ Professor, College of Computing, Georgia Tech
Sam Adams
Distinguished Engineer, IBM Research Division
Sebastian Thrun
Director, Stanford Artificial Intelligence Laboratory
Selmer Bringsjord
Chair, Department of Cognitive Science, Rensselaer Polytechnic Institute
Stan Franklin
Interdisciplinary Research Professor, University of Memphis
Stephen Omohundro
Founder and President, Self-Aware Systems
Stephen Reed
Principal Developer, Texai
Susan Fonseca Klein
Chief Administrative Officer, Singularity Institute
Wendell Wallach
Lecturer, Yale’s Interdisciplinary Center for Bioethics

Conscious Computers and Friendly vs Unfriendly AI


As I’ve noted here in posts about AI many times, I think we are within 15 years – probably fewer – of the most profound change in technology and humanity ever to hit the planet. This will be the advent of conscious computers, which we can reasonably expect to surpass us in all thinking and organizational skills within a very short time – probably months or even days of becoming conscious.

Some AI folks believe that strong AI machinery will require a somewhat lengthy learning period, much as human intellects do, before becoming highly functional, but I think the process will be very fast once consciousness happens. In my opinion it is easy to exaggerate the significance of the intellectual complexity that comes from massive numbers of redundant, mostly simple processes. Unlike humans, computer intelligences will grow extremely fast as soon as they “choose” that approach. Initially those choices to expand will be programmed in by the human AI programmer, but it seems logical to assume that as computers design their own replacements they will continue to give the next generation “motivation”. You don’t even need to assume it will happen in this proactive way, though. In a world with various forms of intelligence, those that value their own survival will tend to increase in number through basic mathematical/evolutionary processes, simply because those that value survival less highly are more likely to drop off the scene.
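Here’s a quick back-of-the-envelope simulation of that evolutionary argument. Every number in it is invented; the point is only that even a small persistence edge compounds over generations:

```python
import random

# Toy selection model: a mixed population of AIs, where "survivor"
# agents value their own persistence slightly more than "indifferent"
# agents do. Nobody programs in a survival drive; the survivor type
# simply accumulates. All probabilities are invented for illustration.
def simulate(generations=50):
    population = ["survivor"] * 10 + ["indifferent"] * 90
    for _ in range(generations):
        next_gen = []
        for agent in population:
            persist = 0.99 if agent == "survivor" else 0.90
            if random.random() < persist:
                next_gen.append(agent)       # agent keeps running
                if random.random() < 0.10:   # occasional copy or replacement
                    next_gen.append(agent)
        population = next_gen
    return population

pop = simulate()
print(f"{pop.count('survivor')} survivors vs {pop.count('indifferent')} indifferent")
```

Run it a few times: the survival-valuing type starts as a 10% minority and reliably ends up the overwhelming majority.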

So, my cousin asked me today: why would a machine care much, if at all, about human welfare? My gut says they will, and I think this is based on watching how much humans care for their animals and even inanimate objects. Also, I think it’s important to note how crappily we take care of our fellow humans. We consistently choose fighting and selfishness over harmonious existence.

So I say give the computers a shot at making the world a better place!  

Are you biased?


My friend Marvin dropped in today on his way down to California and we were discussing artificial intelligence. Like most of my programming pals he’s much more skeptical than I am about how soon we’ll have conscious computing, though they are also far more knowledgeable about the difficulty of programming complex routines, let alone consciousness. Of course, they are not nearly as pretty as Google uber-Engineer Marissa Mayer, who estimated 10-15 years, so I’m going with her estimate.

I’m still trying to decide whether programmers view things too narrowly by generally assuming that the circumstances required for conscious thought are so profoundly complex that engineering them will be nearly impossible. I prefer the idea that simply having brain-equivalent speed and massive computational power will push machines very close to consciousness, once (relatively) simple routines are developed that create conversations within those systems.
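To make that “conversations within the system” idea slightly more concrete, here’s a toy sketch. The subsystem names and rules are entirely invented, and nothing here is remotely conscious – it just shows the back-and-forth structure I have in mind, loosely in the spirit of the “global workspace” models from cognitive science:

```python
# Toy "internal conversation": simple subsystems repeatedly comment on
# a shared workspace. The subsystems and their rules are invented for
# illustration; the point is the structure, not the content.
def perception(workspace):
    return "I notice: " + workspace[-1]

def memory(workspace):
    return "That reminds me of: " + min(workspace, key=len)

def planner(workspace):
    return "Given all that, maybe act on: " + workspace[0]

def converse(stimulus, rounds=3):
    workspace = [stimulus]  # the shared "blackboard" the parts talk through
    for _ in range(rounds):
        for subsystem in (perception, memory, planner):
            workspace.append(subsystem(workspace))
    return workspace

for thought in converse("a knock at the door"):
    print(thought)
```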

When I noted that many in the AI community are now wildly optimistic about the prospects for strong AI within 10-20 years, Marvin correctly noted that people in the AI community were predicting strong AI a *long* time ago. This led to the interesting question of “prediction bias”. How often in history are predictions reasonably accurate, and how well do the time estimates on those accurate predictions hold up? This would be a fun mini research project to do sometime, though obviously it would itself be subject to a lot of bias depending on how you picked the criteria, the predictors, and the predictions.

Along those lines, this great Wikipedia article popped up listing a huge number of cognitive biases. All of us should take a look and reflect on how often we fall into these irrational traps.

The Illusion of Will. Prisoners of the synapse?


This morning I stumbled on a reference to a book by Harvard psychologist Daniel Wegner called “The Illusion of Conscious Will” – one of those interesting books I’d like to read but probably won’t. My coffee pal Roy had clued me in to this research some time ago. The key point is available online via reviews and such, and it is simply this:

We don’t have conscious will. Things happen to us, and we process them using our conscious mind, but we don’t *make them happen*.

Now, at first glance this deterministic view seems absurd. Of course, one might say, I control my actions. But determinist psychology folks point out that it’s increasingly clear our actions are *preceded* by brain activity and events, suggesting – I think I’m getting this right – that by the time we are doing “conscious processing” about the thing we are doing, we are already engaged in the activity. That is, the “cause” of our actions comes before the conscious processing period. From a nice review of Wegner’s book I understand he thinks we confuse this after-the-fact processing with “control”.

Although I am pretty much a determinist, I am also uncomfortable with the idea that we are passive players in a predetermined universal play. The “gut test” says we control our actions and decide what to do.

I think my ongoing hypothesis about this will be similar to my idea that consciousness is a conversation between different parts of our brain. These conversations, many of which take place during waking hours and some during sleep, allow us to process information very creatively and act on mental models of the world around us. We might not have control over our actions 0.1 seconds before them, but we might have control via processes that happen seconds before, as our brain runs through various scenarios. Now, I think Wegner would say – correctly – that for any given conscious thought you can show there is preceding electrochemical activity (synapses firing and such) that is not reasonably described as conscious.

However, what if that initial spark of reflection is unconscious but then leads to a back-and-forth conscious conversation within your mind that in turn leads to the action? Would that be free will?

[my brain answers –   dude, no way, you have no free will.   Now, stop blogging obscurities and pass the damn M&Ms!]

What is “Intelligence”?


Some good posts are popping up over at the Singularity Institute blog, though the discussions have been taking on that odd “hostile academic” tone you often find from PhD wannabes who spend way too much time learning how to reference obvious things in obscure ways.

Michael Anissimov asked over there “What is Intelligence?” and offered up a definition that could apply to human as well as artificial intelligence.

I would suggest that intelligence is overrated – a product of our evolutionarily designed, self-absorbed human nature – and is in fact best studied separately from the harder-to-define states of “consciousness” and “self-awareness”. I think computers – and even a simple calculator – have degrees of intelligence, but they do not have consciousness or self-awareness. It is these last two things that make humans think we are so very special. I’d say consciousness is neat but probably a simpler thing than we like to …. um … think about.

Over there I wrote this in response to Michael’s post:

My working hypothesis about “intelligence” is that it is best viewed and defined in ways that separate it from “consciousness”. I’d say intelligence is best defined such that it can exist without consciousness or self-awareness. Thus I’d call a computer chess program intelligent, but not conscious or self-aware.

I would suggest that intelligence is a prerequisite for consciousness, which is a prerequisite for self-awareness; separating these three things seems to avoid some of the difficulties of explanations that get bogged down as we try to develop models of animal and non-animal intelligence. I also think this will describe the development curve of AIs, which are already “intelligent” but not yet “conscious” or “self-aware”. I think consciousness may turn out to be simply a *massive number* of interconnections carrying on intelligent internal conversations within a system – human or AI.

A stumbling block I find very interesting is the absurd notion that human intelligence is fundamentally or qualitatively different from other animal intelligences. Although only a few other species appear to have self-awareness, there are many other “conscious” species and millions of “intelligent” species.
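To pin down that chess example: minimax search, the core of classic chess programs, is a textbook case of intelligence with no consciousness anywhere in it. Here’s a minimal sketch over a made-up toy game rather than real chess:

```python
# Minimax over a tiny invented game: players alternately pick a digit,
# and the maximizer wants a high total after three picks. The search
# reliably finds the best move -- "intelligent" in the narrow sense --
# with nothing resembling awareness anywhere in the code.
def minimax(state, maximizing, game):
    moves = game["moves"](state)
    if not moves:
        return game["score"](state), None
    best_value = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        value, _ = minimax(game["apply"](state, move), not maximizing, game)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move

toy_game = {
    "moves": lambda s: [] if len(s) >= 3 else [1, 2, 3],
    "apply": lambda s, m: s + [m],
    "score": lambda s: sum(s),
}
print(minimax([], True, toy_game))  # -> (7, 3): best total and best first move
```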

——–

A good question about intelligence is “WHY is intelligence?” The obvious answer is evolutionary adaptivity, which in turn helps explain why our brains are so good at some things and so bad at others. Human survival was more a function of short-term planning than long-term planning, so as you’d expect we are pretty good short-term planners (“Let’s eat!” “Let’s make a baby!” “Look out for that car!”) and pretty bad long-term planners (“Let’s address Social Security shortfalls!” “Let’s fix Iraq!”).

Why “recursive self improvement” could be the key to enlightenment.


This excellent article by Michael Anissimov describes two versions of how things could shake out in the coming artificial intelligence revolution, and suggests that strong AI (that is, computer-like devices that think pretty much like we do) will likely lead to an explosive increase in intelligence as a result of “recursive self improvement”. The idea is that intelligent machines will operate much faster than our brains can, and will also tend to improve on their own designs.

For humanity, design improvements to our brain architecture have been a very, very slow process governed primarily by evolutionary pressures. Basic analytical intelligence almost certainly emerged in animals as an adaptive survival advantage. Unlike our cousins the higher apes, humans have combined brain power with community history to build technologies that last through many generations and, more importantly, *improve* as new people grapple with new problems. This technological explosion is a fairly recent phenomenon, but it is still a very slow process compared to the progress you would expect in an environment driven purely toward advancing the technologies surrounding “intelligence”.

If Anissimov and many others in strong AI research are correct, the time between the advent of conscious, recursively self-improving computers and a massive explosion of intelligent machines could be very small – a few years, or possibly just a few moments.
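A crude toy model shows why. Suppose each design cycle’s improvement scales with the designer’s current capability – better designers make proportionally bigger improvements to their successors. The constants here are invented purely to show the shape of the curve:

```python
# Toy recursive self improvement: capability compounds, and the gain
# per cycle itself grows with capability. Invented constants; the point
# is the shape -- nearly flat for a while, then effectively vertical.
def recursive_self_improvement(capability=1.0, gain=0.10, cycles=18):
    history = [capability]
    for _ in range(cycles):
        capability *= 1 + gain * capability  # better designer, bigger step
        history.append(capability)
    return history

for cycle, level in enumerate(recursive_self_improvement()):
    if cycle % 3 == 0:
        print(f"cycle {cycle:2d}: {level:.3g}x starting capability")
```

In this toy, a dozen cycles of modest gains are followed by a handful of cycles that take the numbers past anything meaningful – which is exactly the worry about a fast takeoff.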

Currently, we humans make a handful of physical transformations that take us off the slow evolutionary treadmill. Glasses are a simple technology that changes us. Corneal transplants and heart stents are “advanced” technological enhancements to our bodies. Cell phones and computers are technological enhancements to our brains (and yes, the company called “BrainGate” has connected computer chips directly to brains, allowing human brains to interface with computers to do simple tasks).

Still, earth’s painstakingly slow evolutionary processes have yet to develop a creature that can rebuild itself every few days into a vastly superior version of its former self. We appear to be within a few decades of that type of entity.

The implications of this re-evolutionary development cannot be overstated.