Alan Turing Google Doodle Honors Computing Pioneer

Check out the most complicated Google Doodle of all time, which celebrates the birthday of computer science pioneer Alan Turing.  Turing is rightly considered a founder of computer science even though he never lived to see anything like the current crop of machines we now find in our homes, businesses, and mobile devices.

The Google Doodle represents a ‘codebreaker’ sequence.  Turing’s brilliance in cracking encoded Nazi war messages led to a major breakthrough when he helped break the “Enigma” cipher, giving the Allies access to a treasure trove of strategic information about Nazi war plans.
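Enigma was vastly more complex than anything sketchable in a few lines, but the basic spirit of statistical codebreaking can be illustrated with a toy Caesar cipher. Everything below is illustrative only – the message and approach are my own invention, not Turing’s actual methods:

```python
from collections import Counter

def crack_caesar(ciphertext):
    """Toy codebreaking: guess that the most frequent letter in the
    ciphertext stands for 'e' (English's most common letter), derive
    the shift, and undo it."""
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    shift = (ord(most_common) - ord('e')) % 26
    plain = []
    for c in ciphertext:
        if c.isalpha():
            base = ord('A') if c.isupper() else ord('a')
            plain.append(chr((ord(c) - base - shift) % 26 + base))
        else:
            plain.append(c)
    return ''.join(plain)

# A made-up message encrypted with shift 3, broken without the key:
secret = "Wkh hqhpb iohhw vdlov dw gdzq dqg wkh zhdwkhu lv fohdu khuh"
print(crack_caesar(secret))
# prints "The enemy fleet sails at dawn and the weather is clear here"
```

Real wartime cryptanalysis attacked far richer structure than letter frequencies, but the core idea – exploiting statistical regularities of language – is the same.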

The “Turing Test” remains an intriguing part of the quest for general artificial intelligence.  Turing suggested that a major step in the development of machine intelligence would be a human’s inability to distinguish a machine’s responses from those of another human.  Most current thinking suggests that a machine could pass the “Turing Test” and still NOT be considered true artificial intelligence, but Turing’s speculations remain some of the most important computing insights of all time.
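The setup of Turing’s imitation game is simple enough to sketch in code. Here’s a minimal toy version – the “human”, “machine”, and judge below are all hypothetical stand-ins of my own, not anything from Turing’s paper:

```python
import random

def imitation_game(judge, human_reply, machine_reply, prompt):
    """One round of a toy Turing Test: the judge sees two unlabeled
    replies and guesses which one came from the machine."""
    responses = [("human", human_reply(prompt)),
                 ("machine", machine_reply(prompt))]
    random.shuffle(responses)          # hide which reply is which
    guess = judge(prompt, [text for _, text in responses])
    return responses[guess][0] == "machine"   # True if judge caught the machine

# Hypothetical stand-ins: a canned "human" and a crude echoing "machine".
human = lambda p: "Honestly, I'd have to think about that."
machine = lambda p: "You said: " + p
naive_judge = lambda prompt, replies: 0      # always accuses the first reply

# A machine "passes" when judges do no better than the 50% that
# random guessing would achieve.
random.seed(42)
trials = 1000
caught = sum(imitation_game(naive_judge, human, machine, "What is love?")
             for _ in range(trials))
print(caught / trials)   # ≈ 0.5: this judge is no better than chance
```

Of course the hard part is the judge and the conversation, not the protocol – which is exactly why the test has stayed interesting for sixty years.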

Turing’s life was tragic in many ways.  He was gay at a time when the government prosecuted people for “indecency”, and his life was cut short by cyanide poisoning – most likely suicide or an accident – at the age of only 41.

What do I want for Christmas? General Artificial Intelligence.

If you are interested in artificial intelligence (or even just philosophy) you’ll enjoy the discussion over at IEEE Spectrum about new computer chips that many researchers believe are a big step towards conscious computers.

Although it would take a miracle to see things pop this year, my view remains the same as it has been for some time – we tend to exaggerate the complexity of human thought and consciousness, and machine “self-awareness” is probably more a function of the quantity of activity than the quality of activity.

Processor speeds and memory capacities are in some ways already “competitive” with the capabilities of human brains, though it will likely take many more years of research to build programs that can use those capacities effectively enough to duplicate most human-style thinking.

Evolution used simple tools and simple constructs to create complexity.  This is demonstrated especially well by new research suggesting the importance of fractal geometry in biology, where very simple equations, acted out in the RNA and DNA code of plants and animals, effectively define certain conditions of life.  Examples include tree growth and the human heartbeat.
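The tree-growth example makes the point nicely: a single recursive rule, applied over and over, yields a rich self-similar structure. Here’s a minimal sketch (my own toy parameters, not any particular biological model):

```python
def branch_lengths(length, ratio, depth):
    """One simple recursive rule - every branch spawns two shorter
    branches - generates the self-similar geometry of a tree."""
    if depth == 0:
        return [length]
    lengths = [length]
    for _ in range(2):                 # two child branches per branch
        lengths += branch_lengths(length * ratio, ratio, depth - 1)
    return lengths

# A trunk of length 1.0, children 60% as long, 8 generations deep:
tree = branch_lengths(1.0, 0.6, 8)
print(len(tree))   # 511 branches (2**9 - 1) from a two-line rule
```

That’s the whole trick: the “program” is tiny, yet the structure it generates is elaborate – which is exactly why so little genetic code can specify so much biological form.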

I’ll have a lot more about this debate at Technology Report, where I hope to have several more guest posts by the DARPA SyNAPSE researchers.


In a recent post we talked about Dr. Stephen Hawking’s concerns that we may encounter unfriendly aliens, and the idea that we don’t even want no stinking alien contact around our Earth.  I disagreed, and to my surprise cannot find nearly enough support for what I think is an obvious notion: superintelligences are very likely to be friends, not foes – or at least will just ignore us because we are, well, pretty unimpressive by the standards of the millions of intelligent species probably scattered all over our spectacularly large known universe.

Horatiox suggested I may be biased by what he calls a “cute alien” Hollywood standard, so I thought it would be fun to look at the top 20 sci-fi movies on IMDB and see what kinds of aliens appear in them.

1. 8.8 Star Wars: Episode V – The Empire Strikes Back (1980) 265,556
2. 8.8 Star Wars (1977) 309,465
3. 8.6 The Matrix (1999) 362,975
4. 8.6 Iron Man 2 (2010) 1,781
5. 8.5 Terminator 2: Judgment Day (1991) 225,768
6. 8.5 Alien (1979) 172,229
7. 8.5 WALL·E (2008) 168,690
8. 8.5 A Clockwork Orange (1971) 184,963
9. 8.5 Aliens (1986) 163,247
10. 8.4 Metropolis (1927) 37,463
11. 8.4 2001: A Space Odyssey (1968) 153,902
12. 8.4 Back to the Future (1985) 191,070
13. 8.3 Avatar (2009) 226,753
14. 8.3 Star Wars: Episode VI – Return of the Jedi (1983) 202,527
15. 8.3 Blade Runner (1982) 178,350
16. 8.2 District 9 (2009) 134,452
17. 8.2 The War Game (1965) 1,480
18. 8.2 Donnie Darko (2001) 200,859
19. 8.1 Ivan Vasilevich menyaet professiyu (1973) 2,905
20. 8.1 The Thing (1982) 74,264

Well, I’d hoped to make a stronger case that Hollywood aliens are mean, but it looks like you could make either case from these films: Star Wars bad guys vs. Star Wars cute nice guys, Terminator bad vs. WALL·E good, etc.  I think Hollywood is all over the place on this, though I guess Horatiox could point to Avatar and note how cute the aliens are and how mean the humans are.

Dear Aliens, please ignore Dr. Stephen Hawking. You are very welcome here anytime.

Stephen Hawking, the brilliant physicist who brings so much insight to physics, cosmology, and the study of the universe in general, seems to have spent a bit too much time watching “Independence Day” or ABC’s new TV show “V” before filming a recent segment of his new Discovery Channel series.

In one of his most widely quoted statements in years, Hawking noted (correctly and fairly obviously) that the math of the universe suggests there is almost certainly other life out there, and probably other intelligent life, but then bizarrely added this:

“Such advanced aliens would perhaps become nomads, looking to conquer and colonize whatever planets they could reach,” Hawking said. “If so, it makes sense for them to exploit each new planet for material to build more spaceships so they could move on. Who knows what the limits would be?”

He goes on to speculate that contacting aliens may well be a big mistake, as the collision of our culture and theirs could be similar to Columbus’s arrival in the Americas, with an outcome unfavorable to the indigenous populations.


I think I’ll give Hawking the benefit of the doubt and assume he’s been dipping into a legal marijuana prescription for some ailment (or, more likely, just hyping the alien connection for the show), but this kind of dumb statement from smart people reminds me of the singularity folks who fret far too much that superintelligences will be malevolent.

There is very little reason to assume this and a lot of reason to assume the opposite, as I explain below.

Also important is the fact that aliens with the technological capability to visit our lonely little planet at the edge of the galaxy are very likely to have technology so powerful that we’d pose essentially zero threat to them.  Friendship is a much better survival strategy than fighting and hoping for the preposterously stupid scenario of the film “Independence Day”, where a computer glitch, exploited via an Apple laptop (!), destroys a massive fleet of massive alien ships.

For example, go back to the Battle of Trafalgar, where the British defeated the combined French and Spanish fleets in a battle that would cement Britain’s global hegemony well into the next century.  Then consider how a *single* WWII aircraft carrier – representing only ~140 years of military technological improvement, versus the thousands of years the aliens would likely have on us – could have crushed both fleets in minutes without sustaining damage or casualties.  Whoever possessed that single ship could likely have dominated the globe for a century.

But I digress, because I don’t think aliens are likely to be mean, let alone threaten our existence.  In fact my greatest fear about aliens is that we’ll be so profoundly uninteresting to them – still in our very early stages of intellectual development – that they will … just … leave.

Why nice aliens?  First, if we view human intellectual development from an evolutionary, individual, or societal standpoint, we see that progress generally means *better treatment* of others, not worse.  Note for example how the once-common practices of child labor and slavery are out of vogue, not increasing in popularity.  Although slavery is still practiced by despicable folks, it is an aberration – illegal and generally fought by the powers that be rather than embraced as it was centuries ago.

In terms of evolutionary development, I think most of us would rather find ourselves confronted by even the most vicious and uncaring Wall Street CEO than by a hungry tiger shark or lion.  Evolution has “softened” our approach to hunting and gathering in ways that are less violent.  Even if the aliens Hawking fears come with the intention of exploiting our resources, this is likely to happen much more as a peaceful economic transaction than as a violent act of piracy.  For example, they might trade something of huge value to us, like cold fusion propulsion technology, for something they can’t synthesize themselves.

However, it also seems unlikely that they’d have any need of the resources we hold dear, because they will probably be able to synthesize all their needs from basic raw materials available on uninhabited planets and stars nearer to them.  Even a hundred years of nanotechnology progress leads to innovations that are hard for us to imagine, and these alien dudes are likely to be thousands of years beyond our technology – again making my case that they are likely to simply ignore us as uninteresting simple life rather than threaten us.  We don’t pay much attention to the worms, ants, spiders, and beetles in our yard, even though they have some very interesting capabilities.

….. more later …..

[ Singularity before Aliens / Edge of the galaxy problem / age = wisdom / more logic = less violence]

Artificial Sociopaths – Will Thinking Machines Go Bad? Not likely!

I’m in a fun email exchange with a bunch of clever folks talking about how “thinking machines” might come to be and might be mean to us, so I wanted to post my thoughts about that.  I’m not posting the others’ notes because I don’t have their permission yet…

I really hope more folks will chime in here, as this is the most important topic in the world even though most folks don’t realize that yet.  It should become clear within a few years that we are likely to be interacting with self-aware computers in as few as 10-15 years.

The key point I want to make is optimistic.  We’ve seen how computing dramatically improves our very limited abilities to calculate and analyze things, and I predict that when machines attain consciousness and the ability to communicate effectively with humans, extraordinary improvements will become commonplace.

I’d also predict that the machines are very UNLIKELY to pose a threat to humanity.  Humans have tended towards greater compassion as we’ve progressed, and we’ll pose few threats to thinking machines, which will likely quickly find ways to protect themselves – so I think the worst likely case is that they will choose to ignore us.  I’m hoping they’ll help us out instead.  Note that all AI efforts seek “friendly AI”, so the programmers are working to make helpers, not adversaries.  However, I also believe (unlike most people) that our early approaches will not matter much in terms of what the superintelligence eventually becomes.  Humans will catalyze the process of machine self-awareness, but then our brains will process things too slowly to continue participating in the evolution of intellect.

Philosophically speaking, I’d suggest that computer thinking will NOT be “fundamentally different” from ours, because I think rational thought is confined by the laws of the universe, most of which are well described by science and confined by mechanistic principles.  However, machine thinking will be much faster than ours and will proceed along more rational lines, unclouded by the emotions and cognitive biases that plague our thinking.  They’ll be better than us.

Is this optimism based on faith, or science, or …?  I’d say it’s speculation based on common-sense observations of how the world works and trends in the world, many of which point to superintelligent, self-aware machines within decades.  Faith – to my way of thinking – is an appeal to believe things that cannot be rationally deduced from the facts and data.  I’m not a big fan of that approach to knowledge.

To which somebody replied that I was expressing a lot of misguided techno faith and also that the machines would likely be sociopathic without the benefit of human thought approaches.

Wow, you really don’t like this idea of friendly artificial superintelligent machines?!  Come on, they’ll be more fun than the internet!  Also, unlike current chess programs, they’ll often let us win to maintain our fragile human egos.

Interestingly, your concerns about the potential for a sort of sociopathic AI are along the lines of some researchers in this area, and also some concerns expressed earlier.  Although I’m not worried about that much, I see it as a very separate issue from how likely we are to see these machines – which I’d argue is “extremely likely”, almost to the point of inevitability, because to me the enhancement of our intelligence via technology represents a very “natural” (though dramatically accelerated) progression from our primal evolutionary heritage.

I’m surprised you see me as having “blind faith”.  I think faith approaches are irrational almost by definition and don’t offer much insight.  I would also argue that the advent of thinking machines, and what I contend is their likely friendliness, are deductions from human and machine observation and history.  Note how humans have already merged with machines in several ways: contact lenses, cochlear implants, BrainGate and Emotiv headsets (which use brain waves to control computers), and many more.  I see the next level of interaction as intellectual enhancement devices.  It’s not a creepy sci-fi vision at all, rather the logical progression of how humans, pre-humans, and even many animals have used our intellects to develop and interact with useful tools.

Many (including me) think that thinking machines will come *after* many more rounds of slow merging of humans with computing devices.  If you are concerned about sociopathic computers this should come as some comfort, because the change is most likely to be part of an ongoing process of co-evolution in which humans and machines work together.  Currently only half of that equation can think autonomously, but soon (I hope) both we and the machines will be thinking partners.

I may be wrong here, but I’m not using faith-based thinking.  In fact I think faith is one of the main impediments to people seeing the inevitable reality of what is to come.  As suggested in an earlier note, the advent of thinking machines may challenge many of the conventional religious beliefs that many hold very dear.  I actually think this tension will be far more likely to create acts of violence than anything we’ll see from the thinking machines, who will very quickly evolve to a state where they could simply … leave the planet (another reason I don’t think there’s much to worry about here in terms of superintelligent machines gone bad).

Brain enhancement through technology – just say YES!

Over at Read Write Web, the most excellent Marshall Kirkpatrick has suggested, and continues to think, that connecting our brains to the internet – things like internet brain implants – is a bad idea.

As much as I don’t like to challenge a fellow Oregonian, I could not disagree with Marshall more on this issue for several reasons:

The first is practical.  Wonderful invasive technologies are here already, in the form of cochlear implants for hearing enhancement and even crude artificial eyes using brain implants.  Less invasive technologies that use brain-wave controller devices (e.g. Emotiv headsets and some simpler fun games) are here, and more will be coming soon to a brain near yours.

Regardless of whether other brain enhancements are good or bad, why fight the inevitable rather than just work with it?  Although nobody yet offers direct brain internet access, it should be available within a few years.

Think of the amazing advantages, especially when we can get the communication flowing in both directions at computer speeds – which are generally much faster than those of organic transmission.  Language enhancement alone suggests to me that this would have amazing value, and I think more than a few high schoolers will enjoy solving calculus equations without any study.

Will these new abilities make us lazy?  It’s impossible to know, but I’d guess that the intellectual explosion we’ll see as enhancements hit the marketplace will bring far more solutions than problems, as people can spend the huge amount of time once spent *learning* on *doing things* instead.

Brain implants?   Sign me up, Scotty!

Killerspin Hardbat used for the Hardbat Classic

A controversial new paddle made by Killerspin for the Hardbat Classic in Las Vegas made it very hard for many experienced players to compete effectively – either with each other or with lower-ranked players, who often had a huge point advantage on top of the equipment handicap of making everybody use either the paddle shown here or an even cheaper “junk paddle” version.

Although I approve of the handicapping process, I think it needs some modifications: a paddle that favors defensive play and produces long, quality rallies rather than the short rallies this blade tended to create.  I only saw a handful of “great points” where we would have seen hundreds with regular sponge rubber.  I’d favor a modification that would introduce a USTTA-approved hardbat and/or a slow sponge version.  Another approach might be to move to the “big ball” format, which is much slower but does not destroy the fun of watching high-quality loopers arc the ball back and forth many times.

Inexperienced players and non-players want high-quality play, and this blade does not give us much of that, even when it is wielded by some of the finest players in the country, as happened at the Hardbat Classic.


Table tennis will probably *never* be a good TV sport but it’s the world’s greatest participation sport and I think the focus needs to be on bringing people into the game rather than changing it to fit TV better.

Future of Education Part II

In the coming years people are likely to experience the most profound transformation in all of history.  The event is often called “the Singularity” because it’s very hard to know what will happen after the ongoing fast rise in machine intelligence fully surpasses human capabilities.  Computers are very likely to become conscious and “recursively self-improving”, allowing them to reinvent themselves as frequently as they choose, in various forms.

I agree with those who believe the coming conscious computers will be the last human invention, as they will improve themselves at lightning speed and surpass human intelligence by *millions of times* within years – or perhaps even minutes – of developing consciousness.

It is clear that when this happens, education as we know it in all forms will be completely obsolete, as the computers will spawn sweeping and extremely rapid advances in all scientific fields, including biology and engineering.  Many humans will choose to either merge with machines or simply “download” their entire consciousness into a machine.  This transition would be seamless, merely shifting the “substrate” we use to think from our existing electrochemical, carbon-based neural structure to something more permanent – probably some combination of silicon, carbon, and thinking software programs.

Although some experts believe the machines are likely to pose an “existential risk” to humanity because they will see human irrationality as a threat, my view is that historically intelligence has bred greater compassion.  We’ll first enjoy the benefits of the conscious machines’ vast intellectual and engineering capabilities, and later merge with them by downloading our existing memories and full intellects into something somewhat analogous to a computer’s “hard drive”.  “Life” would then become what we chose to make it: we might simply simulate an earthbound existence in our new virtual world, or we might choose to simulate entirely different lives or experiences designed within a vast interconnected global intelligence.  The underlying technical infrastructure would continue to improve and maintain itself indefinitely, making these intelligences immortal if they chose that route.

Some interesting *current* developments along these lines are:

Singularity University in Silicon Valley – sponsored by Google and other tech leaders, this school will teach about the sweeping changes coming as machine intelligence surpasses that of humans.

Blue Brain Project, Switzerland – IBM and several researchers have completed a simulation of a neocortical column with Blue Gene, the world’s fastest supercomputer.  This project will expand the simulation with the next generation of supercomputers coming within a few years, and seeks to create a fully functional human-like brain simulation.

SyNAPSE Project – announced earlier this year and funded by the US military’s DARPA, this represents the best-funded attempt to date to build a functional brain.  SyNAPSE’s initial goal is to design a working version of a mammalian brain.  The approach differs from Blue Brain in that it’s largely based on finding a working “software solution” rather than using techniques to duplicate the brain’s hardware.
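To get a feel for what “simulating neurons” even means, here’s a textbook toy: a single leaky integrate-and-fire neuron. This is a vastly simplified classroom model with parameters I picked for illustration – not the actual Blue Brain or SyNAPSE models, which simulate far richer neuron dynamics by the thousands:

```python
def simulate_lif(inputs, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks
    toward rest, accumulates input current, and 'spikes' (then resets)
    whenever it crosses the firing threshold."""
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)   # leak term plus input current
        if v >= v_thresh:
            spikes.append(t)             # record the spike time
            v = v_reset                  # reset after firing
    return spikes

# A steady input current produces regular, clock-like firing:
print(simulate_lif([0.15] * 100))       # fires roughly every 11 steps
```

Brain simulation projects wire enormous numbers of units like this (or much more detailed ones) together, so that each neuron’s spikes become input currents for the others.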

AI Primer from the New York Times

This piece at the NYT is not a very inspired article, but it does outline some basic artificial intelligence history and issues.  I think it remains *nearly impossible* for many to grasp the implications of the coming convergence of human and machine capabilities – a convergence that is going on at this very moment in subtle ways but which will likely blossom into something amazing within a decade, perhaps less.  The first self-aware computer is likely to be the last significant invention of humankind – not because it will destroy us, but because it will make our intellects *obsolete*.

The following “science fiction” inventions are alive and well *right now*:

BrainGate and Emotiv headsets:  Mind control of computers

DARPA Autonomous Vehicles:  Cars that drive themselves through complex city traffic with *zero* human input

Blue Brain:  Supercomputer working simulation of a neocortical column of a rat.