What do I want for Christmas? General Artificial Intelligence.


If you are interested in artificial intelligence (or even just philosophy) you’ll enjoy the discussion over at Neurdon.com and IEEE Spectrum about new computer chips that many researchers believe are a big step towards conscious computers.

Although it would take a miracle to see things pop this year, my view remains the same as it has been for some time – we tend to exaggerate the complexity of human thought and consciousness, and machine “self awareness” is probably more a function of the quantity of activity than the quality of activity.

Processor speeds and memory capacities are in some ways already “competitive” with capabilities of human brains, though it will likely take many more years of research to build the programs that can utilize these capacities effectively enough to duplicate most human-style thinking.

Evolution used simple tools and simple constructs to create complexity.   This is demonstrated especially well by some new research suggesting the importance of fractal geometry in biology, where very simple equations, acted out in the RNA and DNA code of plants and animals, effectively define certain conditions of life.  Examples are tree growth and a human heartbeat.
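A toy illustration of that point (my own sketch, not from the research): an L-system, where a single rewriting rule, iterated just a couple of times, produces an exponentially branching, tree-like structure.

```python
# Toy L-system: one simple rewrite rule, applied repeatedly, yields an
# exponentially branching tree-like string -- a crude stand-in for how
# simple rules encoded in DNA can generate biological complexity.

def expand(axiom, rules, iterations):
    """Apply the rewrite rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic fractal-tree rule: F -> F[+F]F[-F]F
# ('[' and ']' mark branch points, '+'/'-' are turns for a turtle renderer)
rules = {"F": "F[+F]F[-F]F"}

tree = expand("F", rules, 2)
print(tree)
print("branch points:", tree.count("["))
```

Two iterations of one rule already give a dozen branch points; a few more and you have something that, drawn with a turtle renderer, looks remarkably like a fern or a tree.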

I’ll have a lot more about this debate at Technology Report where I’ll hope to have several more guest posts by the DARPA SyNAPSE researchers.

Artificial Sociopaths – Will Thinking Machines Go Bad? Not likely!


I’m in a fun email exchange with a bunch of clever folks talking about how “thinking machines” might come to be and might be mean to us, so I wanted to post my thoughts about that.   I’m not posting the others’ notes because I don’t have their permission yet…

I really hope more folks will chime in here, as this is the most important topic in the world even though most folks don’t realize it yet.   It should become clear within a few years that we are likely to be interacting with self-aware computers in as few as 10-15 years.

———–
The key point I wanted to make is optimistic.   We’ve seen how computer approaches dramatically improve our very limited abilities to calculate and analyze things, and I predict  that  when machines attain consciousness and the ability to communicate effectively with humans extraordinary improvements will become commonplace.

I’d also predict that the machines are very UNLIKELY to pose a threat to humanity.   Humans have tended towards greater compassion as we’ve progressed, and we’ll pose few threats to the thinking machines, which will likely quickly find ways to protect themselves, so I think the worst likely case is that they will choose to ignore us.   I’m hoping they’ll help us out instead.   Note that all AI efforts seek “friendly AI”, so the programmers are working to make helpers, not adversaries.   However I also believe (unlike most people) that our early approaches will not matter much in terms of what the superintelligence eventually becomes.   Humans will catalyze the process of machine self awareness, but then our brains will process things too slowly to continue our participation in the evolution of intellect.

Philosophically speaking I’d suggest that computer thinking will NOT be “fundamentally different”, because I think our rational thought is confined by the laws of the universe, most of which are well described by science and confined by mechanistic principles.   However the machines’ thinking will be much faster than ours and proceed along more rational lines, unclouded by the emotions and cognitive biases that plague our thinking.   They’ll be better than us.

Is this optimism based on faith, science, or something else?   I’d say it’s speculation based on common sense observations of how the world works and trends in the world, many of which point to superintelligent, self-aware machines within decades.   Faith – to my way of thinking – is an appeal to believe things that cannot be rationally deduced from the facts and data.   I’m not a big fan of that approach to knowledge.

To which somebody replied that I was expressing a lot of misguided techno faith and also that the machines would likely be sociopathic without the benefit of human thought approaches.

Wow, you really don’t like this idea of friendly artificial super intelligent machines?!   Come on, they’ll be more fun than the internet!     Also, unlike current chess programs they’ll often let us win to maintain our fragile human egos.

Interestingly, your concerns about the potential for a sort of sociopathic AI are along the lines of those raised by some researchers in this area, and of some concerns expressed earlier.   Although I’m not worried about that much, I see it as a very separate issue from how likely we are to see these machines – which I’d argue is “extremely likely”, almost to the point of inevitability, because to me the enhancement of our intelligence via technologies represents a very “natural” (though dramatically accelerated) progression from our primal evolutionary heritage: http://en.wikipedia.org/wiki/Technological_singularity.

I’m surprised you see me as having “blind faith”.   I think faith approaches are irrational almost by definition and don’t offer much insight.   I also would argue that the advent of thinking machines, and what I contend to be their likely friendliness, are derived from human and machine observations and histories.   Note how humans have already merged with machines in several ways: contact lenses, cochlear implants, BrainGate and Emotiv headsets (which use brain waves to control computers), and many more.   I see the next level of interaction as intellectual enhancement devices.   It’s not a creepy sci-fi vision at all, rather the logical progression of how humans, pre-humans, and even many animals have used our intellects to develop and interact with useful tools.

Many (including me) think that thinking machines will come *after* many more rounds of slow merging of humans with computing devices.   If you are concerned about sociopathic computers this should come as some comfort because it’s most likely to be part of the ongoing process of co-evolution where humans and machines work together.  Currently only half of that equation can think autonomously but soon (I hope) both we and the machines will work together.

I may be wrong here, but I’m not using faith-based thinking.   In fact I think faith is one of the main impediments to people seeing the inevitable reality of what is to come.   As suggested in an earlier note, the advent of thinking machines may challenge many of the conventional religious beliefs that many hold very dear.   I actually think this tension will be far more likely to create acts of violence than we’ll see from the thinking machines, who will very quickly evolve to a state where they could simply … leave the planet (another reason I don’t think there’s much to worry about here in terms of superintelligent machines gone bad).

Killerspin Hardbat used for the Hardbat Classic


A controversial new paddle made by Killerspin for the Hardbat Classic in Las Vegas made it very hard for many experienced players to compete effectively, either with each other or with the lower-ranked players, who often had a huge point advantage on top of the equipment handicap from making everybody use either the paddle shown here or an even cheaper “junk paddle” version.

Although I approve of the handicapping process, I think they need some modifications to make a paddle that favors defensive play and produces long, quality rallies rather than the short rallies this blade tended to create.   I only saw a handful of “great points” where we would have seen hundreds with regular sponge rubber.   I’d favor a modification that would introduce a USTTA-approved hardbat and/or a slow sponge version.   Another approach might be to move to the “big ball” format, which is much slower but does not destroy the fun of watching high quality loopers arc the ball back and forth many times.

Inexperienced and non-players want high quality play, and this blade does not give us much of that, even when it is wielded by some of the finest players in the country, as happened at the Hardbat Classic.


Table tennis will probably *never* be a good TV sport but it’s the world’s greatest participation sport and I think the focus needs to be on bringing people into the game rather than changing it to fit TV better.

Future of Education Part II


In the coming years people are likely to experience the most profound transformation in all of history.   The event is often called “The Singularity” because it’s very hard to know what will happen after the ongoing fast rise in machine intelligence fully surpasses human capabilities.   Computers are very likely to become conscious and “recursively self improving”, allowing them to reinvent themselves as frequently as they choose in various forms.

I agree with those who believe the coming conscious computers will be the last human invention as they will improve themselves at lightning speed and surpass human intelligence by *millions of times* within years or perhaps even minutes of developing consciousness.
It is clear that when this happens, education as we know it in all forms will be completely obsolete, as the computers will spawn sweeping and extremely rapid advances in all scientific fields including biology and engineering.   Many humans will choose to either merge with machines or simply “download” their entire consciousness into a machine.   This transition would be seamless, merely shifting the “substrate” we use to think from our existing electrochemical, carbon based neural structure to something more permanent – probably some combination of silicon, carbon, and thinking software programs.

Although some experts believe the machines are likely to pose an “existential risk” to humanity because they will see human irrationality as a threat, my view is that historically intelligence has bred greater compassion, and that we’ll first enjoy the benefits of the conscious machines’ vast intellectual and engineering capabilities and later merge with them by downloading our existing memories and full intellects into something somewhat analogous to a computer’s “hard drive”.   “Life” would then become what we chose to make it, as we might simply simulate an earthbound existence in our new virtual world, or we might choose to simulate entirely different lives or experiences designed within a vast interconnected global intelligence.   The underlying technical infrastructure would continue to improve and maintain itself indefinitely, making these intelligences immortal if they chose that route.

Some interesting *current* developments along these lines are:

Singularity University in Silicon Valley – sponsored by Google and other tech leaders, this school will teach about the sweeping changes coming as machine intelligence surpasses that of humans.

Blue Brain Project, Switzerland – IBM and several researchers have completed a simulation of a neocortical column with Blue Gene, the world’s fastest supercomputer.   This project will expand the simulation with the next generation of supercomputers coming within a few years, and seeks to create a fully functional human-like brain simulation.

SyNAPSE Project: This project, announced earlier this year and funded by the US military’s DARPA division, represents the best-funded attempt to date to build a functional brain.   SyNAPSE’s initial goal is to design a working version of a mammalian brain.   The approach differs from Blue Brain in that it’s largely based on finding a working “software solution” rather than using techniques to duplicate the brain’s hardware.

AI Primer from the New York Times


This piece at the NYT is not a very inspired article, but it does outline some basic artificial intelligence history and issues.   I think it remains *nearly impossible* for many to grasp the implications of the coming convergence of human and machine capabilities – a convergence that is going on at this very moment in subtle ways but which will likely blossom into something amazing within a decade, perhaps less.   The first self-aware computer is likely to be the last significant invention of humankind.   Not because it will destroy us, but because it will make our intellects *obsolete*.

The following “science fiction” inventions are alive and well *right now*:

Braingate and Emotiv Headset:    Mind control of computers

DARPA Autonomous Vehicles:  Cars that drive themselves through complex city traffic with *zero* human input

Blue Brain:  Supercomputer working simulation of a neocortical column of a rat.

Friendly vs Unfriendly Artificial Intelligences – an important debate


As we quickly approach the rise of self-aware and self-improving intelligent machines, the debates are going to sound pretty strange, but they are arguably the most important questions humanity has ever faced.   Over at Michael’s Blog there’s a great discussion about how unfriendly AIs could pose an existential risk to humanity.

I remain skeptical, writing over there about Steve Omohundro’s paper:

Great references to make your earlier point, though I remain very skeptical of Steve’s worries even though one can easily agree with most of his itemized points. They just don’t lead to the conclusion that a “free range” AI is likely to pose a threat to humanity.

With a hard takeoff it seems likely to me that any *human* efforts at making a friendly AI will be modified to obscurity within a very short time. More importantly though it seems very reasonable to assume machine AI ethics won’t diverge profoundly from the ethics humanity has developed over time. We’ve become far less ruthless and selfish in our thinking than in the past, both on an individual and collective basis. Most of the violence now rises from *irrational* approaches, not the supremely rational ones we can expect from Mr. and Mrs. AI.

Wait, there’s MORE AI fun here at CNET

Dear President Obama – Fund these projects FTW!


I’ve written about the remarkable Blue Brain project here and at Technology Report, but there is a new AI project on the block that some seem to think has more potential to attain “strong AI” – independent computer thinking and probably machine consciousness.   That project is called SyNAPSE, and the lead researcher explains some of the thinking behind this amazing effort:

The problem is not in the organisation of existing neuron-like circuitry, however; the adaptability of brains lies in their ability to tune synapses, the connections between the neurons.

Synaptic connections form, break, and are strengthened or weakened depending on the signals that pass through them. Making a nano-scale material that can fit that description is one of the major goals of the project.

“The brain is much less a neural network than a synaptic network,” Modha says.
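A minimal sketch of what “a synaptic network” means in practice (my own toy model, not from the SyNAPSE project): a Hebbian-style update in which a connection strengthens when its two neurons fire together and slowly decays otherwise, so the learning lives in the weight, not the neuron.

```python
# Hebbian-style plasticity sketch: the "knowledge" sits in the synaptic
# weight, which grows with correlated pre/post activity and shrinks by
# a small passive decay when the neurons fall silent.

def update_weight(w, pre, post, lr=0.1, decay=0.01):
    """One plasticity step for a single synapse.

    pre/post are activity levels in [0, 1]; correlated activity
    strengthens the weight, and a small decay term weakens it.
    """
    return w + lr * pre * post - decay * w

w = 0.5
# Correlated firing: the synapse strengthens...
for _ in range(10):
    w = update_weight(w, pre=1.0, post=1.0)
strengthened = w
# ...then silence: it slowly weakens again.
for _ in range(10):
    w = update_weight(w, pre=0.0, post=0.0)

print(f"after correlated firing: {strengthened:.3f}")
print(f"after silence:           {w:.3f}")
```

The hard part the project describes is doing this tuning in a nano-scale physical material rather than in software, but the form/break/strengthen/weaken behavior being mimicked is the one sketched above.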

There’s not much information yet about this new project, but a wiki that appears to be open to the public has been started here.

IBM and five universities are involved in this, with funding from DARPA, the US military’s cutting-edge technology agency.   I’m glad to see what appears to be a very open architecture approach here, because there should be very real concerns that a militaristic AI would be less likely to be “friendly”, and once we open the Pandora’s box of machine consciousness and superintelligence there is little reason to think we’ll ever be able to close it again.

The upside of these projects is literally and quite simply beyond our wildest imaginations.   A thinking, conscious machine will solve almost every simple problem on earth and is very likely to solve major problems such as providing massive amounts of cheap energy, clean water, and health innovation.   Although I’m guessing we’ll still run around killing other humans for some time, it’s reasonable to assume that a thinking machine will be the last significant human innovation as it ushers in the beginning of a remarkable machine-based era of spectacular new technological innovation.