Computer interface = your brain


OK, I know what I want next Christmas: an Emotiv headset, and I’m not even a gamer. This is the next generation of gaming controllers, and although the final product will probably leave much brain-to-computer control to be desired, I’d suggest that the type of human-to-machine interaction this headset is designed to popularize, combined with other research such as BrainGate with implanted electrodes, is the beginning of what we’ll someday view as a profoundly significant era in humanity, during which we increasingly merge with our own machines.

Sure, this sounds a bit creepy, but we’ve been integrating with machines for, oh, at least as long as the species has been on the planet (and unless you are Mike Huckabee, that would be considerably more than 6,000 years).

In a fairly short time humans have transitioned from simple tools such as spears and fashioned rocks to more complicated tools such as cars and computers. We’ve also made modest progress actually bringing tools into and onto our bodies – e.g. eyeglasses, contact lenses, corneal implants, prosthetics, and cochlear implants for hearing – and most recently projects like BrainGate have made it clear that we can communicate with machines using only signals from our brains.

None of this stuff should really startle people’s sensibilities. There is nothing “magical” about being human. We are a product of the same physical, chemical, and biological forces that brought us other interesting items on earth such as rocks, trees, toadstools, and chimpanzees. Although it’s been popular for many years – even in otherwise scientifically sophisticated circles – to suggest humans have a very different relationship to things than other animals, this notion will eventually fall into the dustheap of outmoded hypotheses, and we’ll begin to realize that despite our many notable attributes, the most noticeable aspects of humanity are our … limitations.

CNET Reports on Emotiv over at the Crave gadget blog.

Emotiv Website

Kurzweil on cellular level computing


Ray Kurzweil is shaking up our idea of what will be with his amazing predictions about the future of computing – a future he thinks will soon lead to the emergence of computers so small and powerful they’ll drive our own thinking processes from within. Speaking at a gaming conference today, Kurzweil noted that the accelerating advances in computer technology will soon allow fully immersive virtual reality experiences, coming to a body near you. Cool.

Engineering’s Grand Challenges


The National Academy of Engineering has suggested a list of the world’s greatest and most important engineering challenges, and it looks pretty comprehensive to me. If we can solve all these problems we’ll really be taking life on earth up a few notches and kicking some globally sustainable, problematic butt.

I hope they add a priority and ROI component here. My feeling is that reverse engineering the brain will lead to general artificial intelligence and very rapid solutions to most if not all analytical problems. Thus I’d like to see us devote, say, 1/100th of what we are poised to squander failing to solve CO2 problems to AI research. But even if we forego that notion, it’s questionable to spend on engineering as we currently do, especially on huge military technologies of questionable effectiveness.

Here are the Grand Challenges for engineering as determined by a committee of the National Academy of Engineering:

  • Make solar energy economical
  • Provide energy from fusion
  • Develop carbon sequestration methods
  • Manage the nitrogen cycle
  • Provide access to clean water
  • Restore and improve urban infrastructure
  • Advance health informatics
  • Engineer better medicines
  • Reverse-engineer the brain
  • Prevent nuclear terror
  • Secure cyberspace
  • Enhance virtual reality
  • Advance personalized learning
  • Engineer the tools of scientific discovery

LEDs in Contact Lenses? Cool!


Technology continues to blur the line between our bodies and helpful gadgets. CNET reports that the University of Washington is experimenting with embedding LEDs into contact lenses, a step in the direction of creating vision-correcting contact lenses.

In his powerful book “The Singularity is Near”, Ray Kurzweil notes how powerfully the technologies of nanotechnology, robotics, and genetics will enhance our understanding of the way our human attributes work to create awareness, intelligence, and consciousness.

Conscious Computers and Friendly vs Unfriendly AI


As I’ve noted here in posts about AI many times, I think we are within 15 years – probably fewer – of the most profound change in technology and humanity ever to hit the planet. This will be the advent of conscious computers, which we can reasonably expect to surpass us in all thinking and organizational skills within a very short time – probably months or even days of becoming conscious.

Some AI folks believe that strong AI machinery will require a somewhat lengthy learning period, much like human intellects require, before becoming highly functional, but I think the process will be very fast after consciousness happens. In my opinion it is easy to exaggerate the significance of the intellectual complexity that comes from massive numbers of redundant, mostly simple processes. Unlike humans, computer intelligences will grow extremely fast as soon as they “choose” that approach. Initially those choices to expand will be programmed in by the human AI programmer, but it seems logical to assume that as computers design their own replacements they will continue to give the next generation “motivation”. You don’t even need to assume it’ll happen in this proactive way, though. In a world with various forms of intelligence, those that value their own survival will tend to increase in number simply through basic mathematical/evolutionary processes, while those that do not value survival as highly are more likely to drop off the scene.
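That selection argument can be sketched as a toy simulation. Everything here – the population size, the mutation noise, the idea that an agent’s “survival drive” doubles as its survival probability – is an arbitrary assumption for illustration, not a model of any real AI system:

```python
import random

# Toy model: agents with a higher "survival drive" persist and replicate
# more often, so the population average drifts upward even though no agent
# is ever explicitly programmed to "want" anything.
random.seed(0)
population = [random.random() for _ in range(1000)]  # drives in [0, 1]

for generation in range(50):
    # An agent's drive doubles as its probability of surviving this round.
    survivors = [a for a in population if random.random() < a]
    # Survivors replicate (with a little mutation) back up to the cap.
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
        for _ in range(1000)
    ]

mean_drive = sum(population) / len(population)
print(f"mean survival drive after 50 generations: {mean_drive:.2f}")
```

Starting from a population average of about 0.5, selection alone pushes the mean close to 1.0 within a few dozen generations – no foresight or “proactive” programming required.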

So, my cousin asked me today, why would a machine care much, if at all, about human welfare? My gut says they will, and I think this is based on watching how humans care so much for their animals and even inanimate objects. Also, I think it’s important to note how crappily we take care of our fellow humans. We consistently choose fighting and selfishness over harmonious existence.

So I say give the computers a shot at making the world a better place!  

Are you biased?


My friend Marvin dropped in today on his way down to California and we were discussing artificial intelligence. Like most of my programming pals he’s much more skeptical than I am about how soon we’ll have conscious computing, but programmers are also far more knowledgeable than I am about the difficulty of programming complex routines, let alone consciousness. Of course, they are not nearly as pretty as Google uber-Engineer Marissa Mayer, who estimated 10-15 years, so I’m going with her estimate.

I’m still trying to decide if programmers are viewing things too narrowly by generally assuming that the circumstances required for conscious thought are so profoundly complex that engineering them will be nearly impossible. I prefer the idea that simply having brain-equivalent speed and massive computational power will push machines very close to consciousness once (relatively) simple routines are developed to create conversations within those systems.

When I noted that many in the AI community are now wildly optimistic about the prospects for strong AI within 10-20 years, Marvin correctly noted that people in the AI community were predicting strong AI a *long* time ago. This led to the interesting question of “prediction bias”. How often in history are predictions reasonably accurate, and how do the time estimates on those accurate predictions hold up? This would be a fun mini-research project to do sometime, though obviously it would itself be subject to a lot of bias depending on how you picked the criteria, the predictors, and the predictions.
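One way the mini-research project could score forecasts is to compare how long each prediction promised against how long the thing actually took. The entries below are made-up placeholders just to show the scoring, not researched historical data:

```python
# Sketch of a "prediction bias" scoring pass: a ratio above 1.0 means the
# forecast was too optimistic (it took longer than promised). The data
# rows are hypothetical placeholders, not real predictions.
predictions = [
    # (label, years the forecast promised, years it actually took)
    ("forecast A", 10, 40),
    ("forecast B", 20, 35),
    ("forecast C", 5, 12),
]

ratios = [actual / promised for _, promised, actual in predictions]
for (label, promised, actual), ratio in zip(predictions, ratios):
    print(f"{label}: off by a factor of {ratio:.1f}")

avg_optimism = sum(ratios) / len(ratios)
print(f"average optimism factor: {avg_optimism:.2f}")
```

Even this crude average would let you compare optimism across fields or across eras of predictors, which is exactly where the selection-bias worries about criteria and predictors would kick in.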

Along those bias lines, this great Wikipedia article popped up showing a huge number of cognitive biases. All of us should take a look at these and reflect on how often we fall into these irrational traps.

Novamente – teaching virtual entities to “fetch”


A sign that things are starting to hop in the field of artificial intelligence is that a topic of conversation that would have been considered fanciful – or even insane – some 20 years ago is now fair game at any Silicon Valley pub or coffee shop. Novamente is a fascinating company doing development and research guided in part by the idea that the best path to general artificial intelligence (that is, intelligence much like we humans have) is through a similar-to-human learning path. To this end Novamente is teaching virtual entities to fetch, recognize themselves, and pass through other early stages of human learning. This is taking place in part in the Second Life virtual world.

Sounds crazy? Just a game? I don’t think so. It may be optimistic to think that AI thinking can come about this way, but it’s sure worth a try.

Novamente

The Illusion of Will. Prisoners of the synapse?


This morning I stumbled on a reference to a book by Harvard psychologist Daniel Wegner called “The Illusion of Conscious Will”, one of those interesting books I’d like to read but probably won’t. My coffee pal Roy had clued me in to this research some time ago, and the key point is available online via reviews and such. It is simply this:

We don’t have conscious will.    Things happen to us, and we process them using our conscious mind, but we don’t *make them happen*.

Now, at first glance this deterministic view seems absurd. Of course, one might say, I control my actions. But determinist psychology folks point out that it’s increasingly clear that our actions are *preceded* by brain activity and events, which would suggest – I think I’m getting this right – that by the time we are doing “conscious processing” about the thing we are doing, we are already engaged in the activity. I.e., the “cause” of our actions comes before the conscious processing period. From a nice review of Wegner’s book I understand he thinks we confuse this after-the-fact processing with “control”.

Although I am pretty much a determinist, I am also uncomfortable with the idea that we are sort of passive players in a predetermined universal play. The “gut test” says we control our actions and decide what to do.

I think my ongoing hypothesis about this will be similar to my idea that consciousness is a conversation between different parts of our brain. These conversations, many of which take place during waking hours and some during sleep, allow us to process information very creatively and act on mental models of the world around us. It seems we might not have control over our actions 0.1 seconds before them, but that we might have control via processes that happen seconds before, as our brain runs through various scenarios. Now, I think Wegner would say – correctly – that for any given conscious thought you can show there is preceding electrochemical activity (synapses firing and such) that is not reasonably defined as conscious.

However what if that initial spark of reflection is unconscious but then leads to a back and forth conscious conversation within your mind that in turn leads to the action. Would that be free will?

[my brain answers –   dude, no way, you have no free will.   Now, stop blogging obscurities and pass the damn M&Ms!]

What is “Intelligence” ?


Some good posts are popping up over at the Singularity Institute blog, though the discussions have been taking on that odd “hostile academic” tone you often find from PhD wannabes who spend way too much time learning how to reference obvious things in obscure ways.

Michael Anissimov asked over there “What is Intelligence?” and offered up a definition that could apply to human as well as artificial intelligence.

I would suggest that intelligence is overrated as part of our evolutionarily designed, self-absorbed human nature, and in fact is best studied as separate from the states of “consciousness” and “self-awareness” that are harder to define. I think computers – and even a simple calculator – have degrees of intelligence, but they do not have consciousness or self-awareness. It is these last two things that make humans think we are so very special. I’d say consciousness is neat but probably a simpler thing than we like to … um … think about.

Over there I wrote this in response to Michael’s post:

My working hypothesis about “intelligence” is that it is best viewed and defined in ways that separate it from “consciousness”.  I’d say intelligence is best defined such that it can exist without consciousness or self-awareness.   Thus I’d refer to a computer chess program as intelligent, but not conscious or self aware. 

I would suggest that intelligence is a prerequisite for consciousness which is a prerequisite for self-awareness, but separating these three things seems to avoid some of the difficulties of explanations that get bogged down as we try to develop models of animal and non-animal intelligence.  Also, I think this will describe the development curve of AIs which are already “intelligent”, but none are yet “conscious” or “self aware”.   I think consciousness may turn out to be simply a *massive number* of  interconnections carrying on intelligent internal conversations within a system – human or AI.

A stumbling block I find very interesting is the absurd notion that human intelligence is fundamentally or qualitatively different from other animal intelligences. Although only a few other species appear to have self-awareness, there are many other “conscious” species and millions of “intelligent” species.

——–

A good question about intelligence is “WHY is intelligence?” The obvious answer is evolutionary adaptivity, which in turn helps explain why our brains are so good at some things and so bad at others. E.g., human survival was more a function of short-term planning than long-term planning, so as you’d expect we are pretty good short-term planners (“Let’s eat!”, “Let’s make a baby!”, “Look out for that car!”) and pretty bad long-term planners (“Let’s address Social Security shortfalls!”, “Let’s fix Iraq!”).

Why “recursive self improvement” could be the key to enlightenment.


This excellent article by Michael Anissimov describes two versions of how things could shake out in the coming artificial intelligence revolution, and suggests that it’s more likely strong AI (that is, computer-like devices that think pretty much like we do) will lead to an explosive increase in intelligence as a result of “recursive self-improvement”. The idea is that intelligent machines will not only operate much faster than our brains can, but will also tend to improve on their own designs.

For humanity, design improvements to our brain architecture have been a very, very slow process governed primarily by evolutionary challenges. Basic analytical intelligence almost certainly emerged in animals as an adaptive advantage in terms of survival. Unlike our cousins the higher apes, we humans have combined brain power with community history to build technologies that last through many generations and, more importantly, *improve* as new people grapple with new problems. This technological explosion is a fairly recent phenomenon, but it is still a very slow process compared to the progress you would expect in an environment driven purely towards advancing the technologies surrounding “intelligence”.

If Anissimov and many others in strong AI research are correct, the time between the advent of conscious, recursively self-improving computers and a massive explosion of intelligent machines could be very small – a few years, or even possibly just a few moments.
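The difference between improvement that feeds back into the improver and improvement that doesn’t can be seen in a toy back-of-envelope model. The 10% rate and the 1000x target are arbitrary assumptions for illustration, not forecasts:

```python
# Toy contrast: recursive self-improvement compounds (each cycle's gain
# makes the next cycle's gain bigger), while externally supplied fixes of
# a fixed size accumulate only linearly.
TARGET = 1000.0  # "1000x the starting capability" -- an arbitrary target

capability, compounding_cycles = 1.0, 0
while capability < TARGET:
    capability *= 1.10  # the system improves its own design by 10% per cycle
    compounding_cycles += 1

capability, linear_cycles = 1.0, 0
while capability < TARGET:
    capability += 0.10  # fixed-size improvements that never feed back
    linear_cycles += 1

print(compounding_cycles)  # 73 design cycles when improvement compounds
print(linear_cycles)       # roughly 9990 cycles when it does not
```

The compounding system crosses the same threshold in 73 cycles that the linear one needs nearly ten thousand cycles to reach, which is the arithmetic behind the “years or even moments” intuition.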

Currently, we humans do a handful of physical transformations that take us off the slow evolutionary treadmill. Glasses are a simple technology that changes us. Corneal transplants and heart stents are “advanced” technological enhancements to our bodies. Cell phones and computers are technological enhancements to our brains (and yes, the company called “BrainGate” has now connected computer chips directly to brains, allowing human brains to directly interface with computers to do simple tasks).

Still, earth’s painstakingly slow evolutionary processes have yet to produce a creature that can rebuild itself every few days into a vastly superior version of its former self. We appear to be within a few decades of that type of entity.

The implications of this re-evolutionary development cannot be overstated.