Life Sentence: Immortality


Ray Kurzweil and Peter Thiel are not crackpots.  

Kurzweil, among other things, was a major pioneer in speech recognition software and electronic musical instruments, from which he made a fortune. Kurzweil still works in the music field on SONY projects, but his passion is … immortality, and he’s working hard towards that end.

Thiel has made a king’s ransom in online ventures like eBay and PayPal, but he’s got more innovative things up his sleeve. Like Kurzweil, Thiel’s looking to help fund the holy grail of humanity – immortality.

Even a few decades ago, reasonable people would have considered much of the talk about a technological singularity and massively superintelligent computers to be fanciful at best and insanity at worst. But the inexorable march of technology is bringing us to within about a decade – probably two at the most – of human-quality artificial intelligence. The processing power of the human brain will be matched soon, and unless there is something more to human intellect than we have reason to assume, we’ll be chatting intelligently with our computers fairly soon. After that milestone is reached, it likely won’t be long before “recursive self improvement” by these intelligent computers creates artificial intelligences far superior to our current human intellects. Not to worry, though, because it also appears likely that improvements in medicine, brain research, and nanotechnology will allow us to enhance our bodies and intellects so that we’ll live much longer and be much smarter.

Kurzweil, in his book “The Singularity Is Near”, argues that the historical exponential growth of technology shows no signs of slowing down – in fact he’s convinced the growth is speeding up. At current rates of increase we’ll see the same magnitude of improvement over the next few decades that we have seen over the past hundred years. For Kurzweil these improvements will lead to a utopian future of no poverty, massively improved intellects, and eventually immortality as we upload our brains into machines.

Sounds cool to me Ray, I’m IN!

Conde Nast on Kurzweil

More at kurzweilai.net

The Illusion of Will. Prisoners of the synapse?


This morning I stumbled on a reference to a book by Harvard psychologist Daniel Wegner called “The Illusion of Conscious Will” – one of those interesting books I’d like to read but probably won’t. My coffee pal Roy had clued me in to this research some time ago, and the key point, available online via reviews and such, is simply this:

We don’t have conscious will.    Things happen to us, and we process them using our conscious mind, but we don’t *make them happen*.

Now, at first glance this deterministic view seems absurd. Of course, one might say, I control my actions. But determinist psychology folks point out that it’s increasingly clear our actions are *preceded* by brain activity and events suggesting – I think I’m getting this right – that by the time we are doing “conscious processing” about the thing we are doing, we are already engaged in the activity. I.e., the “cause” of our actions comes before the conscious processing period. From a nice review of Wegner’s book, I understand he thinks we confuse this “after the fact” processing with “control”.

Although I am pretty much a determinist I am also uncomfortable with the idea that we are sort of passive players in a predetermined universal play.    The “gut test” says we control our actions and decide what to do.  

I think my ongoing hypothesis about this will be similar to my idea that consciousness is a conversation between different parts of our brain. These conversations, many of which take place during waking hours and some during sleep, allow us to process information very creatively and act on mental models of the world around us. It seems we might not have control over our actions 0.1 seconds before them, but we might have control via processes that happen seconds before, as our brain runs through various scenarios. Now, I think Wegner would say – correctly – that for any given conscious thought you can show there is preceding electrochemical activity (synapses firing and such) that is not reasonably defined as conscious.

However, what if that initial spark of reflection is unconscious but then leads to a back-and-forth conscious conversation within your mind that in turn leads to the action? Would that be free will?

[my brain answers –   dude, no way, you have no free will.   Now, stop blogging obscurities and pass the damn M&Ms!]

What is “Intelligence” ?


Some good posts are popping up over at the Singularity Institute blog, though the discussions have been taking on that odd “hostile academic” tone you often find from PhD wannabes who spend way too much time learning how to reference obvious things in obscure ways.

Michael Anissimov asked over there “What is Intelligence?” and offered up a definition that could apply to human as well as artificial intelligence.

I would suggest that intelligence is overrated thanks to our evolutionarily designed, self-absorbed human nature, and is in fact best studied separately from the states of “consciousness” and “self-awareness”, which are harder to define. I think computers – even a simple calculator – have degrees of intelligence, but they do not have consciousness or self-awareness. It is these last two things that make humans think we are so very special. I’d say consciousness is neat but probably a simpler thing than we like to …. um … think about.

Over there I wrote this in response to Michael’s post:

My working hypothesis about “intelligence” is that it is best viewed and defined in ways that separate it from “consciousness”.  I’d say intelligence is best defined such that it can exist without consciousness or self-awareness.   Thus I’d refer to a computer chess program as intelligent, but not conscious or self aware. 

I would suggest that intelligence is a prerequisite for consciousness which is a prerequisite for self-awareness, but separating these three things seems to avoid some of the difficulties of explanations that get bogged down as we try to develop models of animal and non-animal intelligence.  Also, I think this will describe the development curve of AIs which are already “intelligent”, but none are yet “conscious” or “self aware”.   I think consciousness may turn out to be simply a *massive number* of  interconnections carrying on intelligent internal conversations within a system – human or AI.

A stumbling block I find very interesting is the absurd notion that human intelligence is fundamentally or qualitatively different from other animal intelligences. Although only a few other species appear to have self-awareness, there are many other “conscious” species and millions of “intelligent” species.

——–

A good question about intelligence is “WHY is intelligence?” The obvious answer is evolutionary adaptivity, which in turn helps explain why our brains are so good at some things and so bad at others. E.g., human survival was more a function of short-term planning than long-term planning, so as you’d expect we are pretty good short-term planners (“Let’s eat!” “Let’s make a baby!” “Look out for that car!”) and pretty bad long-term planners (“Let’s address Social Security shortfalls!” “Let’s fix Iraq!”).