Some good posts are popping up on the Singularity Institute blog, though the discussions have been taking that odd “hostile academic” tone you often find from PhD wannabes who spend way too much time learning how to reference obvious things in obscure ways.
My working hypothesis about “intelligence” is that it is best viewed and defined in ways that separate it from “consciousness”: intelligence should be defined such that it can exist without consciousness or self-awareness. Thus I’d refer to a computer chess program as intelligent, but not conscious or self-aware.
I would suggest that intelligence is a prerequisite for consciousness, which in turn is a prerequisite for self-awareness. Separating these three things seems to avoid some of the difficulties of explanations that get bogged down as we try to develop models of animal and non-animal intelligence. Also, I think this will describe the development curve of AIs, which are already “intelligent” but not yet “conscious” or “self-aware”. I think consciousness may turn out to be simply a *massive number* of interconnections carrying on intelligent internal conversations within a system – human or AI.
A stumbling block I find very interesting is the absurd notion that human intelligence is fundamentally or qualitatively different from other animal intelligences. Although only a few other species appear to have self-awareness, there are many other “conscious” species and millions of “intelligent” ones.
——–
A good question about intelligence is “WHY is there intelligence?” The obvious answer is evolutionary adaptation, which in turn helps explain why our brains are so good at some things and so bad at others. For example, human survival was more a function of short-term planning than long-term planning, so as you’d expect we are pretty good short-term planners (“Let’s eat!” “Let’s make a baby!” “Look out for that car!”) and pretty bad long-term planners (“Let’s address Social Security shortfalls!” “Let’s fix Iraq!”).