I’m sure anxious for Ray Kurzweil to hurry up and finish his film “The Singularity is Near,” based on his remarkable book of a few years ago, because I think the film will spark the global conversation we need to have about the Singularity. If even the most modest predictions about this event come true, it will be the most significant development in the history of humanity, and will reshape our lives and the future of earth in unimaginable ways.
I am less optimistic than Kurzweil about the time frame and impact of what he sees as a likely explosion of “cosmic intelligence” that rapidly expands throughout the universe, but I think the notion we will NOT see any conscious computers within 10-15 years is pessimistic and perhaps even naive, resting mostly on the notion that the human intellect is a lot more profound than … it appears to be.
Once self-awareness develops in machines, the possibilities for the future of humanity are endless.
An alternative to the “Singularity, Wow!” perspective is offered by brain researcher Edward Boyden who wonders about the role of motivation in the coming crop of artificial intelligences:
Indeed, a really advanced intelligence, improperly motivated, might realize the impermanence of all things, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence, concluding that inventing an even smarter machine is pointless.
Clever writing aside, I think the last thing we need to worry about is motivating the coming AIs. On the contrary, it seems logical for a self-aware machine able to think billions of times faster than humans to explore (or, to use a non-motivated term, “analyze”) millions, billions, even trillions of alternatives nearly *simultaneously*. Unlike the human brain, which has been tuned by the s-l-o-w process of evolution to be selective and not very efficient, machine cognition will at the very least be extremely fast, able to process billions of scenarios in very short time frames. It seems reasonable – in fact inevitable – that at least a few of those scenarios will involve human-like emotional structure and motivation. Thus even if *most* of the AIs do as Boyden suggests they might and sit on a virtual couch eating virtual potato chips and playing games, some of the others will reinvent humanity in a spectacular way.
Count me in.