As we quickly approach the rise of self-aware, self-improving intelligent machines, the debates are going to sound pretty strange, but they are arguably the most important questions humanity has ever faced. Over at Michael’s Blog there’s a great discussion about how unfriendly AIs could pose an existential risk to humanity.
I remain skeptical, writing over there about Steve Omohundro’s paper:
Great references to back your earlier point, though I remain very skeptical of Steve’s worries even though one can easily agree with most of his itemized points. They just don’t lead to the conclusion that a “free range” AI is likely to pose a threat to humanity.
With a hard takeoff it seems likely to me that any *human* efforts at making a friendly AI will be modified to obscurity within a very short time. More importantly, though, it seems very reasonable to assume machine AI ethics won’t diverge profoundly from the ethics humanity has developed over time. We’ve become far less ruthless and selfish in our thinking than in the past, both individually and collectively. Most violence now arises from *irrational* approaches, not the supremely rational ones we can expect from Mr. and Mrs. AI.
Wait, there’s MORE AI fun over here at CNET.
I’ve written about the remarkable Blue Brain project here and at Technology Report, but there is a new AI project on the block that some seem to think has more potential to attain “strong AI” — independent computer thinking and perhaps even machine consciousness. That project is called SyNapse, and the lead researcher explains some of the thinking behind this amazing effort:
The problem is not in the organisation of existing neuron-like circuitry, however; the adaptability of brains lies in their ability to tune synapses, the connections between the neurons.
Synaptic connections form, break, and are strengthened or weakened depending on the signals that pass through them. Making a nano-scale material that can fit that description is one of the major goals of the project.
“The brain is much less a neural network than a synaptic network,” Modha says.
There’s not much information about this new project yet, but a wiki that appears to be open to the public has been started here.
IBM and five universities are involved, with funding from DARPA, the US military’s cutting-edge technology folks. I’m glad to see what appears to be a very open architecture approach here, because there should be very real concern that a militaristic AI would be less likely to be “friendly,” and once we open the Pandora’s box of machine consciousness and superintelligence there is little reason to think we’ll ever be able to close it again.
The upside of these projects is literally and quite simply beyond our wildest imaginations. A thinking, conscious machine will solve almost every simple problem on earth and is very likely to solve major problems such as providing massive amounts of cheap energy, clean water, and health innovation. Although I’m guessing we’ll still run around killing other humans for some time, it’s reasonable to assume that a thinking machine will be the last significant human innovation, as it ushers in the beginning of a remarkable machine-based era of spectacular new technological innovation.