As we quickly approach the rise of self-aware, self-improving intelligent machines, the debates are going to sound pretty strange, but they are arguably about the most important questions humanity has ever faced. Over at Michael’s Blog there’s a great discussion about how unfriendly AIs could pose an existential risk to humanity.
I remain skeptical, writing over there about Steve Omohundro’s paper:
Great references to support your earlier point, though I remain very skeptical of Steve’s worries, even though one can easily agree with most of his itemized points. They just don’t lead to the conclusion that a “free range” AI is likely to pose a threat to humanity.
With a hard takeoff, it seems likely to me that any *human* efforts at making a friendly AI will be modified into obscurity within a very short time. More importantly, it seems very reasonable to assume that machine AI ethics won’t diverge profoundly from the ethics humanity has developed over time. We’ve become far less ruthless and selfish in our thinking than in the past, both individually and collectively. Most violence now arises from *irrational* approaches, not the supremely rational ones we can expect from Mr. and Mrs. AI.
Wait, there’s MORE AI fun here at CNET