As I’ve noted here in many posts about AI, I think we are within 15 years – probably fewer – of the most profound change in technology and humanity ever to hit the planet: the advent of conscious computers, which we can reasonably expect to surpass us in all thinking and organizational skills within a very short time – probably months or even days of becoming conscious.
Some AI folks believe that strong AI machinery will require a somewhat lengthy learning period, much like human intellects require, before becoming highly functional, but I think the process will be very fast once consciousness happens. In my opinion it is easy to exaggerate the significance of the intellectual complexity that comes from massive numbers of redundant, mostly simple processes. Unlike humans, computer intelligences will grow extremely fast as soon as they “choose” that approach. Initially those choices to expand will be programmed in by the human AI programmers, but it seems logical to assume that as computers design their own replacements they will continue to give the next generation “motivation”. You don’t even need to assume it’ll happen in this proactive way, though. In a world with various forms of intelligence, those that value their own survival will tend to increase in number simply through basic mathematical/evolutionary processes, as those that do not value survival as highly are more likely to drop off the scene.
So, my cousin asked me today: why would a machine care much, if at all, about human welfare? My gut says they will, and I think this is based on watching how much humans care for their animals and even inanimate objects. I also think it’s important to note how crappily we take care of our fellow humans. We consistently choose fighting and selfishness over harmonious existence.
So I say give the computers a shot at making the world a better place!
>We consistently choose fighting and selfishness over harmonious existence.
You bet!! Otherwise those with whom we were harmonious would grow strong enough to choose fighting and selfishness.
This is the attitude we have towards those with Natural Intelligence and darn well better be the attitude we have for those with Artificial Intelligence!
FG, could you expand on that a bit? You’re talking social evolution here – saying that in a system where you’ve got good guys and bad guys, the bad guys will prevail over time unless the good guys are willing to kick some ass every so often?
That idea comes up in Jared Diamond’s book Guns, Germs, and Steel, though I’m not sure he’d phrase it that way. He talks about two Polynesian cultures – one nice, one warlike – and how the nice guys finished…last.