I am SO very interested in how people are going to process the upcoming film about the Singularity as defined by Ray Kurzweil, which is a pretty awesome future for humans:
Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality …
And holy string cosmology, that’s not even the singularity part! Kurzweil predicts that around 2045, after we all become superintellects, the machine intelligences will surpass the total brainpower of planet Earth by so much that it’s likely most of us will simply upload into the giant intelligent machine, or some other future we can’t know, because … it’s hard to think what we’d do when we are 1,000,000,000 times smarter than we are right now.
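That billion-fold number is less mysterious than it sounds – it’s just compounded doubling. A quick back-of-envelope sketch (the one-doubling-per-year rate here is my own illustrative assumption, not Kurzweil’s actual curve):

```python
import math

# How many doublings does a 1,000,000,000-fold increase take?
doublings = math.log2(1_000_000_000)
print(f"{doublings:.1f} doublings")  # just under 30

# At one doubling per year (illustrative only), a billion-fold jump
# takes only about three decades.
years = math.ceil(doublings)
print(years, "years at one doubling per year")
```

Compounding is the whole trick: nothing a billion times bigger has to happen in any single year.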
Too optimistic? Too weird? Maybe, but Kurzweil is arguably the best thinker out there on artificial intelligence, and unlike in the past, when AI overhyped and underdelivered, it is now clear that, at least in terms of computational power and memory storage, we’ll be reaching human capabilities soon.
So, are you ready?
Joe-
Very intriguing post. I tried to follow your link but think it might not be what you intended. I found this one to be more interesting:
http://www.edge.org/3rd_culture/kurzweil_singularity/kurzweil_singularity_p2.html
Thx Max – the link I intended was to the film site:
http://www.singularity.com
“Too optimistic”?
Perhaps. AI at its best (say a chess-bot such as Deep Blue, which defeated Kasparov in their 1997 match) has not produced that much: high-powered computers can perform high-powered computations, but that hardly means that a CPU actually possesses “consciousness.” An artificial agent could arise eventually (or perhaps some type of digital ego-construct à la William Gibson’s fiction), but only the most naive Radio-Shack sort of techie would welcome that scenario with open arms. (It’s amazing how, after decades of obvious misuses of technology (and computing), some tech-consumers still think new gear alone will suffice. A Matrix-sort of world of AI-controlled ’bots might be as plausible a scenario for 2050 as some hip, green cyber-topia.)
Indeed, it’s arguable the computational-theory of consciousness has in some sense been refuted (see also Searle’s Chinese Room argument for at least one skeptical view of AI). That AI has for the most part failed does imply that one returns to some Cartesian dualism (except perhaps in the sense of recognizing the power and complexity of human thinking). It might mean, however, realizing that technological development does not have some necessary relationship to economic or political progress. Kurzweil might write some stuff up when the cranial RJ-45 implants (or maybe wireless adapters) are ready for beta testing.
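Searle’s Chinese Room can even be caricatured in a few lines of code (a toy sketch of the thought experiment, with made-up entries – nothing from Searle himself): the program shuffles symbols by rule-book lookup and “answers” questions it does not understand at all.

```python
# A toy Chinese Room: the operator (this program) matches incoming
# symbol strings against a rule book and hands back the prescribed
# symbols. The lookup succeeds without any grasp of Chinese, which
# is Searle's point. The entries are invented for illustration.
RULE_BOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你懂中文吗？": "当然懂。",   # "do you understand Chinese?" -> "of course"
}

def chinese_room(symbols: str) -> str:
    """Pure symbol manipulation: no meanings are consulted anywhere."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "say it again"

print(chinese_room("你懂中文吗？"))  # the room claims an understanding it lacks
```

Whether a sufficiently enormous rule book would amount to understanding is, of course, exactly what the argument is about.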
That AI has for the most part failed does imply that one returns to some Cartesian dualism (except perhaps in the sense of recognizing the power and complexity of human thinking).
That’s “does NOT imply.” (btw, the AI people have barely cracked the problem of intention (or “free will” in xtian-speak), though hundreds of cognitivists and neuroscientists are working on it.)
Kurzweilian Singularity? Cool. That and a WordPress comment-editing mod.
arguable the computational-theory of consciousness has in some sense been refuted
Hey, don’t ruin my day here! I love that model. We’d agree it’s not at all been proven, but it’s way too early to cast it out. In fact a new Stanford group is working along these lines with something called “Neurogrid”, using silicon chips to simulate neurons and their interactions.
The brain is architecturally very redundant, and I *think* we’ll find that the complexity we call conscious thought comes not from anything profound in our little brains, but rather from the overwhelming level of connectivity between neocortical columns and individual neurons in those columns.
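That connectivity hunch is easy to toy with: below is a minimal leaky integrate-and-fire network (a standard textbook neuron model; the sizes, weights and noise levels are made-up illustrative values, nothing to do with Neurogrid’s actual silicon). Every unit is identical and trivially simple – whatever behavior emerges comes from the wiring.

```python
import random

random.seed(1)

N = 50            # number of identical, very simple units
P_CONNECT = 0.2   # probability any unit is wired to any other
THRESHOLD = 1.0   # membrane potential needed to spike
LEAK = 0.9        # potential decays toward zero each step
W = 0.3           # weight delivered downstream when a unit spikes

# Random wiring: all the "interesting" structure lives here.
synapses = [[j for j in range(N) if j != i and random.random() < P_CONNECT]
            for i in range(N)]

v = [0.0] * N           # membrane potentials
spike_counts = [0] * N

for step in range(200):
    fired = [i for i in range(N) if v[i] >= THRESHOLD]
    for i in fired:
        v[i] = 0.0                  # reset after spiking
        spike_counts[i] += 1
        for j in synapses[i]:       # propagate the spike along the wiring
            v[j] += W
    for i in range(N):
        v[i] = v[i] * LEAK + random.uniform(0.0, 0.25)  # leak + noisy drive

print(sum(spike_counts), "spikes from", N, "identical simple units")
```

Vary `P_CONNECT` and `W` and the network’s behavior changes dramatically, while the units never get any smarter – which is the (very hand-wavy) intuition behind betting on connectivity rather than on anything profound inside individual neurons.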
The good news is we probably only have about 10 years to wait to find out. We win either way – if computers take over they’ll quickly solve most human problems. If they don’t, our egos stay intact and we can go on fighting wars and getting malaria.
if computers take over they’ll quickly solve most human problems.
Would you trust AI enough to allow AI-guided vehicles on the streets or highway? Not sure. First off, the AI people have not come close to developing a reliable program and associated hardware that could drive a car (there are some rigs for jeeps that they guide with a joystick – I believe the Stanford AI posse works on that as well). That’s one of the arguments of some of the skeptics: computers are very good at performing calculations very rapidly, but they can’t do something like drive a car, or make a tasty tofu and bean-sprout fajita.
Consider all the other fairly complex gear needed–some type of visual sensor (stimulus-response warez?), robotic acceleration, brakes, steering wheel etc. A few geeks can pilot a jeep through desert roads with a joystick (or maybe an auto-Abrams), but that’s quite removed from having some fully-functional AI-guided vehicle on the street alongside human-operated vehicles. An override function would probably be prudent on AI-mobiles as well: imagine if the AI-mobile turned to the dark side and went on some Cujo-like rampage at the Thanatoid Galleria: Bad joss.
… imagine if the AI-mobile turned to the dark side and went on some Cujo-like rampage at the Thanatoid Galleria
Horatiox, maybe it’s time for your Sci Fi novel – that sounds pretty intriguing to me…
Seriously though have you checked out the latest AI vehicles from this year’s robotic vehicle contest, sponsored by our Military tax dollars? I couldn’t believe it at first, but they now have *autonomous* units that can navigate hundreds of miles at 20-30 mph including *traffic*!:
http://www.darpa.mil/grandchallenge/overview.asp
“have you checked out the latest AI vehicles from this year’s robotic vehicle contest, sponsored by our Military tax dollars?”
Impressively dystopian. The Russians, Chinese and other countries probably work on AI-vehicle R&D as well. Unfortunately, DARPA is as much a part of AI as Silicon Valley cyber-cafes are. Stanford geeks can virtually ogle each other’s avatars while sipping frappuccinos, and then presumably go back to work at the DARPA la-bor-a-tory.
AI obviously has no inherently ethical aspects. That’s what makes me slightly nervous about optimistic futurists such as Kurzweil: while we might be impressed with the potential power of AI, and the future of computing really, we should at the same time be wary of a certain type of techie-utopianism, I believe. Classic “dystopian” novels (whether Huxley’s Brave New World, or P. K. Dick’s “A Scanner Darkly”, or cyber-pulp) – not to mention the history of the last few decades – might serve as a reminder that technology (including computing) may advance the cause of totalitarianism (whether of the left or the right) as much as it might bring about the Eco-topia. Once one has seen the cyber-LAPD in full force – guided by GPS (and the LAPD satellite), with the boys in Kevlar, with helmets, AR-15s, a nano-tech equipped copter, and even their own tanks – conducting a raid on some suspected petty meth-manufacturer, you sort of start to question the Kurzweilopolis.
AI obviously has no inherently ethical aspects
Ultimately I think the truth or falsity of this statement may be the most important human issue of the past 10,000 years. The Singularity Institute is probably the leading “think tank” on these issues, and ethics is addressed there often. However we can’t know they’ll be the ones controlling the show.
For AI enthusiasts like me the idea is that we should work hard to make sure the first AI is an FAI or “Friendly Artificial Intelligence”. Unfortunately we may only get ONE shot at building an FAI, because an unfriendly AI may become so powerful so fast that the game would be over and humanity would be at the mercy of an intellect that could, for example, spin out nanobot armies we’d never even see but that would rest inside us until activated by the unfriendly AI. Sounds very sci-fi, but these are now real issues as we approach the conscious computing that will likely happen within 15 years.
You are right to challenge the notion that AIs will be friendly. My gut says that superintelligence will breed win-win policies for human/machine interaction, but my thinking is hardly what we’ll get from the conscious computers that will be thousands of times smarter than we can ever hope to be.
Smarter but maybe not wiser … I hope not, because there may only be a few minutes in which we could unplug an unfriendly AI before it had copied and distributed itself in a million hidden locations.
… Oh my god a laptop is knocking at my door, and boy is he pissed! …
Obviously nanorobotics encompasses a great deal. Medical researchers seem hopeful that nano-bots will be able to perform micro-surgery and so forth. There are obvious nano-tech applications to computing (virtual drives, for one). The conscious nano-bot problem has not manifested itself too much, however: some malicious, replicating nano-bots could potentially develop decades or hundreds of years hence, but they will probably be more like ueber-malware, and not some Hannibal Lecter-construct (or perhaps a sort of very advanced virus that can outsmart the security ware).
In a sense the consciousness problem of AI presents quite different issues than does the possible applications (or misapplications) of nano-robotics (though there are overlaps, presumably). The original cyber-punk vision was all about the mind-machine Interface, and that still fascinates some of us, hopefully.
I do not doubt that nano-robotic engineers will eventually design artificial brains of some sort (I have read they have simulated a rat’s brain, with some success), but a more interesting – and somewhat eerie – development would be a real Interface which could transfer human consciousness (which does have a small electric charge – thus EEGs) to software, or some type of file system or network, so that humans are really capable of jacking in, or perpetuating themselves in virtual or at least digital form – even, however sci-fi-ish it sounds, after bodily death. Bruce Sterling’s “Holy Fire”, a fairly trippy read, contains some thoughts along those lines.
Horatiox, I didn’t mean the nanobots would be conscious, rather that an unfriendly AI could probably design many types of malicious anti-human devices, nanobots being only one of them. But I think the pressure will be to treat humans benevolently. We are arguably much “smarter” than our pets, but we treat them pretty well.
Pingback: Conscious Computers and Friendly vs Unfriendly AI « Joe Duck
Pingback: Google: A Trillion URLs and counting « Joe Duck