Convergence08 was a great conference with many interesting people and ideas. Thankfully the number of crackpots was very low, and even the “new age” mysticism was at a minimum. Instead I found hundreds of authors, doctors, biologists, programmers, engineers, physicists, and other clear-thinking folks, all interested in how the new technologies will shape our world in ways more profound than anything we have experienced before.
My favorite insights came from Monica Anderson’s presentation on her approach to AI programming, which she calls “Artificial Intuition”. Unlike all other approaches to AI I’m familiar with, Anderson uses biological evolution as her main analog for conceptualizing human intelligence. I see this approach as almost a *given* if you have a good understanding of human thought, but it’s actually not a popular conceptual framework at all.
It has always surprised me how poorly many computer programmers understand even rudimentary biological concepts, such as the underlying simplicity of the human neocortex and the basic principles of evolution, which I’d argue emphatically have defined *every single aspect* of our human intelligence through a slow, clumsy, hit-and-miss process operating over millions of years. I think programmers tend to focus on mathematics and rule systems, which are great modelling tools but probably very poor analogs for intelligence. This focus has in many ways poisoned the well of understanding about what humans and other animals do when they … think … which I continue to maintain is “not all that special”.
….. more on this later over at Technology Report …..
Interesting, but she’s still hampered by the notion of discrete logic, and thus by logic in general.
Personally, I view “intelligence” as a poorly debugged application running on Heaven’s own operating system, and the very little I learned about brain science 30-odd years ago leaves me very dubious about a logic-based simulation of intelligence. Simulating the fluid dynamics of an automatic transmission requires tremendous computational power, and I don’t see feasible transistorized logic that could simulate brain chemistry.
Interesting. I, for one, believe that many humans – whether in technology, academia, or Consumerland itself – lack an awareness of discreteness, or shall we say the implications of “truth functionality”, whether in terms of programming or of induction and verification. Instead of careful stepwise proofs (or objective, fact-based writing), Joe and Mary McSixpack go for the jugular and the immediate response or assessment, whether in politics or psychology.
You see this immediacy with a typical liberal blogger who chants “Republicans are Evil” 24/7: first, he assumes there’s some objective realm of “Evil” (the existence of which he could not prove – as even Hume effectively pointed out a few centuries ago), and second, he assumes he knows the Truth about all the supposed evil deeds reported in papers, blogs, TV, etc. It’s sort of a gullibility maxim: if his favorite pundit said it, it must be true. Of course humans of all political flavors do that: for some, if Seymour Hersh or Naomi Klein or Couric said it, it must be True. Substitute in the moronic conservative – Limbaugh or Coulter – and it’s the same thing for most rightists. Orwell-land.
That said, I do not completely agree with your somewhat strict determinism with regard to the workings of the brain – cogsci has barely begun to demonstrate how complex thinking processes operate (say, programming, or playing chess). That’s not to say it’s some Cartesian mystery, but the AI views of a few years ago have mostly been discredited. Moreover, a completely functional approach to mind has some shortcomings: for one, McSixpack can no longer chant that “Bush is Evil,” can he? Just “bad” programming, conditioning, genetics, what have you – or at least not to his taste. McSixpacks don’t view a Hitler as merely a poorly conditioned primate; they want Hitler the demonic agent.
I don’t see feasible transistorized logic that could simulate brain chemistry
I’m not sure I agree, but I’d argue we probably don’t need that to attain strong AI. My understanding is that projects like IBM’s Blue Brain are integrating chemical “artificial biology” processes into the computations. Although the Blue Brain neocortical simulations won’t be identical to a human structure, there is no good reason to think they won’t work as effectively as a human brain.
I think Anderson – and others who have transcended the thinking that a brain is primarily a programmed process rather than a massively redundant interconnected mess – are correctly assuming that fairly simple processes, acting in conjunction within a massively interconnected brain structure, are at the root of animal intelligence.
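A rough sketch of that idea – simple local rules plus heavy interconnection – can be played with in a few lines. Every name and number below is invented for illustration; this is a toy, not a model of any real cortex:

```python
import random

# Each unit follows a trivially simple rule (fire if enough neighbors fired);
# anything interesting comes from the wiring, not the rule. Sizes, thresholds,
# and the random-wiring scheme are arbitrary choices for this sketch.

random.seed(1)
N = 200
neighbors = [random.sample(range(N), 10) for _ in range(N)]  # random wiring
state = [random.random() < 0.1 for _ in range(N)]            # sparse start

def tick(state):
    # A unit fires next step if at least 2 of its 10 inputs are firing.
    return [sum(state[j] for j in neighbors[i]) >= 2 for i in range(N)]

for _ in range(5):
    state = tick(state)
print(sum(state), "of", N, "units active")
```

The point of the toy is that the `tick` rule is explainable in one sentence, yet the network’s behavior depends entirely on the tangle of connections.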
Tommo, I’m not clear on where you think “thought” resides. Why not simply within a bunch of interconnected activities in our three-pound pile of glop? At the very least, it seems to me we should be able to build some sort of mirror of that structure and then train it the way we train our kids.
I think there are three major problems to overcome before we can ever create machine-based intelligence.
The first major hurdle is the massively parallel nature of our senses, combined with a brain conservatively measured at a raw computing power of 100 million MIPS. I believe the raw computing power will be achieved within the next decade, but we will still be saddled with the channels (the bus) that send data to the different discrete components of the computing model. The parallel capacity needed there is going to be a major stumbling block.
The second is the unpredictable nature of humans. We like to think we can “program” a person to a level of predictability, but in reality humans are by nature completely unpredictable. As the saying goes, you truly never know a person. This is a major failing of the current computational models: they just don’t do well with unpredictability. Everything has to be predicted when programming. We have tried to achieve levels of flexibility with approaches like finite state machines and quantum computing, yet in all cases 100% of the anticipated outcomes need to be “programmed”. Even with knowledge-based or “learning” computational models, the limitations are built into the foundation and thus ultimately produce a finite set of results. Ironically, an unpredictable result within a program is labeled a bug. We are backwards in this area, and it is ultimately driven by the need to “control” the process.
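The finite-state-machine point can be made concrete in a few lines. The states and events below are made up for illustration; the essential feature is that the machine only “knows” the transitions we enumerate up front:

```python
# A minimal deterministic finite state machine: 100% of the anticipated
# outcomes live in this table, and anything we didn't program is a dead end.
# State and event names are invented for this sketch.

TRANSITIONS = {
    ("idle", "greet"): "listening",
    ("listening", "question"): "answering",
    ("answering", "done"): "idle",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        # The "unpredictable" case: an input nobody anticipated is an error,
        # i.e. exactly what we'd call a bug.
        raise ValueError(f"no transition from {state!r} on {event!r}")

state = "idle"
for event in ["greet", "question", "done"]:
    state = step(state, event)
print(state)  # → idle
```

Feed it any event outside the table and it raises, which is the commenter’s point: flexibility here means enumerating every outcome in advance.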
The third major hurdle is the way we physically store and retrieve information. IBM’s System R heralded the relational data model, grounded in relational calculus, which allowed us programmers to have a “place” for everything and everything in its “place”. In the real world that would be like driving your car into your garage, disassembling every component, and putting each piece in its own storage bin. The next time you wanted to drive, you would collect all the parts, reassemble the car, and drive away. That works great for controlling information and data, but not so well when you try to represent real-world actions and items.
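The garage analogy maps directly onto how a normalized relational schema works: the “car” exists only as rows scattered across tables, and every use requires a join to reassemble it. A toy sketch using an in-memory SQLite database (all table and column names invented for illustration):

```python
import sqlite3

# The "car" is stored disassembled: one row per part, each in its own "bin".
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE cars  (car_id INTEGER PRIMARY KEY, model TEXT);
    CREATE TABLE parts (part_id INTEGER PRIMARY KEY, car_id INTEGER,
                        name TEXT, bin TEXT);
""")
db.execute("INSERT INTO cars VALUES (1, 'sedan')")
db.executemany("INSERT INTO parts VALUES (?, 1, ?, ?)",
               [(1, "engine", "bin A"),
                (2, "wheel", "bin B"),
                (3, "seat", "bin C")])

# "Driving the car" means reassembling it: a join collects the pieces.
rows = db.execute("""
    SELECT cars.model, parts.name, parts.bin
    FROM cars JOIN parts ON parts.car_id = cars.car_id
    ORDER BY parts.part_id
""").fetchall()
for model, name, bin_ in rows:
    print(model, name, bin_)
```

Great for bookkeeping and integrity, but – as the comment argues – a join-per-use is an odd fit for modeling a thing that, in the world, is simply always assembled.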
So until we conquer these frontiers, we just won’t be able to assemble a computational model that allows real artificial intelligence.
I have no doubt that eventually cogsci people and programmers will put together some human mind-machine, or “replicant” (in PK Dick’s terms), though it will probably just be following various algorithms – a more complex chess bot, in a sense. Creating a biological brain – one which develops over time, learns things, has a personality, etc. – is a rather more challenging project.
Another interesting aspect of AI relates to consciousness interfaces for living humans: when the RJ-45 (or wireless adaptor, etc.) cranial implants are ready and you can copy your Mind files to a database not in your head (or create a “construct”), then cyberia starts. I doubt that will occur in our lifetimes, if ever.
when the RJ-45 (or wireless adaptor, etc) cranial implants are ready and you can copy your Mind files to a data base
Note that the early part of this scenario is underway in the form of the rudimentary but pretty intriguing BrainGate project:
http://www.cyberkineticsinc.com/content/medicalproducts/braingate.jsp
A non-invasive neural input device is the Emotiv headset for gaming, which I’ll be getting and reporting on extensively when it comes out – probably within months. This device literally reads some of your brainwaves and lets you move imagery on the screen by thinking. I’m surprised it has not had more press, because this will arguably be the first mainstreaming of a viable brain/machine interface.
I agree that brain downloading may not happen within our lifetimes, partly for the reasons Tommo is concerned about – the computational power for this would be staggering, and I’d guess far greater than the power needed simply to create an independent thinking machine. However, once that machine is created, all intellectual heck breaks loose and we may see innovation happening at the speed of light in the minds of the new machines.
(6) Joe, you should try this one now … it is readily available.
http://www.techpin.com/ocz-nia-brain-controlled-gaming-headset-has-vista-64bit-drivers/
Hmm. Maybe if I wasn’t trying to actually program a computer I would be more coherent 🙂
My main point is simply that discrete transistorized logic is a very different type of information-manipulation dingus than our ol’ gloppers, and concepts of “intelligence” try to measure the silicon against the biological, which is not unlike measuring how many Quatloos you need on Tuesdays in Dublin.
The “underpinnings are simple” argument also seems much too simplistic to me. The amounts of information being processed are just staggering to contemplate, and most of that information isn’t used in the service of consciousness in any linearly independent fashion. It is possible to build a simulation of this process, sure, and with enough brute force one could arrive within any desired epsilon of a glopper – but I assert that the brute force required to get to any interesting epsilon is many orders of magnitude beyond anything on our engineering horizon. My comments about the slushbox simulation are my only real data point, I do admit.
And simple doesn’t mean easy. I have a program that uses very simple techniques, ones I can explain to an interested layman in about half an hour. That program does things that I cannot do, even though I know how it does them.
DOH!
s/you need on Tuesdays in Dublin/you need to win at darts on Tuesdays in Dublin/
Q. And the conclusion of that rant was, Harwood?
A. With that much digital logic, I can do much more interesting stuff than create a lame simulation that can’t do much except be a cool stunt.
PS Compukers is frustratin’.
(8) Tommo what languages do you use?
(9) Oh, whatever’s handy. Anything Turing-complete. I make up new ones sometimes when I’m bored.
Lately I’ve been working with the Mozilla Tamarin codebase, so it’s Java, C++, Javascript/ECMAScript, AS3, and ABC bytecode.
(11) Cool, I am mostly in .NET these days … C++, C#, Java, Javascript …
Tommo, fair enough, but I’m pretty much sticking to the “thinkin’ ain’t such a big deal” argument, at least until Blue Brain fails to bring us a thinking gadget even after it’s got a human neocortex model humming along. I think Kurzweil’s calculations suggest that in terms of raw processing and memory power we are within about a decade of human levels. I think we’ll find consciousness and self-awareness are more artifacts of our evolutionarily derived survival and existence than “really significant phenomena”.
Glenn, thanks for the NIA link – I had not heard of that device, although it looks like it’s tapping more into facial muscle signals than actual emotional states. This review has more on the differences; both look very interesting to me as early versions of input devices that may revolutionize the way we interact with machines:
http://www.xbitlabs.com/articles/multimedia/display/ocz-nia_7.html
Think about the fact that we’ll eventually have a direct relationship between our brains and the internet. We’ll be smarter then … much smarter.
Smarter?
I wonder.
How smart were the product managers who introduced that walkie-talkie discussed over at your technology site? It shuts off at maximum power to avoid overheating, yet everyone who uses it is subjected to that nonsensical feature. How connected to reality were those product managers? If their brains get connected to the internet, will anyone care?
We have learned a great deal about neural networks, and the purists amongst us will keep chiming in that it is Artificial Neural Networks we are dealing with, since we are only learning about the Neural Networks that nature deals with. Of course, as we add spikes and jolts and jitter to such ANNs, we find we are indeed getting closer to nature in both processing and results.
Computers using logic used to get to page 300 by starting at page one and repeatedly performing simple arithmetic. Now they use ANNs to suggest that what you are looking for might be on page 300 and go straight to that page. Is this a more intuitive process?
Soft computing requires soft programming and such computers require common sense.
Artificial intuition? Sure. Too much of artificial intelligence seems to have been squandered somewhere along the way. That darned robotic vacuum cleaner can probably do a good job as a vacuum cleaner but why doesn’t it know to vacuum the rugs PRIOR to the cocktail party guests arriving? If I have to remind it… is it really any better than the maid that won’t do windows?
You seek a direct connection to the internet from your brain? I wonder what the brainwaves will look like as you get a new Blackberry and can’t even figure out how to access your voicemail before being timed out by a password-access subroutine that someone forgot to make friendly for users who don’t have tiny fingers suited to tiny keyboards. What use will Artificial Intuition be in a world of gadgets that frustrate us? That robot may not be tying our shoelaces in the middle of a busy intersection, but it’s probably doing some other darn fool task when we most need it. Perhaps it’s trying to connect our brains to the internet when we would prefer to connect with some orange juice and toast instead?
The cyber-kinetics technology appears promising (certainly to handicapped people); at the same time, one could imagine various dystopian scenarios resulting from “unethical” applications.
BCIs operate both ways (at least theoretically): a computer receives commands from the brain, or issues them. Consider militaristic uses: the Feds implant chips in the brains of soldiers and then order them around on battlefields. (It sounds like dozens of bad analog sci-fi stories.)
The US military has recently expanded its robotics (micro spy-drones, etc. – now being used in Afghanistan, supposedly), and it probably has some dread neurotech gear as well.