What do I want for Christmas? General Artificial Intelligence.


If you are interested in artificial intelligence (or even just philosophy), you’ll enjoy the discussion over at Neurdon.com and IEEE Spectrum about new computer chips that many researchers believe are a big step toward conscious computers.

Although it would take a miracle to see things pop this year, my view remains the same as it has been for some time – we tend to exaggerate the complexity of human thought and consciousness, and machine “self awareness” is probably more a function of the quantity of activity than the quality of activity.

Processor speeds and memory capacities are in some ways already “competitive” with the capabilities of human brains, though it will likely take many more years of research to build programs that can use these capacities effectively enough to duplicate most human-style thinking.

Evolution used simple tools and simple constructs to create complexity. This is demonstrated especially well by some new research suggesting the importance of fractal geometry in biology, where very simple equations, acted out in the RNA and DNA code of plants and animals, effectively define certain conditions of life. Examples include the branching of trees and the rhythm of a human heartbeat.
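To make the “simple rules, complex outcomes” point concrete, here is a minimal sketch of an L-system, the kind of rewriting rule often used to model branching growth in plants. This is my own illustration rather than anything from the research mentioned above, and the single rule and the symbols F, [ and ] are hypothetical choices for the example.

```python
# Minimal L-system sketch: one simple rewriting rule, applied repeatedly,
# produces a rapidly branching, tree-like structure.
# Symbols (hypothetical choices for this example):
#   "F" = grow a segment, "[" = start a branch, "]" = end a branch.

def expand(axiom: str, rules: dict, iterations: int) -> str:
    """Apply the rewriting rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

if __name__ == "__main__":
    rules = {"F": "F[F]F[F]F"}  # one short branching rule
    for n in range(4):
        result = expand("F", rules, n)
        # The string length grows geometrically: complexity from one simple rule.
        print(f"iteration {n}: {len(result)} symbols")
```

Running it prints symbol counts of 1, 9, 49 and 249 – the same sort of geometric blow-up that lets a compact genetic program specify an elaborate branching structure.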

I’ll have a lot more about this debate at Technology Report, where I hope to have several more guest posts by the DARPA SyNAPSE researchers.

2 thoughts on “What do I want for Christmas? General Artificial Intelligence.”

  1. Interesting paper.

    The philosopher John Searle is often cited for his Chinese Room argument against strong-AI in which he states that a nonbiological medium, such as a computer, is not conscious. Searle’s argument rests on the belief that specific biological mechanisms within human brains cause conscious perception to occur (Searle, 1984). Proponents of whole brain emulation (WBE) and “mind uploading”, on the other hand, do not view nonbiological reproductions of human brain structure and function as limiting factors for consciousness.

    I tend to agree with Searle’s skeptical views of AI (tho’ Herr Doktor Searle’s got some issues re his political views). His Chinese Room argument has been confirmed in a sense, given the stalled program of AI. The intention, meaning, and “embodiment” issues still pose a problem for the cog-sci/AI people.

    The AI people also tend to conflate mere info-processing–say a chess program–with actual human consciousness. Now chess apps such as Deep Blue (or even my Fritz engine) can defeat humans, even grandmasters at this point. But hardly anyone, even Duck, would claim they possess consciousness. They merely follow routines, like any program or calculator, that have been input by humans.

    At best AI simulates, but does no actual thinking. Even a perfect simulation–imagine military bots, etc.–does not think as a human thinks, nor does the ‘bot experience the world anything like humans do (the “qualia” issue). Other philosophy people have brought that issue up as well–it’s not enough to be able to run programs; one must, say, learn them, develop skills, and then execute the program, which AI cannot do.

    A somewhat cynical “matrix” view should also be considered, however trite. What if…machines do develop some type of intelligence, and free themselves from–override–human control? (Computers already turn themselves on.) There are no assurances they wouldn’t just…destroy for phunn.

  2. machine “self awareness” is probably more a function of the quantity of activity than the quality of activity.

    That’s the strong AI view–which is to say the reductionist-materialist view. Merely process info as fast as, or faster than, a human, and voila, the chip or bot or CPU’s outdoing those old inefficient humans, and “thinks”. Nyet, JD. There are chess bots which can defeat Kasparov at this point. But they don’t really understand the game; they know nothing of the meaning or significance of chess–or the meaning of Beethoven, or a pleasant beach sunset, or WWII, etc. Mere information processing, however efficient, does not suffice for consciousness–that’s Searle’s central point (also see Chomsky’s criticism of the Skinnerians for a slightly similar view). You sound somewhat like the New Worlds techies in terms of promoting naive reductionism. Many techies–Holy Tandyco!–don’t take the time to understand the subtle issues involved with Mind or “qualia”, so they Billy Ockham it, and…voila–humans are just bots, meat puppets, primates (even ones who do zen). Alas, that’s mistaken. Neither a primate nor a ‘bot nor a zen tomtom player wrote Beethoven’s music.
