Blue Brain Project Progress …

Arguably the world’s most significant science project along with DARPA SyNAPSE, Blue Brain is working to create a functioning simulation of a neocortical column (think “how we think”) on a supercomputer. Progress seems pretty steady, although DARPA SyNAPSE has been getting a lot more press and money lately.

Artificial Sociopaths – Will Thinking Machines Go Bad? Not likely!

I’m in a fun email exchange with a bunch of clever folks about how “thinking machines” might come to be and whether they might turn on us, so I wanted to post my thoughts here. I’m not posting the others’ messages because I don’t have their permission yet…

I really hope more folks will chime in here, as this is the most important topic in the world even though most folks don’t realize that yet. Within a few years it should become clear that we are likely to be interacting with self-aware computers in as few as 10-15 years.

The key point I wanted to make is optimistic. We’ve seen how computational approaches dramatically improve our very limited abilities to calculate and analyze things, and I predict that when machines attain consciousness and the ability to communicate effectively with humans, extraordinary improvements will become commonplace.

I’d also predict that the machines are very UNLIKELY to pose a threat to humanity. Humans have tended towards greater compassion as we’ve progressed, and we’ll pose few threats to the thinking machines, which will likely quickly find ways to protect themselves, so I think the worst likely case is that they will choose to ignore us. I’m hoping they’ll help us out instead. Note that all the major AI efforts seek “friendly AI”, so the programmers are working to make helpers, not adversaries. However, I also believe (unlike most people) that our early approaches will not matter much in terms of what the superintelligence eventually becomes. Humans will catalyze the process of machine self-awareness, but then our brains will process things too slowly to continue our participation in the evolution of intellect.

Philosophically speaking, I’d suggest that computer thinking will NOT be “fundamentally different” from ours, because I think our rational thought is confined by the laws of the universe, most of which are well described by science and confined by mechanistic principles. However, the machines’ thinking will be much faster than ours and proceed along more rational lines, unclouded by the emotions and cognitive biases that plague our thinking. They’ll be better than us.

Is this optimism based on faith, or science, or what? I’d say it’s speculation based on common-sense observations of how the world works and trends within it, many of which point to superintelligent, self-aware machines within decades. Faith – to my way of thinking – is an appeal to believe things that cannot be rationally deduced from the facts and data. I’m not a big fan of that approach to knowledge.

To which somebody replied that I was expressing a lot of misguided techno faith and also that the machines would likely be sociopathic without the benefit of human thought approaches.

Wow, you really don’t like this idea of friendly artificial superintelligent machines?! Come on, they’ll be more fun than the internet! Also, unlike current chess programs, they’ll often let us win to maintain our fragile human egos.

Interestingly, your concerns about the potential for a sort of sociopathic AI are along the lines of some researchers in this area, and also echo some concerns expressed earlier. Although I’m not worried about that much, I see it as a very separate issue from how likely we are to see these machines – which I’d argue is “extremely likely”, almost to the point of inevitability, because to me the enhancement of our intelligence via technology represents a very “natural” (though dramatically accelerated) progression from our primal evolutionary heritage.

I’m surprised you see me as having “blind faith”. I think faith approaches are irrational almost by definition and don’t offer much insight. I’d also argue that the advent of thinking machines, and what I contend to be their likely friendliness, are derived from observations of human and machine history. Note how humans have already merged with machines in several ways: contact lenses, cochlear implants, BrainGate and Emotiv headsets (which use brain waves to control computers), and many more. I see the next level of interaction as intellectual enhancement devices. It’s not a creepy sci-fi vision at all, rather the logical progression of how humans, pre-humans, and even many animals have used our intellects to develop and interact with useful tools.

Many (including me) think that thinking machines will come *after* many more rounds of slow merging of humans with computing devices. If you are concerned about sociopathic computers this should come as some comfort, because it’s most likely to be part of an ongoing process of co-evolution in which humans and machines work together. Currently only half of that equation can think autonomously, but soon (I hope) both we and the machines will be thinking partners.

I may be wrong here, but I’m not using faith-based thinking. In fact I think faith is one of the main impediments to people seeing the inevitable reality of what is to come. As suggested in an earlier note, the advent of thinking machines may challenge many of the conventional religious beliefs that many hold very dear. I actually think this tension will be far more likely to create acts of violence than we’ll see from the thinking machines, who will very quickly evolve to a state where they could simply … leave the planet (another reason I don’t think there’s much to worry about here in terms of superintelligent machines gone bad).

Artificial Intelligence Global Luminaries

This great list is from Michael Anissimov’s Accelerating Future website.

Artificial Intelligence

Aubrey de Grey
Chief Science Officer, Methuselah Foundation
Barney Pell
Search Strategist, Microsoft
Ben Goertzel
CSO, Novamente & Director of Research, SIAI
Bill Hibbard
Emeritus Senior Scientist, Space Science and Engineering Center
Bruce Klein
President, Novamente & Director of Outreach, SIAI
Christine Peterson
Co-Founder, Foresight Nanotech Institute
David Hart
Director of Open Source Projects, SIAI
Eliezer Yudkowsky
Co-Founder and Research Fellow, SIAI
Eric Baum
Founder, Baum Research
Hans Moravec
Chief Scientist, Seegrid Corporation
Helen Greiner
Co-Founder, iRobot Corporation
Hugo de Garis
Professor, Wuhan University
J Storrs Hall
President, Foresight Nanotech Institute
John Laird
Tishman Professor of Engineering, University of Michigan
Jonas Lamis
Executive Director, SciVestor Corporation
Jonathan Connell
Staff Member, T.J. Watson Research Center, IBM
Joscha Bach
Author, Principles of Synthetic Intelligence
Jurgen Schmidhuber
Professor of Cognitive Robotics and Computer Science, TU Munich
Marcus Hutter
Associate Professor, Australian National University
Marvin Minsky
Toshiba Professor of Media Arts and Sciences, MIT
Matt Bamberger
Founder, Intelligent Artifice
Monica Anderson
Founder, Syntience Inc.
Moshe Looks
AI Researcher, Google Research
Neil Jacobstein
Chairman and CEO, Teknowledge
Pei Wang
Lecturer, Department of Computer and Information Science, Temple University
Peter Cheeseman
Advisor, Singularity Institute
Peter Norvig
Director of Research, Google
Peter Thiel
Founder, Clarium Capital
Ray Kurzweil
Chairman and CEO, Kurzweil Technologies, Inc.
Rodney Brooks
Chief Technical Officer, iRobot Corp
Ronald Arkin
Regents’ Professor, College of Computing, Georgia Tech
Sam Adams
Distinguished Engineer, IBM Research Division
Sebastian Thrun
Director, Stanford Artificial Intelligence Laboratory
Selmer Bringsjord
Chair, Department of Cognitive Science, Rensselaer Polytechnic Institute
Stan Franklin
Interdisciplinary Research Professor, University of Memphis
Stephen Omohundro
Founder and President, Self-Aware Systems
Stephen Reed
Principal Developer, Texai
Susan Fonseca Klein
Chief Administrative Officer, Singularity Institute
Wendell Wallach
Lecturer, Yale’s Interdisciplinary Center for Bioethics

Friendly vs Unfriendly Artificial Intelligences – an important debate

As we quickly approach the rise of self-aware and self-improving intelligent machines, the debates are going to sound pretty strange, but they are arguably the most important questions humanity has ever faced. Over at Michael’s blog there’s a great discussion about how unfriendly AIs could pose an existential risk to humanity.

I remain skeptical, writing over there about Steve Omohundro’s paper:

Great references to make your earlier point, though I remain very skeptical of Steve’s worries even though one can easily agree with most of his itemized points. They just don’t lead to the conclusion that a “free range” AI is likely to pose a threat to humanity.

With a hard takeoff it seems likely to me that any *human* efforts at making a friendly AI will be modified to obscurity within a very short time. More importantly though it seems very reasonable to assume machine AI ethics won’t diverge profoundly from the ethics humanity has developed over time. We’ve become far less ruthless and selfish in our thinking than in the past, both on an individual and collective basis. Most of the violence now rises from *irrational* approaches, not the supremely rational ones we can expect from Mr. and Mrs. AI.

Wait, there’s MORE AI fun here at CNET

Dear President Obama – Fund these projects FTW!

I’ve written about the remarkable Blue Brain project here and at Technology Report, but there is a new AI project on the block that some seem to think has more potential to attain “strong AI” – independent computer thinking and probably machine consciousness. That project is called SyNAPSE, and the lead researcher explains some of the thinking behind this amazing effort:

The problem is not in the organisation of existing neuron-like circuitry, however; the adaptability of brains lies in their ability to tune synapses, the connections between the neurons.

Synaptic connections form, break, and are strengthened or weakened depending on the signals that pass through them. Making a nano-scale material that can fit that description is one of the major goals of the project.

“The brain is much less a neural network than a synaptic network,” Modha says.
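That “strengthened or weakened depending on the signals that pass through them” idea is basically Hebbian plasticity – “neurons that fire together wire together”. A toy sketch of the rule in Python (the learning rate, decay, and firing probabilities are all invented for illustration, and this is of course nothing like the nano-scale hardware the project is after):

```python
import random

random.seed(0)  # deterministic toy run

def hebbian_step(w, pre, post, lr=0.1, decay=0.01):
    """Strengthen the synapse when the pre- and post-synaptic
    neurons fire together; otherwise let it slowly decay."""
    return w + lr * pre * post - decay * w

w = 0.05  # initial synaptic weight
for _ in range(100):
    pre = 1 if random.random() < 0.8 else 0           # pre-synaptic spike
    post = pre if random.random() < 0.9 else 1 - pre  # mostly-correlated response
    w = hebbian_step(w, pre, post)

print(f"learned weight: {w:.3f}")  # correlated firing drives the weight up
```

The whole trick of the project, as Modha describes it, is getting a physical material – rather than software like this – to behave that way.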

There’s not much information yet about this new project but a Wiki that appears to be open to the public has started here.

IBM and five universities are involved in this, with funding from DARPA, the US military’s cutting-edge technology agency. I’m glad to see what appears to be a very open architecture approach here, because there should be very real concerns that a militaristic AI would be less likely to be “friendly”, and once we open the Pandora’s box of machine consciousness and superintelligence there is little reason to think we’ll ever be able to close it again.

The upside of these projects is literally and quite simply beyond our wildest imaginations. A thinking, conscious machine will solve almost every simple problem on earth and is very likely to solve major problems such as providing massive amounts of cheap energy, clean water, and health innovation. Although I’m guessing we’ll still run around killing other humans for some time, it’s reasonable to assume that a thinking machine will be the last significant human innovation, as it ushers in the beginning of a remarkable machine-based era of spectacular new technological innovation.

Mashup Camp and Convergence08

Looking forward to two upcoming conferences – Mashup Camp and the very first Convergence 08 conference.

Mashup Camps have been coming to Mountain View for over two years, bringing great startups for their product launches as well as lively discussions about innovations and new products to help the mashup community. There also will be mashup experts from Google, Yahoo, Microsoft, Amazon, and many more key players. Programmable Web has the best coverage of the Mashup topic.

Convergence will have even more provocative content as the first conference to address the intersection of four technologies likely to shape the world in extraordinary ways: Nanotechnology, Biological technologies (gene splicing, stem cells, DNA mapping, life extension), Information technologies (internet and computing), and Cognitive technologies. This last would, I think, broadly include everything from brain-enhancing drugs and devices to artificial intelligence. AI is the most exciting category for me, and I remain convinced that we’ll see conscious computers within about 20 years – hopefully and very possibly less. Conscious computing is likely to change the entire planetary game to such a degree that it’s nearly impossible to predict what will happen *after that*, which is one of the issues that will be discussed at the conference.

My main concern is that proponents keep their predictions real, so this does not become a sort of brainstorming session for half-baked ideas and ideologies.

After millions of years of very slow biological evolution we’ve now entered a new age where technology is likely to eclipse most and probably all of our human abilities. Even that fairly obvious idea – which simply is an extension of current developments – leaves many people skeptical, cold to the idea, or even antagonistic about the changes that are coming. Like it or not … we are all in this together and it’s best to keep it that way as much as possible.

Intel: Computers Win by 2050

Intel’s chief recently explored some of the innovations that are shaping technology, and suggested that computers will surpass humans in intelligence by 2050. Although I think that is a pessimistic time frame, it is encouraging to see the notion of very intelligent and/or conscious computers discussed in mainstream company news.

Computer Reads Minds, World Yawns

One of the fun parts of hanging out in the technology world is getting a good sense of the next big thing before most folks tune into how significant it will be. I remember about 12 years back – in the early days of the commercial internet – when it became clear to me that a huge shift was happening that would send virtually everybody online. No amount of explaining or describing or showing people cool stuff could get most of them to understand the massive transition they were about to experience. As with so many technological innovations, the commercial internet had to be experienced by people at their own pace – often a painfully slow pace if you were watching it happen. Few who loudly proclaimed their luddite pride ten years ago would admit it today – most are using email and the internet, often with the same enthusiasm as the relatively small number of super early adopters in the tech and commercial communities who helped make it all happen.

I did want to note why I’m talking about the “commercial” internet vs. the “internet”. Contrary to what is often claimed, the internet is a pretty old structure, begun as a US military research project (ARPANET) in the late 1960s and then adopted by academia, where it pretty much languished for decades. I would argue that cheap computing, ISPs, and online services (thank you Prodigy, Compuserve, AOL, and more) then combined with graphical browsing (thank you Marc Andreessen and Mosaic friends) to create the backbone of the current “commercial internet” that has exploded onto the global scene as the key communication medium of all time.

So, what is the *next* big thing? Why, conscious computing of course! And it’s not just *big* like the internet. It’s super duper gigantic and earth-shaking, and it’s coming soon to a planet very near us all. Experts disagree about *when* conscious computing will happen, though I think very few who are paying much attention would suggest we won’t have it within 50 years. Many experts – and, I think, the body of current projects such as Blue Brain – suggest that we will have conscious computers that exceed human intelligence within 20 years, perhaps even 10. What happens *after* a machine becomes conscious is a whole new ballgame, and it is very hard to speculate about how that machine will evolve and, perhaps more importantly, how it will view other machines and … us. Will the conscious machines get smarter slowly, or almost explosively fast, surpassing all of humanity within months or even minutes of first attaining consciousness?

A simple way of understanding what many AI researchers are talking about here is to recognize that the conscious machine is likely to be “recursively self-improving”, meaning it will be able to build and/or program better versions of itself soon after attaining consciousness – probably in something analogous to the way we humans improve our intellects and skills, but much, much faster. Humans pull this off too: I’m proud to say my wife and I have managed to create and program two impressive organic intellects who are now able to program themselves, and we love them dearly. However, we were constrained by human organic evolution, so it took us many years. Artificial intellects will likely be able to reproduce quite a bit faster and more effectively (no offense to any of you expectant parents intended).
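For fun, the recursive self-improvement idea above reduces to a few lines of toy Python – compound growth, where each generation’s capability buys the next improvement. The 20% rate and 25 generations are pulled out of thin air purely for illustration:

```python
def self_improve(capability=1.0, rate=0.2, generations=25):
    """Toy takeoff model: each generation builds a successor that is
    a fixed fraction better than itself, so growth compounds."""
    history = [capability]
    for _ in range(generations):
        capability += rate * capability  # smarter designer -> smarter design
        history.append(capability)
    return history

trajectory = self_improve()
print(f"after 25 generations: {trajectory[-1]:.0f}x starting capability")
# compounding at 20% is 1.2**25, roughly 95x; the punchline is that a
# machine "generation" might be a day (or a minute) rather than decades
```

The model is silly, but it shows why the debate centers on the *rate*: slow compounding looks like ordinary progress, fast compounding looks like an explosion.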

Ironically for me, several of my favorite programming experts do not seem to view conscious computing as something we can expect anytime soon. I’ve puzzled over this because they certainly know the mechanics better than I do, but I remain convinced that they are putting too much faith – sometimes literally – into the idea that humans are somehow … fundamentally different … from other physical manifestations of the world. I’m confident we are not all that different, and in that light consciousness is probably best viewed as a sort of tangential aspect of our lives rather than a key component.

And speaking of tangents, this whole post was going to be about this Carnegie Mellon AI project where the computer was reading people’s minds. Simple words, yes, but still a rudimentary form of mind reading based on EEG output.
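For flavor, the general shape of that kind of demo is just pattern classification – here as a toy nearest-centroid classifier. The “EEG feature” numbers are completely invented, and this bears no resemblance to the actual Carnegie Mellon pipeline; it only shows the idea of matching a new brain recording to the closest known word pattern:

```python
def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the label whose training centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Invented "EEG feature" recordings taken while a subject thinks of each word
training = {
    "tool":     [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "dwelling": [[0.1, 0.9, 0.7], [0.2, 0.8, 0.9]],
}
centroids = {word: centroid(recs) for word, recs in training.items()}

print(classify([0.85, 0.15, 0.2], centroids))  # → tool
```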


Of Rats and Men: Rat brains, Blue Brains, and the coming AI age.

SEED magazine reports on Blue Brain, which IMHO is the most likely project to attain machine-based self-consciousness. This in turn will change everything completely and usher in a new era that will bring more change to humanity than any previous event in history.

“The column has been built and it runs,” Markram says. “Now we just have to scale it up.” Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain. “If we build this brain right, it will do everything,” Markram says. I ask him if that includes self-consciousness: Is it really possible to put a ghost into a machine? “When I say everything, I mean everything,” he says, and a mischievous smile spreads across his face.

As I’ve noted many times before, I believe that machine consciousness will bring profound changes to humanity, changes that will be hugely positive. Right now we allocate resources very ineffectively. Conscious computers will be able to do vastly superior resource allocation and make staggering design improvements; these alone will likely resolve global resource issues such as energy, food, and water. It’s less clear whether the AI age will resolve problems that have our human defects as a core cause. Health and education should benefit enormously, but some of the human thinking that creates war, intolerance, crime, and suicide will persist and resist those improvements.

However, the abundance that the AI age will bring to the world should allow us to manage many of these human problems much more effectively.

Markram:  “What is holding us back now are the computers.”  
Markram estimates that in order to accurately simulate the trillion synapses in the human brain, you’d need to be able to process about 500 petabytes of data – about 200 times more information than is stored on all of Google’s servers. 
Energy consumption is another huge problem …. Markram estimates that simulating the brain on a supercomputer with existing microchips would generate an annual electrical bill of about $3 billion …. But if computing speeds continue to develop at their current exponential pace, and energy efficiency improves, Markram believes that he’ll be able to model a complete human brain on a single machine in ten years or less.
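Markram’s “ten years or less” is easy to sanity-check against the roughly 200x gap in the quote, if you assume – my assumption, not his – that usable computing capacity keeps doubling every 18 months or so:

```python
import math

# From the quote: whole-brain simulation needs ~200x more capacity than
# today's biggest server farms. The 18-month doubling period is an
# assumed Moore's-law-style rate, not a figure from the article.
shortfall_factor = 200
doubling_period_years = 1.5

doublings_needed = math.log2(shortfall_factor)     # about 7.6 doublings
years = doublings_needed * doubling_period_years   # about 11.5 years
print(f"~{years:.1f} years to close a {shortfall_factor}x capacity gap")
```

So the back-of-envelope answer lands at a bit over a decade – right in line with his estimate.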

This 10-year estimate is even more optimistic than Ray Kurzweil’s, but in the same league. Although most of the computer programmers I know strongly reject this view, I think it’s also possible that AI could emerge with very limited human intervention from massive parallel-processing environments such as Google’s search server farm of hundreds of thousands of connected machines. Consciousness and human intelligence, if they are as overrated as I believe, are best seen as something of a byproduct of simpler, evolutionarily derived mental processes and activities. As the number of interconnections in machines approaches the number we have in our brains (again we bump into a 10-20 year time frame), and machines are programmed with current routines to do the same mental tasks we do, I’ll be very surprised if machine consciousness requires more than a modest level of additional tweaking of the type they have already started at Blue Brain.

So, I’m not buying my laptop a birthday cake quite yet, but remain cautiously optimistic about the end of the world as we know it.   

When computers can reason, will they want us around?

It is so encouraging to see mainstream press, like the Financial Times, reporting on what I think will become the key issue of our lifetime – conscious machines. Although this article pretty much dodges the most intriguing aspects of the debate over AI, rational computers, and consciousness, it does offer some insights into the state of the science in the semantic web, where AI routines are used to create a better search experience.

One researcher suggests that he’s given up on the idea that simply creating a massive neural network and priming it with some info will lead to conscious thought but I still think that hypothesis has not been tested nearly enough because our computing capacity is still far short of what you and I have between our ears in the three pounder we call a brain.    Brains offer a spectacular number of individual neurons, and in turn a simply staggering number of interconnections between those neurons.   It will be another decade or so before we have that processing capacity in computers, but it will certainly happen.   I’ll be surprised if our consciousness and intellectual abilities are as profoundly amazing as we like to …. think they are.    In fact I’d wildly predict that we’ll have conscious machines within 20 years and that those conscious machines will surpass us in every imaginable intellectual and creative ability within months – probably days – of consciousness.    Is this because I’m hugely optimistic about technology?    NO!   It’s because I’m hugely confident we overrate our feeble human abilities, which I’d suggest are just a few shades richer than those of our dogs and cats.