My friend Marvin dropped in today on his way down to California and we were discussing artificial intelligence. Like most of my programming pals he's much more skeptical than I am about how soon we'll have conscious computing, but, like them, he's also far more knowledgeable about the difficulty of programming complex routines, let alone consciousness. Of course, none of them are nearly as pretty as Google uber-Engineer Marissa Mayer, who estimated 10-15 years, so I'm going with her estimate.
I'm still trying to decide whether programmers are viewing things too narrowly when they assume that the conditions required for conscious thought are so profoundly complex that engineering them will be nearly impossible. I prefer the idea that simply having computational power of brain-equivalent speed and scale will push machines very close to consciousness, once (relatively) simple routines are developed that create conversations within those systems.
When I noted that many in the AI community are now wildly optimistic about the prospects for strong AI within 10-20 years, Marvin correctly noted that people in the AI community were predicting strong AI a *long* time ago. This led to the interesting question of "prediction bias": how often in history have predictions been reasonably accurate, and how well have the time estimates on those accurate predictions held up? This would be a fun mini-research project to do sometime, though obviously it would itself be subject to a lot of bias depending on how you picked the criteria, the predictors, and the predictions.
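If I ever get around to that mini-research project, the scoring part would be easy enough. Here's a minimal Python sketch of one way to do it; the entries and the scoring rule are entirely hypothetical, just to show the idea of comparing how long something actually took against how long the predictor said it would:

```python
# Hypothetical sketch for the "prediction bias" mini-research project.
# All entries below are made-up placeholders, not real historical data.

predictions = [
    # (who, year made, predicted year of arrival, actual year or None if unrealized)
    ("Early AI researcher", 1965, 1985, None),
    ("Networking forecast", 1970, 1990, 1995),
]

def score(made, predicted, actual):
    """Return the ratio of how long it really took to how long was predicted."""
    predicted_horizon = predicted - made
    if actual is None:
        return None  # hasn't happened yet; we can only bound the error
    return (actual - made) / predicted_horizon

for who, made, predicted, actual in predictions:
    ratio = score(made, predicted, actual)
    if ratio is None:
        print(f"{who}: predicted {predicted - made} yrs out; still unrealized")
    else:
        print(f"{who}: off by a factor of {ratio:.1f}")
```

The hard part, of course, wouldn't be the arithmetic but everything feeding it: which predictions count, which predictors count, and what counts as "arrived."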
Along those bias lines, this great Wikipedia article popped up, listing a huge number of cognitive biases. All of us should take a look and reflect on how often we fall into these irrational traps.