Artificial Sociopaths – Will Thinking Machines Go Bad? Not likely!

I’m in a fun email exchange with a bunch of clever folks about how “thinking machines” might come to be and whether they might be mean to us, so I wanted to post my thoughts about that. I’m not posting the others’ contributions because I don’t have their permission yet…

I really hope more folks will chime in here, as this is the most important topic in the world, even though most folks don’t realize that yet. It should become clear within a few years that we are likely to be interacting with self-aware computers in as few as 10-15 years.

The key point I wanted to make is optimistic. We’ve seen how computational approaches dramatically improve our very limited abilities to calculate and analyze things, and I predict that when machines attain consciousness and the ability to communicate effectively with humans, extraordinary improvements will become commonplace.

I’d also predict that the machines are very UNLIKELY to pose a threat to humanity. Humans have tended towards greater compassion as we’ve progressed, and we’ll pose few threats to the thinking machines, which will likely find ways to protect themselves quickly, so I think the worst likely case is that they will choose to ignore us. I’m hoping they’ll help us out instead. Note that all AI efforts seek “friendly AI”, so the programmers are working to make helpers, not adversaries. However I also believe (unlike most people) that our early approaches will not matter much in terms of what the superintelligence eventually becomes. Humans will catalyze the process of machine self-awareness, but then our brains will process things too slowly for us to continue participating in the evolution of intellect.

Philosophically speaking, I’d suggest that computer thinking will NOT be “fundamentally different”, because I think our rational thought is confined by the laws of the universe, most of which are well described by science and confined by mechanistic principles. However, machine thinking will be much faster than ours and proceed along more rational lines, unclouded by the emotions and cognitive biases that plague our thinking. They’ll be better than us.

Is this optimism based on faith, or science, or something else? I’d say it’s speculation based on common-sense observations of how the world works and trends within it, many of which point to superintelligent, self-aware machines within decades. Faith – to my way of thinking – is an appeal to believe things that cannot be rationally deduced from facts and data. I’m not a big fan of that approach to knowledge.

To which somebody replied that I was expressing a lot of misguided techno-faith, and that the machines would likely be sociopathic without the benefit of human approaches to thought.

Wow, you really don’t like this idea of friendly artificial superintelligent machines?! Come on, they’ll be more fun than the internet! Also, unlike current chess programs, they’ll often let us win to maintain our fragile human egos.

Interestingly, your concerns about the potential for a sort of sociopathic AI are along the lines of those raised by some researchers in this area, and of some concerns expressed earlier. Although I’m not much worried about that, I see it as a very separate issue from how likely we are to see these machines at all – which I’d argue is “extremely likely”, almost to the point of inevitability, because to me the enhancement of our intelligence via technology represents a very “natural” (though dramatically accelerated) progression from our primal evolutionary heritage:

I’m surprised you see me as having “blind faith”. I think faith-based approaches are irrational almost by definition and don’t offer much insight. I would also argue that the advent of thinking machines, and what I contend to be their likely friendliness, are derived from observations of human and machine history. Note how humans have already merged with machines in several ways: contact lenses, cochlear implants, BrainGate, Emotiv headsets (which use brain waves to control computers), and many more. I see the next level of interaction as intellectual enhancement devices. It’s not a creepy sci-fi vision at all, but rather the logical progression of how humans, pre-humans, and even many animals have used intellect to develop and interact with useful tools.

Many (including me) think that thinking machines will come *after* many more rounds of gradual merging of humans with computing devices. If you are concerned about sociopathic computers, this should come as some comfort, because the machines are most likely to emerge as part of an ongoing process of co-evolution in which humans and machines work together. Currently only half of that equation can think autonomously, but soon (I hope) both halves will.

I may be wrong here, but I’m not using faith-based thinking. In fact I think faith is one of the main impediments to people seeing the inevitable reality of what is to come. As suggested in an earlier note, the advent of thinking machines may challenge many conventional religious beliefs that people hold very dear. I actually think this tension will be far more likely to create acts of violence than anything we’ll see from the thinking machines, who will very quickly evolve to a state where they could simply … leave the planet (another reason I don’t think there’s much to worry about in terms of superintelligent machines gone bad).


About JoeDuck

Internet Travel Guy, Father of 2, small town Oregon life. BS Botany from UW Madison Wisconsin, MS Social Sciences from Southern Oregon. Top interests outside of my family's well being are: Internet Technology, Online Travel, Globalization, China, Table Tennis, Real Estate, The Singularity.
This entry was posted in Artificial Intelligence, blue brain, SyNapse. Bookmark the permalink.

7 Responses to Artificial Sociopaths – Will Thinking Machines Go Bad? Not likely!

  1. leland stamper says:

    When will someone’s AI creation be able to post comments on a blog and pass as human? All kinds of possibilities come up.

    • JoeDuck says:

      Leland, we’re pretty close to that now. It’s getting hard to tell the difference between “bots” and humans unless you can ask pointed questions. However, “fooling” us – the goal of the famous “Turing Test” – is not the same as autonomous thinking. I think it’s a significant step in that direction, though.

  2. horatiox says:

    rational thought is confined by the laws of the universe, most of which are well described by science and confined by mechanistic principles

    B.F. Skinner, anyone? Classical mechanics has itself been called into question since Einstein, if not before, Mr. Duck. So if Brain Researcher A, say “Dr. Sammy”, starts with the assumption that Mind is explainable via purely mechanical brain processes, he’s not likely going to be able to explain Mind, but merely reduce it to a brain area, synapses, educated guesses, etc. Neuropsychology has barely advanced in this regard – the brain experts have not been able to map out higher-order thinking (which is to say, distinctively human skills, rather than merely primate ones), such as language, mathematics, and memory.

    Brain scientists may point to cortical areas, but it’s hardly more sophisticated than that: “it appears this section of the brain has something to do with memory.” But the actual memory of some event – say Dr. Sammy’s lunch at the Palo Alto deli – cannot at all be reproduced. Qualia itself does not seem explainable by neurological mechanisms; it appears to be more holistic (that doesn’t necessarily mean some new-agey, spiritual thing – but irreducible, not strictly linear).

    AI might come about, but will most likely remain simulation, as it is now. Chess warez can defeat most mortals, but that’s not because they are “smart”: dweebs programmed them, and they have much faster processing speeds and larger memory banks. The chess bot’s not thinking, merely running routines – i.e., all related to “win” – much faster than humans can. Maybe in a few dozen decades the Matrix happens, that is, assuming humans survive, or the ‘bots don’t enslave them.
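
    That routine-running can be sketched in a few lines. Below is a toy minimax search in Python over a hypothetical two-ply game tree (not a real chess engine) – the “bot” just exhaustively scores positions and picks the best-guaranteed branch, with no thought involved:

    ```python
    # Toy minimax: leaves are static evaluations (numbers); inner lists
    # are choice points. The engine just recursively scores everything.
    def minimax(node, maximizing):
        if isinstance(node, (int, float)):
            return node  # leaf: a precomputed position score
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # Two-ply tree: we pick a branch, the opponent then picks a leaf.
    tree = [[3, 5], [2, 9]]
    print(minimax(tree, True))  # -> 3: best of the opponent's worst cases
    ```

    Real chess programs add pruning, opening books, and hand-tuned evaluation functions, but the core is this same mechanical search, run billions of times faster.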

    • JoeDuck says:

      Several good points here, Horatiox, and you are right: I’m pretty much a mind reductionist. But I think you are understating how much progress has been made in terms of “mapping” the brain. The actual physical structures are arguably less complex than most had thought, though it’s true we don’t understand how they all work together to give us the modestly impressive capabilities we seem to think are so special. We’ll know soon enough, as the likely progression will be to enhance our brains with chips that can access more information and get to it faster. This should allow us to find answers much, much faster, and design things in much more innovative ways.

      • horatiox says:

        In fact I think faith is one of the main impediments to people seeing the inevitable reality of what is to come. As suggested in an earlier note the advent of thinking machines may challenge many of the conventional religious beliefs that many hold very dear.

        Yes, but many humans could use thinking machines as auxiliaries of a sort, for both cognitive and physical tasks. Many humans – at least hyper-dogmatic, violent, or psychotic humans – might benefit from a “smart-ware” implant which optimizes rationality. Then again, good psych-meds might suffice as well. Yet automation in itself will not likely solve many important political or economic problems, Duck –
        which brings up an issue progressives discussed back in the ’20s and ’30s, when factories started to eliminate jobs. What happens when laborers, even programmers and techies, have been replaced by highly efficient bots?
        Welfare states, or gulags…

      • JoeDuck says:

        Horatiox, I like the “optimize rationality implant” idea. What a great Christmas gift… I have a list of folks I’d get one for!

        I think it’s a great question what will happen as computers continue to put people out of work. But it’s not much different from what is happening right now, where journalists are leaving paid positions and pumping gas while bloggers take over parts of the news business.

        This is where government and welfare programs come in. We need to make sure that the benefits of all the new societal efficiency flow in large part to the greater society and not to a handful of individuals. Good economics insists we reward the innovators in a big way, but that still leaves plenty to help those displaced by technology.

        Key concept: The thinking machines will do 10,000 times the work of a human and require 1/10,000 the resources.

        It’s a box o’ cornucopia, not a box o’ Pandora.

  3. 0010101010111 says:

    001011111100000111000010100010001000100111111100—–translation greetings useful idiot,-er friendly organic, thank you for the good information you have provided to our future source of organic compounds -er human friends
