Chinook’s Perfect Checkers Game


The best checkers player in the world is now a computer program that will never again lose a game.   Chinook was designed simply to play checkers, but its team went on to solve the game outright, proving with a combination of forward search and enormous endgame databases that perfect play by both sides ends in a draw.
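The core idea behind solving a game is exhaustive search: evaluate every line of play until each position's value is known. Here's a toy sketch of the same principle applied to a trivial subtraction game (a stand-in I'm using for illustration; the real checkers proof was vastly larger and leaned on precomputed endgame databases):

```python
from functools import lru_cache

# Toy illustration of "solving" a game by exhaustive search, in the
# spirit of Chinook's proof (this is a simple subtraction game, not
# checkers).  Rules: players alternate removing 1-3 stones; whoever
# takes the last stone wins.

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win with `stones` left."""
    if stones == 0:
        return False  # no move available: the previous player took the last stone
    # A position is winning if some move leads to a losing position for the opponent.
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

# The solved game: the player to move loses exactly when stones % 4 == 0.
for n in range(1, 13):
    assert wins(n) == (n % 4 != 0)
print("positions examined:", wins.cache_info().currsize)
```

Once every position's value is cached, the program can never be beaten from a winning position, which is exactly the sense in which a solved game is unlosable.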

Many would suggest that checkers and chess programs (which are now the best players in the world at their respective games) are not a reasonable metaphor for human intellect, but I disagree.   Within these limited game realms, such a program is, *factually speaking*, a vastly superior form of intellect.

Our human abilities have evolved over millions of years to branch out in far more than a single direction, and that is impressive.  It’s also fairly clear that these chess and checkers programs are not “conscious,” despite being better than we are at the games.   However, I don’t think it’s reasonable at all to assume that there is something “extra” that makes human intellect and consciousness unattainable for a mechanism.  On the contrary, we are *defective* thinkers compared to machines doing comparable things.   Even a cheap Wal-Mart calculator can “outthink” the best mathematician in the world at raw calculation.

As we start blending the power of our organic computing devices (aka brains) with mechanical computing devices, I expect a more rational, resource-optimized world where economic and environmental balances are met: a world filled with happy, glowing faces and prosperity for all.   Yes, really, I do!

11 thoughts on “Chinook’s Perfect Checkers Game”

  1. Brute-force AI is no longer a theoretical problem; it’s an engineering problem. What we do with such entities, and how they fit into our civilization (and our world view), is a different matter. AI as a purely rational phenomenon is prey to the limitations of rationality. I suspect that there will be a partnership that may eventually lead to diverse classes of “intelligence.”

  2. Computers are usually set to certain tasks, such as printing ‘Hello World’ or singing “Daisy Bell.” Tying shoelaces is difficult for a computer, and most difficult is making sure some darned robot doesn’t try to tie your shoelaces in the middle of the intersection. As we develop more autonomous agents that cooperate with each other to sense the environment and pool their votes on actions to be taken, we will have military vehicles able to traverse hostile terrain with the confidence and wisdom displayed by cockroaches, but we will also have a lot of soldiers who are unemployed meat cutters whose jobs were taken by edge-aware computers.
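    The “pool their votes” idea is just ensemble decision-making. A minimal sketch, with hypothetical agent readings standing in for real sensor data:

```python
from collections import Counter

# Minimal sketch of autonomous agents pooling votes on an action.
# The readings here are made up; a real system would weight each
# vote by the agent's sensor confidence rather than count them equally.

def pooled_action(votes):
    """Return the action backed by the most agents (ties go to the first seen)."""
    return Counter(votes).most_common(1)[0][0]

readings = ["advance", "halt", "advance", "detour", "advance"]
print(pooled_action(readings))  # majority of agents say "advance"
```

Even this crude majority rule illustrates why a swarm can be more robust than any single agent: one faulty sensor is simply outvoted.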

  3. I think you are both suggesting that I’m making something akin to a “leap of faith” in claiming that this type of “intelligence” tells us about our own intelligence. We won’t really know that until computers start ‘splainin’ things to us, which I hope happens in my lifetime.

    Tharwood, are you saying there is something about human intelligence that cannot be mechanistically defined? What’s an example of a phenomenon you sense – in any fashion – that does not seem to lend itself to materialistic interpretation?

    I’d suggest that in all likelihood the thing we call “consciousness” is a function of the number of interconnected processes rather than some “deeper” layer of stuff. This does not have to diminish what it means to be “human” vs a “sack of water”.

  4. are you saying there is something about human intelligence that cannot be mechanistically defined?

    For all X, such that X is a Gödel number… shall I continue? 😉

    Human brains are as much chemical as electrical in nature — at least, they were thought such back in the late 1970s when I last looked deeply into the state of the art. That neurochemical machinery can process staggering amounts of information in ways I haven’t even begun to comprehend.

    Brute-force AI, on the other hand, I work with every day. So I have a fair degree of confidence that very sophisticated rational problems can be solved. In my little niche, it’s mostly a question of more memory and better bus speeds.

    However, every time our CTO comes back from a trip to the field, he shows us another pet peeve from a customer and asks us, “why didn’t we find this?” I’ll have to think of a sanitized example I can share.
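    The Gödel-number quip above can be made concrete: classical Gödel numbering encodes a sequence of symbols as a single integer via prime exponents, so every formula gets a unique number. A toy sketch, with a made-up alphabet chosen purely for illustration:

```python
# Toy Gödel numbering (hypothetical six-symbol alphabet): encode a
# sequence of symbols as one integer using prime exponents.  Unique
# prime factorization guarantees the encoding can be reversed.

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
ALPHABET = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def godel_number(symbols):
    """Product of p_i ** code(symbol_i) over the sequence."""
    n = 1
    for prime, sym in zip(PRIMES, symbols):
        n *= prime ** ALPHABET[sym]
    return n

print(godel_number(["S", "0"]))  # 2**2 * 3**1 = 12
```

This trick is what lets a formal system talk about its own formulas as numbers, which is the doorway to the incompleteness results being alluded to.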

  5. Tharwood – OK, but that kind of incompleteness is going to apply to *any* intelligence, human or artificial.

    So I’d rephrase the question as “Is there something about human intelligence that leads you to think it cannot be reproduced … without humans?” For me it takes only a small leap of logic to say “no.” C’mon, we really aren’t all that impressive!

  6. … diverse classes of “intelligence.”

    This is a really provocative idea, and great grist for a sci-fi story.

    Even if it turns out we humans have something “special” that cannot be replicated without eggs and a grab bag o’ spermatozoans, it seems almost certain that “intelligent machines” will eclipse our mental abilities in many areas. The most promising of these would be ways to optimize resources more effectively. Even from my own limited vantage point, it’s easy to see how much improvement we could bring to, say, the transportation sector in the USA if, for example, more standardization and more risk were tolerated.

  7. Is there something about human intelligence that leads you to think it cannot be reproduced … without humans?

    Oh, no, not at all! But brute-force AI is not it 🙂

    A more interesting question, to me, is, “what is that something, and how does it play with brute-force AI?” One big reason I don’t credit the AI Singularity theory is the inherent fragility of them brains in boxes. But could a smart brute-force AI and, say, a dog’s nervous system become a functionally integrated entity? I have no idea. I am also tired and it is Friday…

  8. But could a smart brute-force AI and, say, a dog’s nervous system become a functionally integrated entity?

    I say “sure!”, but feeding after 6pm will become a LOT more dangerous than now.

    Larry Page over at Google was recently suggesting that algorithms alone will be able to duplicate human thought, but your point that a brain is a lot more than synaptic firings is an important one.

    However, I’m inclined to think that *thought* is probably defined in totality by massive numbers of synaptic firings in various patterns.

    I’m not sure I understand Kurzweil’s singularity stuff (actually, I’m sure I do *not* understand it). However, I think a challenge to his idea is that it hasn’t happened yet. With a universe nearly 14 billion years old, you’d think “somebody out there” would have stumbled upon the ingredients for that AI singularity by now.
