On Computers ‘Learning’

This post is one that has been restored after the hacking. All original comments were lost.

“It’s not listed”

The final act of the movie WarGames shows a computer solving tic-tac-toe. This isn’t especially difficult. Given the symmetry, there are only three opening moves; given those, only a small number of counter moves; and so on. Pre-high-school students easily learn to program the game.
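To see how little there is to it, here is a minimal sketch (mine, not the movie’s, and not part of the original post) that solves the game the dumb way: enumerate every move and counter move and score the final positions. The whole tree holds only a few hundred thousand positions, and the search comes back 0, meaning a draw with best play on both sides.

```python
# Brute-force tic-tac-toe "solver": enumerate every move and counter move.
# The board is a 9-character string; X moves first. Illustration only.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board, player):
    """+1 if X wins with best play, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0                              # board full: draw
    nxt = 'O' if player == 'X' else 'X'
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == ' ']
    return max(scores) if player == 'X' else min(scores)

print(value(' ' * 9, 'X'))   # prints 0: with best play, nobody wins
```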

The computer in that movie, after taking an inordinately long time to “learn” that if both opponents have their wits about them the game ends in stalemate, then goes on to “learn” that playing war with nukes must result in the same outcome. Nobody wins, thus nobody should play.

What the computer didn’t reckon is that even games which are as dull as tic-tac-toe, games with outcomes anybody can guess, are still sometimes played—maybe to kill boredom. Or just to kill. Skip it.

Machine “learning” is hot these days. Word is that money is flowing fast into Silicon Valley start-ups which have “artificial intelligence”, “machine learning”, and the like in their names. Once again we are promised computers that will solve the riddles of the ages. And people want in. (If Yours Truly had any business sense, he’d have figured a way to cash in on this. Skip this, too.)

One more anecdote. Scientific American (a political monthly) recently published an article under the headline and plug “Game Theorists Crack Poker: An ‘essentially unbeatable’ algorithm for Texas hold ’em points to strategies for solving real-life problems without having complete information.”

The game itself, called “heads up”, is two-man with limited, fixed bets, and is vastly simpler than other forms of poker. The article points out there are “3.16 × 10¹⁷ states in [the game], and 3.19 × 10¹⁴ possible points at which a player must make a decision.” Big numbers, but not huge, not these days. Most of the work done to “solve” this game was in figuring how to store and access these states in reasonable time.

But then that’s where all the work is done in solving games: laying out the possible moves, counter moves, counter-counter moves, etc., and accessing them. This version of poker, once solved, was no different from tic-tac-toe except for one thing: size. You might say that “randomness” or “chance” is another point of departure, because in poker cards are shuffled and dealt, meaning you don’t know exactly where you’re starting from like you do in tic-tac-toe. But this is a mistake. You do know that the possible starting points are in a fixed set, points which are easily discovered.
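And that fixed set is trivial to count. A back-of-envelope sketch (my arithmetic, not the article’s): each of the two seats is dealt two hole cards from a 52-card deck, so every possible starting point can be written down before a single hand is played.

```python
from math import comb

# Distinct ways to deal two hole cards to each of the two seats in
# heads-up hold 'em. A rough count for illustration.
deals = comb(52, 2) * comb(50, 2)
print(f"{deals:,}")   # 1,624,350 starting points, all knowable in advance
```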

Now to the “learning.” Has a computer “learned” to play tic-tac-toe after it has been told the simple set of rules, which are of the type “If this, do that”? In one sense of the word, it has. But then a river can be said to have learnt the path downhill in that same sense. So that can’t be right, and neither is saying “it has been told.” There is nothing there to tell. There are only electronic versions of levers, buttons, and springs.

The computer is a machine that takes in its environment (“input”, “commands”) and chugs and churns in fixed, pre-determined ways. If the environment is in state X, the computer must necessarily (excepting malfunctions) end in state Y. Each and every time. If you think not, then you’re probably imagining states X which are not identical.
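That claim is easy to show in miniature. The toy below is my own sketch, not anything from the post: a fixed mapping from state to state. Hand it the identical state X and it lands on the identical state Y, every run. What passes for “learning” is only more state being carried along; the rule itself never changes.

```python
# A machine is a fixed rule from state to state: electronic levers and springs.
def step(state_x):
    return (sum(state_x) * 31 + 7) % 101      # arbitrary but fixed rule

x = (3, 1, 4, 1, 5)
print(step(x) == step(x))    # True, each and every time

# "Training" just folds past inputs into the current state; the rule is unchanged.
def step_with_memory(state_x, memory=()):
    memory = memory + (state_x,)
    return sum(sum(s) for s in memory) % 101, memory
```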

So computers can’t learn. But we say dogs can. They can be trained, say, where and when to poop, an activity which they do well (excepting malfunctions). Yet dogs are machines, too. What makes computers different from dogs? For one, dogs are alive. And a dog’s “processing power” is larger than any computer’s. Much larger. And different because it is alive. And the dog takes incalculably greater input from its environment than any man-made machine. This extra versatility makes dogs more useful than computers for many things, but there’s no philosophical difference. It’s still that state X is followed necessarily by state Y. “State X” of course is not readily delimitable—at the least it is time varying, accumulating, and of huge dimension—so it’s harder to know with certainty what “state Y” will be.

That means when we say a dog or computer “learns” we’re attributing to them something of which they are not capable. Anthropomorphization on our part. Dogs we say are intelligent, too. And this gives the sense that we’ll eventually build computers with similar intelligence. We might at that. Weak A.I.

Humans do learn, but then we aren’t dogs nor are we computers. We’re much more.
