It helps every time you hear “AI” to think “statistical model”. Because, of course, that’s what AI is. Statistical models layered with clever data processing, that is. Curve fitting.
Saying “statistical model” is bound to put any audience, except that composed of the toughest nerds, fast asleep. You can’t terrorize or enthrall anybody about our “Coming Statistical Model Future”.
Imagine saying “Statistical models will bring many wonders. They may also destabilize everything from nuclear détente to human friendships” and thinking you’d hear anything except snores or giggles in return.
How about this?
Humanity is at the edge of a revolution driven by statistical modeling. It has the potential to be one of the most significant and far-reaching revolutions in history, yet it has developed out of disparate efforts to solve specific practical problems rather than a comprehensive plan. Ironically, the ultimate effect of this case-by-case problem solving may be the transformation of human reasoning and decision making.
This revolution is unstoppable. Attempts to halt it would cede the future to that element of humanity more courageous in facing the implications of its own inventiveness. Instead, we should accept that statistical modeling is bound to become increasingly sophisticated and ubiquitous, and ask ourselves: How will its evolution affect human perception, cognition, and interaction? What will be its impact on our culture and, in the end, our history?
Class A soporific. No one will panic hearing it.
Make it “AI” instead and sweat pops out on brows. The imagination flares. Computers that can learn are going to take over the universe!
Such, anyway, is Henry Kissinger’s worshipful attitude. And Eric Schmidt’s and Daniel Huttenlocher’s.
Just listen to the way these guys talk. Guys you’d think would know better (my emphasis).
…developers of AlphaZero published their explanation of the process by which the program mastered chess—a process, it turns out, that ignored human chess strategies developed over centuries and classic games from the past. Having been taught the rules of the game, AlphaZero trained itself entirely by self-play and, in less than 24 hours, became the best chess player in the world.
No it didn’t.
This glorified calculator instead fit some curves, the nature of which was specified by human beings, as was the method of fitting them.
The statistical model, i.e. the fitted curves, forecasted certain outcomes conditional on different game states, and it turns out some of those forecasts were better than some made by people.
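To make the curve-fitting point concrete, here is a minimal, hypothetical sketch. The “model” below is nothing but a straight line fitted to made-up data: a human picks the form of the curve (a line), the machine grinds out the coefficients, and the fitted curve then “forecasts” outcomes. The data and variable names are invented for illustration.

```python
# Minimal illustration of "AI" as curve fitting. A human chooses the
# form of the curve (here a straight line y = a + b*x); the machine's
# only contribution is computing the coefficients from the data.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical data: some numeric "game state" x and observed outcome y.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
a, b = fit_line(xs, ys)

def predict(x):
    """The fitted "model": a forecast conditional on the state x."""
    return a + b * x
```

Everything “learned” here was dictated in advance by the human who chose a line rather than, say, a parabola. Scale this up in dimensions and flexibility and you have the guts of the thing being sold as AI.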
Well, this is no surprise. Calculators have long been faster than men at adding and subtracting, just as bikes can beat men in a race. We don’t generally fear bikes. (Unless you live in NYC.)
Chess has trivial rules. Trivial. It takes no genius to encode them. Why shouldn’t fast calculators (computers) beat slow ones (men) at this simple game?
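How trivial? Trivial enough that one of the rules fits in a few lines. A sketch, assuming 0-indexed board coordinates (the full game needs more bookkeeping of the same kind, not more cleverness):

```python
# The legal moves of a knight, encoded completely. The rest of chess
# is more bookkeeping of exactly this sort, not deeper insight.
def knight_moves(file, rank):
    """All squares a knight on (file, rank) can jump to, coords 0..7."""
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(file + df, rank + dr) for df, dr in deltas
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]
```

A knight in the corner gets two moves, one in the center gets eight, and no genius was required to say so.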
Another game with trivial rules is Texas Hold ’em (thanks to Victor Domin for the tip). The odds of getting a card are easily fixed knowing what is on the table. The odds of whether a person is bluffing are only slightly harder. Part of those odds are conditional on the bet sizes that came before, the size of the current bet, the remaining money of the players, and past performance of those players—in a mathematical way that has to be guessed by a man.
It doesn’t take much to guess. But it takes a lot to calculate once the guess has been made. And, lo, statistical models are now beating human players.
The amount of creative input in the modeling is minimal: the programmer says, “I think past bet sizes influence the odds of hands in this mathematical way.” Maybe the guess is wrong at first, but some honing gets it in the ballpark.
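A hypothetical sketch of what such a guess looks like. The functional form below, a logistic curve in the bet-to-pot ratio, is the human’s guess; the parameters `k` and `mid` are the knobs that get honed against records of past hands. The form, the parameters, and the numbers are all invented for illustration.

```python
import math

# A human's guess: the probability a bet is a bluff rises with the
# bet's size relative to the pot, along a logistic curve. The form is
# the guess; the parameters (k = steepness, mid = the bet-to-pot ratio
# at which bluffing odds hit 50%) get tuned against past hands.
def bluff_prob(bet, pot, k=3.0, mid=0.75):
    ratio = bet / pot
    return 1.0 / (1.0 + math.exp(-k * (ratio - mid)))
```

Once the guess is made, the rest is calculation: the computer evaluates this curve millions of times faster than a man can, which is the whole of its advantage.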
Chess involves memorizing lots of things, and statistical models are better at that than humans. Poker is probably less memory-intensive, and if it’s anywhere humans will figure a way to beat a computer, once they learn the computer’s way of betting, it’s with this game.
But these kinds of tasks are, quite literally, child’s play. Less than child’s play. Consider instead of the question “Given the status of the board now, which piece should I move where?” this question: “How many times should President Trump visit Korea?”
Kissinger et al. think it easy, though:
Hardly any of these strategic verities [such as nuclear deterrence] can be applied to a world in which AI plays a significant role in national security. If AI develops new weapons, strategies, and tactics by simulation and other clandestine methods, control becomes elusive, if not impossible. The premises of arms control based on disclosure will alter: Adversaries’ ignorance of AI-developed configurations will become a strategic advantage—an advantage that would be sacrificed at a negotiating table where transparency as to capabilities is a prerequisite. The opacity (and also the speed) of the cyberworld may overwhelm current planning models…
More pointed—and potentially more worrisome—issues loom. Does the existence of weapons of unknowable potency increase or decrease the likelihood of future conflict?
Cyber warfare is not, I think, what they mean here. Securing access to calculators that run important things is important. (Usually easily done by unplugging the ethernet cable. In computer-boogey-man movies, no one ever thinks of shutting the power off at the substation.) The computer can only do what it’s told, and if somebody codes “Build a weapon like this”, it can do it. In bits. Building that weapon in steel still has to be done, though, and at that point we’re back at the regular arms race.
There are two real threats from statistical models, and one of them is not computers “learning”. The first is storage. You heard me: storage. Schmidt must have written this:
Google Home and Amazon’s Alexa are digital assistants already installed in millions of homes and designed for daily conversation: They answer queries and offer advice that, especially to children, may seem intelligent, even wise. And they can become a solution to the abiding loneliness of the elderly, many of whom interact with these devices as friends.
The more data AI gathers and analyzes, the more precise it becomes, so devices such as these will learn their owners’ preferences and take them into account in shaping their answers. And as they get “smarter,” they will become more intimate companions. As a result, AI could induce humans to feel toward it emotions it is incapable of reciprocating.
To a small extent, this is true. But statistical modeling will top out, unless it is continuously augmented by human input. Real Turing tests can only be passed in trivial situations, or by those who want them to be passed.
Anyway, there’s a threat, all right. Google and Amazon employees listening to your conversations, storing them, and reporting them to authorities. Which we know they already do. Call this AI if you like, but self-bugging is a better term.
Governments and companies already store your cell phone data, which tells where you were and what you were doing for most of the day. We already know what happens online. Facial recognition (more curve fitting) is error prone, but storage is easy.
There will be nowhere to hide. Unless you give up your toys.
The second threat is spiritual.
AI will make fundamental positive contributions in vital areas such as health, safety, and longevity.
Still, there remain areas of worrisome impact: in diminished inquisitiveness as humans entrust AI with an increasing share of the quest for knowledge; in diminished trust via inauthentic news and videos; in the new possibilities it opens for terrorism; in weakened democratic systems due to AI manipulation; and perhaps in a reduction of opportunities for human work due to automation.
This is overwrought, except for the bit about deep fakes. You won’t be able to trust what you see. As long as people remember that, it’s an improvement. They won’t.
The other thing is that people already believe it’s the computers doing the thinking, not men. No computer has ever thought like a man, and no computer ever will. “I should do this, because AI told me to” is spiritual death.
Chess and poker can be programmed because the rules are exact or approximately so. Knowing how many times to visit Korea can’t be, because there are no fixed rules. Somebody can make some up, sure, and “optimize” decisions based on these made up rules. But unless those rules exactly match reality, which they won’t, decisions will likely be worse, not better.
Meanwhile, people will come to rely on the “expertise” of statistical models too much.