This is Part II of our discussion of predicting individual crime. See Part I.
The moral question is this: should an authority take action against you if an algorithm spits out a sufficiently high probability that you will commit a crime or some other heinous or immoral act? If the answer is yes, then good algorithms must be sought. If not, then formal algorithms should be eschewed.
There are two considerations: the accuracy of the algorithm and the actions taken against you. Accuracy has two dimensions: correctly predicting you will sin (to use a shorthand term) and correctly predicting you won’t; inaccuracy is the opposite of these. Preventative actions run the gamut from verbal admonition to fines to incarceration to whacking.
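These two dimensions are what statisticians call sensitivity and specificity. A minimal sketch, with counts invented purely for illustration:

```python
# The two dimensions of accuracy, in standard confusion-matrix terms.
# All counts below are invented purely for illustration.
true_positives = 40   # predicted to sin, and did
false_negatives = 10  # predicted not to sin, but did
true_negatives = 900  # predicted not to sin, and didn't
false_positives = 50  # predicted to sin, but didn't

# "Correctly predicting you will sin": sensitivity (true positive rate)
sensitivity = true_positives / (true_positives + false_negatives)

# "Correctly predicting you won't": specificity (true negative rate)
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # 0.80
print(f"specificity = {specificity:.2f}")  # 0.95
```

An algorithm can score well on one dimension and badly on the other, which is why a single headline "accuracy" number hides more than it reveals.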
We all already agree that preventative actions in the face of presin or precrime are good. Consider a mother learning her daughter will vote for Hillary. The mother uses an informal algorithm, which takes as input some of her daughter’s past behavior and, importantly, the daughter’s own promise of her future behavior. Using these data the mother forms a judgement of the (non-numerical) likelihood of her daughter’s act. Then, using emotional force, the mother forbids her daughter from acting.
If you don’t like the voting example, substitute taking drugs, seeing an iffy boy, skipping homework, and on and on. Parents know their offspring well, and parental judgments are usually, but certainly not always, correct about what their children will do, especially when the children say they are going to commit some act.
Likewise, parents also usually correctly know when their kids are not going to commit some act. There is a sharp asymmetry here. What acts are the children not going to commit? These are infinite, and no parent can start the day listing all the various acts her child might not commit. Acts must thus be brought into focus somehow. They must be made official, trackable, as it were, perhaps because of the child’s past behavior. This is why false negatives, if and when they are discovered, are shocking to parents, because the act was not in the parents’ thoughts. “I never even thought she’d do that!”
Family situations have high accuracy and have preventative actions tailored to the enormity of the act. Grounding a child or verbally warning her are small things, and these situations are kept within the family; they are not made official. Yet parents err, and children know it. The daughter above might have only been trying to get her mother’s goat by promising to vote badly, but the child still ends up (say) grounded. The daughter will resent this mistaken restriction of her liberty. But the daughter is likely to forgive her mother because she knows her mother has her best interests at heart; and it is true, the mother does have her daughter’s and the family’s best interests in mind.
Everything in these examples extrapolates to governmental algorithms and governmental preventative actions in the face of precrime.
A ravenous cannibal was imprisoned for his past crimes. He has sworn that when free he will kill and eat again, just as he has done every other time he was at liberty. Yet his sentence ends at midnight tomorrow. Do we, i.e. society, free him, or should we restrict him? Here the magnitude of the future crime and the certainty we have in its occurrence outweigh the potential error of restricting a man who, set free, would (say) have turned vegetarian.
This is not an unusual situation; indeed, we have many programs to restrict the liberty of previously convicted criminals, in part because it is recognized these people are likely recidivists. We have criminals report to parole officers, or insist they register on lists or at the sheriff’s office, or we say they must regularly see counselors, or we don’t let them hold certain jobs, and so forth. This is (in part) because of precrime, yet where the future acts are left somewhat vague. We do this not so much because we have the criminals’ best interests at heart, but because we are concerned to protect ourselves. Significantly, some portion of these restrictions are seen as punishment for past crimes.
There will be many false positives in these programs, i.e. at least some criminals whose behavior is restricted will not commit new crimes. Yet we accept these errors, and the accompanying costs of these errors, because it is recognized that without the restrictions greater costs to society would be realized, and because of the continuing-punishment angle. False negatives will be somewhat rarer, but if and when they are discovered there will be no shock, because the list of infractions is in the collective mind.
The accuracy of the algorithms used in restricting previously convicted criminals is high. But what of algorithms applied to screening ordinary citizens? Consider this recent news story:
…China wants to fight crimes before they happen. They want to know they’ll happen before they’re planned—before the criminal even knows he’s going to be part of them. Bloomberg Business reported that the Communist Party “has directed one of the country’s largest state-run defense contractors, China Electronics Technology Group, to develop software to collate data on jobs, hobbies, consumption habits, and other behavior of ordinary citizens to predict terrorist acts before they occur.”
The Chinese government wants to know about everything: every text a person sends, every extra stop they make on the way home. It’s designed for dissidents, but it means that they’ll know every time a smoker buys a pack of cigarettes, how much gas a car owner uses, what time the new mom goes to bed, and what’s in the bachelor’s refrigerator.
There is (Orwellian) data, there is a definition of the criminal act, albeit one unfamiliar to Western minds (to the Chinese government terrorism is very broad), and there is an official algorithm that ties all these together. What’s missing is a direct statement of the restrictions imposed when the algorithm spits out a high likelihood of a citizen’s committing a terrorist act. Given that this is China, these restrictions are likely to be harsh and intrusive.
Suppose first the algorithm reaches or exceeds the accuracy of parents judging their natural children; suppose even the algorithm is flawless. Now I say “flawless”, but saying it does not make the statement completely believable, because we have heard claims of certainty many times and (most of us) have formed the judgement that it is an advertising word. “Flawless” in advertising means “somewhat but far from certain.” Here I mean utter certainty. So suppose God Himself tells us whether any person will commit a crime. Should we restrict?
God Himself knows whether any person will commit an evil act, yet God, except in miraculous circumstances, does not stop individuals. We do stop them (if we can), however, as shown above; we agree that restrictions in the face of some precrimes or presins are good. This theological asymmetry begs for an answer, but as I don’t know what it is, and exploring it would take us too far afield, let’s stick with human actions.
If the algorithm truly is perfect, then restrictions are good. What the restrictions are is another question. Consider a verbal warning. The precriminal, in much the same way a mother might lecture her daughter, could be shown the algorithm’s output; he’d be told of its inerrancy and informed of the punishment that awaits him were he to commit the eventual crime. There is little to object to in this, except that the precriminal becomes a perpetual official “person of interest”. Even if the perfect algorithm predicts the person will never commit another crime, officialdom would give in to the temptation to treat this citizen differently.
Again, this happens even if all acknowledge the algorithm’s inerrancy, because humans are weak. In any case, tacit in this example is that the algorithm does not take as input the precriminal’s knowledge of the algorithm’s prediction; or, rather, it assumes the crime might not be committed were the precriminal to be made aware of the precrime. The opposite may be true. Jesus used an inerrant algorithm to predict Peter would deny him thrice before the cock crowed. Peter was made aware of his presin. “I will never disown you,” he said when made aware. Yet he still committed the sin. We’re back to theology because this is a perfect example of an inerrant prediction tied with a verbal warning; this warning was the only restriction placed on Peter’s behavior.
We might call this an unconditional prediction. The event was foreordained; it was going to happen no matter what; it was predestined. The warning thus served no purpose in changing history’s course with regard to this sin; and it did not modify Peter’s behavior (with respect to the presin) before the sin. But, oh my, did it have an effect afterwards. Point is, in this rarest of circumstances no action, even physical restraint, taken by any authority (even the largest!) can stop the crime or sin. Indeed, it would be foolish to try to stop it. Verbal admonitions are it, then, and only for the purpose of teaching the precriminal or presinner a lesson afterwards. This is the proof that in the face of a perfect unconditional algorithm, the only action worth taking is advisory.
With a perfect conditional algorithm, which allows the chance the crime won’t be committed were the precriminal made aware of the algorithm’s prediction (a prediction which, I emphasize, is not known to the algorithm itself, so that no logical paradox arises), it seems that actions tailored to the enormity of the crime and its prevention would be allowable. If no action were taken, then because the algorithm is perfect, we know with utter certainty the crime will be committed. But what power would be used to stop it? We cannot commit an evil to stop another; the ends cannot justify the means. Surely a verbal warning isn’t evil, but is physical restraint? Incarceration? Monetary fine? Something worse? Anything beyond a warning appears punishing because, after all, the warning might work. Whose best interests are at heart? Actions taken at the scene of the precrime are warranted: say, officials placing a guard around the item that would have been stolen, etc.
Let’s now return to realism, to the land not just of imperfect algorithms but of lousy ones, which are the norm. Predicting individual human behavior in the absence of the kind of data (say) a parent might have has proven extraordinarily error prone. There is no good reason to believe the Chinese (or we) will hit upon some marvelous insight into behavior that improves accuracy markedly. (See this discussion of algorithm accuracy.) The actions taken to prevent crime in the face of poor predictions must thus be further tempered. Anything beyond a warning would be difficult to justify; and even warnings might be too much. Why?
Even imperfect algorithms will have some successes. Suppose, as is likely in China’s case, physical restraint is used to stop precriminals. Some who would truly have committed a crime will thus be barred from evil acts, and so the crime rate will decrease. This decrease will be touted as a good. Yet the decrease will happen so long as the algorithm has at least one true prediction, regardless of the number of its false positives. The algorithm has only to remove one true precriminal from the street for the crime rate to decrease, even though that same algorithm might falsely advise detaining any number of innocent people, and even though it misses any number of true to-be criminals.
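The arithmetic behind this asymmetry is worth making explicit. A sketch with invented numbers, assuming a rare crime and an algorithm that would count as accurate by everyday standards:

```python
# Hypothetical screening numbers, invented purely for illustration.
population = 1_000_000
true_precriminals = 100       # a rare crime: base rate of 0.01%
sensitivity = 0.99            # flags 99% of true precriminals
false_positive_rate = 0.01    # wrongly flags 1% of the innocent

flagged_guilty = sensitivity * true_precriminals
flagged_innocent = false_positive_rate * (population - true_precriminals)

# The crime rate falls: 99 true precriminals are off the street...
crimes_prevented = flagged_guilty

# ...yet the flagged group is overwhelmingly innocent.
share_innocent = flagged_innocent / (flagged_guilty + flagged_innocent)

print(f"crimes prevented: {crimes_prevented:.0f}")            # 99
print(f"flagged but innocent: {flagged_innocent:.0f}")        # 9,999
print(f"fraction of flagged who are innocent: {share_innocent:.1%}")
```

With these assumed rates, roughly ninety-nine of every hundred people detained would have committed no crime, yet the headline crime rate still drops, and the drop is what gets reported.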
The harm flawed algorithms do, because the innocent are (effectively) held hostage and also perhaps made perpetual “persons of interest”, is substantial. Preventative actions beyond warnings or wariness on the part of officials can’t be justified, and even warnings are problematic because of the temptation to misuse the algorithm as a judge. Protective actions at the target, however, are fine: stationing an extra visible squad of cops around a target, for instance. These kinds of actions are only indirectly restrictive.
There is no difference between this and screenings for spies, drugs, breast or prostate cancer, or anything else where large numbers of individuals are fed into an algorithm and formal actions are taken by authorities. Much depends on the accuracy of the predictions, but in many of these cases the accuracy is poor or worse, meaning the actions taken often cannot be justified. We’re nowhere near an accuracy great enough to act on precrime, for (new) crimes such as murder, rape, theft, terrorism, and the like. I say “near”, which implies we could cut the distance through vigorous effort. I strongly doubt that possibility.