The Ethics Of Precrime


This is Part II of our discussion of predicting individual crime. See Part I.

The moral question is this: should an authority take action against you if an algorithm spits out a sufficiently high probability that you will commit a crime or some other heinous or immoral act? If the answer is yes, then good algorithms must be sought. If not, then formal algorithms should be eschewed.

There are two considerations: the accuracy of the algorithm and the actions taken against you. Accuracy has two dimensions: predicting truly that you will sin (to use a shorthand term) and predicting truly that you won’t; inaccuracy is the opposite of these. Preventative actions run the gamut from verbal admonition to fines to incarceration to whacking.
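To fix ideas, here is a minimal sketch (in Python, with entirely invented counts) of those two dimensions; “sensitivity” and “specificity” are the standard statistical names for them, not terms used elsewhere in this post.

    # Two dimensions of accuracy, with made-up counts for illustration only.
    true_positives = 80     # flagged by the algorithm, and would truly have sinned
    false_negatives = 20    # not flagged, but would have sinned
    true_negatives = 900    # not flagged, and would not have sinned
    false_positives = 100   # flagged, but would not have sinned

    # "Sensitivity": how often the algorithm truly predicts you will sin.
    sensitivity = true_positives / (true_positives + false_negatives)   # 0.80
    # "Specificity": how often it truly guesses you won't.
    specificity = true_negatives / (true_negatives + false_positives)   # 0.90

    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")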

We all already agree that preventative actions in the face of presin or precrime are good. Consider a mother learning her daughter will vote for Hillary. The mother uses an informal algorithm, which takes as input some of her daughter’s past behavior and, importantly, her daughter’s own promise of her future behavior. Using these data the mother forms a judgement of the (non-numerical) likelihood of her daughter’s act. Then the mother, applying emotional force, forbids her daughter from so acting.

If you don’t like the voting example, substitute taking drugs, seeing an iffy boy, skipping homework and on and on. Parents know their offspring well and parental judgments are usually, but certainly not always, correct about what their children will do, especially when the children say they are going to commit some act.

Likewise, parents also usually correctly know when their kids are not going to commit some act. There is a sharp asymmetry here. What acts are the children not going to commit? These are infinite, and no parent can start the day listing all the various acts her child might not commit. Acts must thus be brought into focus somehow. They must be made official, trackable, as it were, perhaps because of the child’s past behavior. This is why false negatives, if and when they are discovered, are shocking to parents, because the act was not in the parents’ thoughts. “I never even thought she’d do that!”

Family situations have high accuracy and have preventative actions tailored to the enormity of the act. Grounding a child or verbally warning her are small things, and these situations are kept within the family; they are not made official. Yet parents err, and children know it. The daughter above might have only been trying to get her mother’s goat by promising to vote badly, but the child still ends up (say) grounded. The daughter will resent this mistaken restriction of her liberty. But the daughter is likely to forgive her mother because she knows her mother has her best interests at heart; and it is true, the mother does have her daughter’s and the family’s best interests in mind.

Everything in these examples extrapolates to governmental algorithms and governmental preventative actions in the face of precrime.

A ravenous cannibal was imprisoned for his past crimes. He has sworn that when free he will kill and eat again, just as he has done every other time he was at liberty. Yet his sentence ends at midnight tomorrow. Do we, i.e. society, free him, or should we restrict him? Here the magnitude of the future crime and the certainty we have in its occurrence outweigh the potential error we might make were we to set the man free and he (say) turned vegetarian.

This is not an unusual situation, and indeed we have many programs to restrict the liberty of previously convicted criminals because in part it is recognized these people are likely recidivists. We have criminals report to parole officers, or insist they register on lists or at the sheriff’s office, or we say they must regularly see counselors, or we don’t let them hold certain jobs, and so forth. This is (in part) because of precrime, yet where the future acts are left somewhat vague. We do this not so much because we have the criminals’ best interests at heart, but because we are concerned to protect ourselves. Significantly, some portion of these restrictions are seen as punishment for past crimes.

There will be many false positives in these programs, i.e. at least some criminals whose behavior is restricted will not commit new crimes. Yet we accept these errors, and the accompanying costs of these errors, because it is recognized that without the restrictions greater costs to society will be realized, and because of the continuing-punishment angle. False negatives will be somewhat rarer, but if and when they are discovered there will be no shock, because the list of infractions is in the collective mind.

The accuracy of the algorithms used in restricting previously convicted criminals is high. But what of algorithms applied to screening ordinary citizens? Consider this recent news story:

…China wants to fight crimes before they happen. They want to know they’ll happen before they’re planned—before the criminal even knows he’s going to be part of them. Bloomberg Business reported that the Communist Party “has directed one of the country’s largest state-run defense contractors, China Electronics Technology Group, to develop software to collate data on jobs, hobbies, consumption habits, and other behavior of ordinary citizens to predict terrorist acts before they occur.”

The Chinese government wants to know about everything: every text a person sends, every extra stop they make on the way home. It’s designed for dissidents, but it means that they’ll know every time a smoker buys a pack of cigarettes, how much gas a car owner uses, what time the new mom goes to bed, and what’s in the bachelor’s refrigerator.

There is (Orwellian) data, there is a definition of the criminal act, albeit one unfamiliar to Western minds (to the Chinese government terrorism is very broad), and there is an official algorithm which ties all these together. What’s missing is a direct statement of the restrictions imposed given the algorithm spits out a high likelihood of a citizen’s committing a terrorist act. Given that this is China, these restrictions are likely to be harsh and intrusive.

Suppose first the algorithm reaches or exceeds the accuracy of parents judging their natural children; suppose even the algorithm is flawless. Now I say “flawless”, but saying it does not make the statement completely believable, because we have heard claims of certainty many times and (most of us) have formed the judgement that it is an advertising word. “Flawless” in advertising means “somewhat but far from certain.” Here I mean utter certainty. So suppose God Himself tells us whether any person will commit a crime. Should we restrict?

God Himself knows whether any person will commit an evil act, yet God, except in miraculous circumstances, does not stop individuals. We do stop them (if we can), however, as shown above; we agree that restrictions in the face of some precrimes or presins are good. This theological asymmetry demands an answer, but as I don’t know what it is, and exploring it would take us too far afield, let’s stick with human actions.

If the algorithm truly is perfect, then restrictions are good. What the restrictions are is another question. Consider a verbal warning. The precriminal, in much the same way a mother might lecture her daughter, could be shown the algorithm’s output; he’d be told of its inerrancy and informed of the punishment that awaits him were he to commit the eventual crime. There is little to object to in this, except that the precriminal becomes a perpetual official “person of interest”. Even if the perfect algorithm predicts the person will never commit another crime, officialdom would give in to the temptation to treat this citizen differently.

Again, this happens even if all acknowledge the algorithm’s inerrancy, because humans are weak. In any case, tacit in this example is that the algorithm does not take as input the precriminal’s knowledge of the algorithm’s prediction; or, rather, it assumes the crime might not be committed were the precriminal to be made aware of the precrime. The opposite may be true. Jesus used an inerrant algorithm to predict Peter would deny him thrice before the cock crowed. Peter was made aware of his presin. “I will never disown you,” he said when made aware. Yet he still committed the sin. We’re back to theology because this is a perfect example of an inerrant prediction tied with a verbal warning; this warning was the only restriction placed on Peter’s behavior.

We might call this an unconditional prediction. The event was foreordained; it was going to happen no matter what; it was predestined. The warning thus served no purpose in changing history’s course with regard to this sin; and it did not modify the behavior of Peter (with respect to the presin) before the sin. But, oh my, did it have an effect afterwards. The point is, in this rarest of circumstances no action, even physical restraint, taken by any authority (even the largest!) can stop the crime or sin. Indeed, it would be foolish to try to stop it. Verbal admonitions are it, then, and only for the purpose of teaching the precriminal or presinner a lesson afterwards. This is the proof that in the face of a perfect unconditional algorithm, the only action worth taking is advisory.

With a perfect conditional algorithm, one which allows the chance that the crime won’t be committed were the precriminal made aware of the algorithm’s prediction (a prediction which, I emphasize, is not fed back into the algorithm itself, so that no logical paradox arises), it seems that actions tailored to the enormity of the crime and its prevention would be allowable. If no action were taken, because the algorithm is perfect, we know with utter certainty the crime will be committed. But what power would be used to stop it? We cannot commit an evil to stop another; the ends cannot justify the means. Surely a verbal warning isn’t evil, but is physical restraint? Incarceration? A monetary fine? Something worse? Anything beyond a warning appears to be punishing because, after all, the warning might work. Whose best interests are at heart? Actions taken at the scene of the precrime are warranted: say, officials place a guard around the item that would have been stolen, etc.

Let’s now return to realism, to the land of not just imperfect algorithms, but lousy ones, which are the norm. Predicting individual human behavior in the absence of the kind of data (say) a parent might have has proven to be extraordinarily error prone. There is no good reason to believe the Chinese (or we) will have hit upon some marvelous insight into behavior that improves accuracy markedly. (See this discussion of algorithm accuracy.) The actions taken to prevent crime in the face of poor predictions must thus be further tempered. Anything beyond a warning would be difficult to justify; and even warnings might be too much. Why?

Even imperfect algorithms will have some successes. Suppose, as is likely in China’s case, physical restraint is used to stop precriminals. Some who would truly have committed a crime will thus be barred from evil acts, and so the crime rate will decrease. This decrease will be touted as a good. Yet the decrease will happen so long as the algorithm makes at least one true prediction, regardless of the number of its false positives. The algorithm has only to remove one true precriminal from the street for the crime rate to decrease, even though that same algorithm might falsely advise detaining any number of innocent people, and even though the algorithm misses any number of true to-be criminals.
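A back-of-the-envelope simulation makes the arithmetic plain. Every rate below is invented purely for illustration: a rare crime, a fairly sensitive algorithm, and a modest false-positive rate. The crime count still falls, because some true precriminals are detained, yet nearly all of those detained are innocent.

    import random

    random.seed(1)

    POPULATION = 1_000_000
    BASE_RATE = 0.0005           # hypothetical: 1 in 2,000 would truly commit the crime
    SENSITIVITY = 0.9            # hypothetical: 90% of true precriminals are flagged
    FALSE_POSITIVE_RATE = 0.02   # hypothetical: 2% of the innocent are flagged anyway

    would_offend = detained_guilty = detained_innocent = 0
    for _ in range(POPULATION):
        offender = random.random() < BASE_RATE
        would_offend += offender
        if offender and random.random() < SENSITIVITY:
            detained_guilty += 1        # a true precriminal, taken off the street
        elif not offender and random.random() < FALSE_POSITIVE_RATE:
            detained_innocent += 1      # an innocent person, detained by mistake

    crimes_without = would_offend
    crimes_with = would_offend - detained_guilty   # detained precriminals cannot offend

    print(f"crimes fall from {crimes_without} to {crimes_with}")
    print(f"innocents detained: {detained_innocent}")
    share = detained_innocent / (detained_innocent + detained_guilty)
    print(f"share of detainees who are innocent: {share:.1%}")

Under these assumed rates the touted drop in the crime rate is real, but it says nothing about the far larger number detained wrongly.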

The harm flawed algorithms do, because the innocent are (effectively) held to ransom and also perhaps made perpetual “persons of interest”, is substantial. Preventative actions beyond warnings or wariness on the part of officials can’t be justified, and even warnings are problematic because of the temptation to misuse the algorithm as a judge. Protective actions at the target, however, are fine: stationing an extra visible squad of cops around a target, for instance. These kinds of actions are only indirectly restrictive.

There is no difference between this and screenings for spies, drugs, breast or prostate cancer, or anything else where large numbers of individuals are fed into an algorithm and formal actions by authorities are taken. Much depends on the accuracy of the predictions, but in many of these cases the accuracy is poor or worse, meaning the actions taken often cannot be justified. We’re nowhere near an accuracy great enough to act on precrime, at least for (new) crimes such as murder, rape, theft, terrorism, and the like. I say “near”, which implies we can cut the distance through vigorous effort. I doubt that possibility strongly.

19 Comments

  1. Scotian

    Briggs,

    “We have criminals report to parole officers …”

    As the name implies, this is only for criminals who have been paroled for good behaviour, and it only applies until the end of the original sentence. A better example is the sex offender registry, where the restrictions are often more draconian and arbitrary and applied for life. The abuses of this registry give a good indication of what would happen if pre-crime algorithms were widely used by the guns and badges. It is reminiscent of the use of psychiatry to imprison the politically incorrect in the Soviet Union.

  2. Matt,

    This issue is typical of law enforcement’s approach to interrogation.

    In interrogations, their goal is not to determine the truth. Their goal is to create confessions. There is no glory in determining someone is innocent.

    The glory for a cop is getting a signed confession.

    Their standard, decades-old, interrogation protocol is designed to trick, fool, cadge, badger, intimidate, lie to, and otherwise force a suspect to confess. And it is very, very effective.

    http://www.pbs.org/wgbh/pages/frontline/the-confessions/false-confessions-and-interrogations/

    The pre-crime algorithms are designed for cops. For them, a false positive is no problem whatsoever. Without false positives, they would have much less to do.

    Identifying potential criminals is much, much easier than actually solving crimes.

    This is part-and-parcel with the FBI’s premier “counter-terrorism” program. It consists of an undercover officer or snitch targeting semi-retarded losers. The snitch plants the idea of terrorism in their heads. The snitch then provides the plan. The snitch then provides fake weapons, and sends the schmucks out to do the deed. The schmucks are then busted with “bombs” or “anti-aircraft missiles” (all fake) at the scene. Makes great headlines, but totally useless. This just takes the pre-crime identification to its next logical step–empowering the pre-criminal to commit a fake crime so he can be arrested for something other than bad thoughts.

  3. Sorry, I disagree completely with parental judgment about their children. Parents are often the poorest judges because they are emotionally involved and invested. If this is where governmental algorithms are going, this is very, very bad.

    I definitely agree with the verbal warning idea. We actually do this a lot. Many now feel we can’t fix the USA; it’s too broken. But the admonitions continue anyway. Why? Because the admonitions make an indelible impression on people and are useful after the fact to put back together the broken society. Without them, recovery and behaviour change take longer.

    The show “Law and Order” once had an episode on this, where experts testified an adolescent was bad and would always be bad. The kid finally just shouted out to jail him—he could see he was an awful person and should not be allowed in society. That was not what was intended (defense was going for “it’s not his fault”), but it was the outcome. I was surprised TV would even go there in a program.

    Scotian mentions the sex offender registry. Besides being very, very misleading in what it represents (there does not seem to be any objective criteria for who ends up on it—there’s no “test” for the likelihood of repeating the offense), it leads cops and others to look at the registry when a crime is committed, ignoring the fact that many, many sex crimes are not reported and the perpetrators roam freely, ignored by society because they are not on the list. Background checks give schools and day cares a false sense of security, because offenders often are not caught and can easily be hired by the school or day care even with a background check. It’s more a legal “not my fault” move than protection of society. It does serve to create a group which can be hated and discriminated against, which may be part of the appeal.

    If algorithms can show criminal behaviour coming, then they also should be able to show genius, success, etc. What we end up with is the Brave New World caste system. Something Americans claim they do not want. Except maybe they really do…..

  4. John B()

    Sheri:

    How scary was “The Bad Seed”?

    Or the updated version :
    The very young and precocious Tom Riddle
    when Professor Dumbledore went to meet him at the orphanage?

  5. Steve E

    Prevention of the pre-crime is indeed a sticky issue. The algorithm would have to distinguish between premeditated crimes and crimes of opportunity. In the case of crimes of opportunity, prevention would require preventing the pre-criminal from finding himself in the pre-crime situation/opportunity (something which might not happen naturally anyway). So, in other words, keep the pre-criminal away from the opportunity and no actual crime will occur.

    Premeditated crimes are a different kettle of fish. In this case, the pre-crime is almost entirely within the pre-criminal and the opportunity is most often created by him. Here the pre-criminal must be kept away from his own nature. How do you do that?

  6. Ray

    “The only power any government has is the power to crack down on criminals. Well, when there aren’t enough criminals, one makes them. One declares so many things to be a crime that it becomes impossible for men to live without breaking laws.”
    Ayn Rand

  7. That information about China is chilling.
    I have never read the Philip K. Dick book but have had several conversations about the film “Minority Report,” in which you can see the questions here in action. If you COULD predict crimes before they happen, thus saving people from murder, rape, etc., SHOULD you? If you were certain that you’d be right most of the time, but would condemn some innocent people every now and then, should you still do it? What if you were right ALL the time, but the system could be rigged by some people, or even just one person? When, if ever, would you be obliged to let people be killed, raped, and assaulted? When, if ever, would you be obliged to prevent it?
    A fall series based on the movie did not do well but tried to explore the issues using some of the same characters. It quickly devolved into a police show, because once the writers decided that despicable people would be doing despicable things, there isn’t a good way to show (in weekly series fiction) that good people should not attempt to stop them — or at least, they didn’t find one.
    I don’t think algorithms could ever really do this. But that does NOT mean that some maniacs or totalitarian governments (see my first sentence) won’t decide that they can.

  8. BrianH

    There are many who would make “climate-denialism” a crime, with a few even calling for the death penalty. I’m sure Briggs would be caught in such an algorithm even were he to never write or say another word about the subject.

  9. John B (): I am not familiar with either reference. (The second one is from Harry Potter, right?). Sorry.

  10. Ken

    RE: “The moral question is this: should an authority take action against you if an algorithm spits out a sufficiently high probability that you will commit a crime or otherwise heinous or immoral act?”

    Isn’t this premature to debate, or even ponder (except as a script sequel to a movie starring T. Cruise)? After all, some of the most sophisticated algorithms cannot accurately predict which team will win a sporting event, which horse will win the race, etc., or even which population demographic will purchase what proportion of whatever product.

    There’s a helluva lot more money invested in figuring such things out (and correspondingly much more data available to analyze & crunch thru) than to figure out who might commit some crime (heinous or otherwise).

    In other words, even where the available data going into a model is both extensive (arguably even “comprehensive”) & high quality, “garbage out” is all too common.

    Garbage in will help ensure garbage out. Still, occasionally some garbage input (regardless of whether the model is garbage or not) will generate a lucky hit…thereby ensuring much more faith & credibility is attached to the input, the output, and the model.

    There’s no debate about it.

    Unless one wants to debate philosophical scenarios…regarding which the movie was much better.

  11. JH

    The moral question is this: should an authority take action against you if an algorithm spits out a sufficiently high probability that you will commit a crime or otherwise heinous or immoral act? If the answer is yes, then good algorithms must be sought. If not, then formal algorithms should be eschewed.

    Taking actions against me according to the assessment results calculated by an algorithm? A definite answer of NO.

    Logically, an answer of no doesn’t imply that formal algorithms should be eschewed, unless taking action against someone is the only purpose of the algorithms. One may want to use an algorithm for precautionary actions such as tighter monitoring, though one may argue that precautionary actions are also “actions against someone.”

    Who still tells their child of age 18 or older whom to vote for and chastises them for a disagreeable choice?

    God Himself knows whether any person will commit an evil act, yet God, except in miraculous circumstance, does not stop individuals.

    How about an example of such a miraculous circumstance? If such an example exists, does this mean that your God is able to stop evil but unwilling to stop all evil?

  12. Clay Marley

    The US already has a pre-crime system. It’s called the “no fly list”. If you’re on this list, the gov’t thinks you are so evil, so dangerous, that given the chance you’d commit suicide and kill hundreds of people with you.

    So you can’t get on a plane. Today. Tomorrow who knows. Obama already pointed out that you could buy a gun. So why stop at planes? Probably only because that’s all the gov’t thinks it can get away with. The system will eventually become a system of punishing people the gov’t doesn’t like by restricting their access, movements, jobs, speech, purchases, schooling, and whatever else they can think of.

    Which leads to the first moral question. Creating an algorithm that can predict when someone will behave wrongly presupposes the creator of that algorithm knows the difference between right and wrong. When a gov’t rejects an objective foundation to morality then the gov’t is free to develop its own definition of right and wrong. And usually, right is what protects and grows the government. And the end usually justifies the means.

  13. Mactoul.

    “Everything in these examples extrapolates to governmental algorithms and governmental preventative actions in the face of precrime.”

    On the contrary NOTHING in the parental example extrapolates to govt algorithms. Firstly, parents do not use any algorithm. The term “informal algorithm” is an oxymoron.
    The parents know their children. The Govt or its algorithm knows nobody.

  14. Mactoul.

    The term “algorithm” is used very loosely, indeed here it entirely loses its meaning when the author writes
    “Jesus used an inerrant algorithm to predict Peter would deny him thrice before the cock crowed. ”

    Is this an orthodox interpretation of things? Or even a meaningful interpretation?
    Usually, it is said that God is all-knowing and existing beyond time, all past, present and future are equally present to God. Thus, God does not foresee but simply sees the future.
    So
    a) Jesus simply saw that Peter would deny him. He did not predict the way people do when they predict, informally, what other people would do.
    b) Even people do not use algorithms to predict other people’s actions. So, even if Jesus predicted Peter’s action like a man, he did not use any algorithm, inerrant or otherwise.

  15. Greg Cavanagh

    I believe Pol Pot conducted his interrogations of civilians under the assumption of a pre-crime (popular uprising to topple the government). Only 6 million people were found guilty of that particular pre-crime.

    And how can a court (or jury) convict a person of a crime they have not yet done?

    The no fly list is a very interesting example.

  16. Mactoul.

    “Jesus used an inerrant algorithm ”

    Did He? Which algorithm? How do we know that he used an algorithm to predict the denial of St Peter?

    I thought the traditional understanding was that Jesus, being God, knew future. He didn’t need to predict–algorithmically or otherwise. But perhaps theology has advanced now.

  17. kneel

    Let us suppose, as per the main post, that the system is developed and is 100% accurate. Let us further suppose that I am predicted to perform some evil or immoral act, and am imprisoned for this – let us say I am in jail for 10 years because of this prediction.
    When I get out, I do indeed perform the predicted act.
    Will I then be jailed for it? Have I not already served my penalty for the act? Wouldn’t punishing me for the real act after punishing me for the likelihood of performing the same act amount to punishing me twice for the same crime?
    It is indeed much more complicated than it first appears.
