Proof Cause Is In The Mind And Not In The Data

Pick something that happened. Doesn’t matter what it is, as long as it happened. Something caused this thing to happen, which is to say, something actual turned the potential (of the thing to happen) into actuality.

Now suppose you want to design a clever algorithm, as clever as you like, to discover the cause of this thing (in all four aspects of cause, or even just the efficient cause). You’re too busy to do it yourself, so you farm out the duty to a computer.

I will take, as my example, the death of Napoleon. One afternoon he was spry, sipping his grand cru and planning his momentous second comeback; the next morning he was smelling like week-old brie. You are free to substitute an event of your own liking.

Plug into the computer, or a diagram in the computer, or whatever you like, THE EVENT.

Now press “GO” or “ACTIVATE” or whatever it is that launches the electronic beastie into action.

What will be the result?

If you said nothing, you have said much. For you have said your “artificial intelligence” algorithm cannot discern cause. Which is saying a bunch. Indeed, more than a bunch, because you have proven lifeless algorithms cannot discover cause at all.

End of proof.

“Very funny, Briggs. Most amusing. But you know you left out the most important element.”

I did? What’s that?

“The data. No algorithm can work without data. It’s the data from which the cause is extracted.”

Data? Which data is that?

“Why, the data related to the event your algorithm is focused on.”

Say, you might be right. Okay, here’s some data. The other day I was given a small bottle of gin, in the shape of a Dutch house in Delft blue. You weren’t supposed to drink it, but I did. In my defense, I wasn’t told until after I drank it that I shouldn’t have.

“What in the name of Yorick’s skull are you talking about? That’s not data. You have to use real data. Something that’s related to your event. What’s this Dutch gin house have to do with that?”

Well, you know what Napoleon did in Holland. And what does my choice have to do with anything? We want the algorithm to figure out the cause, not me. Shouldn’t it be the business of the algorithm to identify the data it needs to show cause?

“I’m not sure. That’s a tall order.”

An infinite one, or practically so. Everything that’s ever happened, in the order it happened, is data. That’s a lot of data. That tall order is thus not only tall, but impossible, too, since everything that’s ever happened wasn’t, for the most part, measured. And even if it were (by us men), no device could store all this data or manipulate it.

“Of course not! Why in the world are you bringing in infinity and all this other silly business? You can be obtuse, Briggs. No, no. The data we want are those measurements related to the event you picked.”

Related? But don’t you mean by related those measures which are the cause of the event, or which are not the direct causes, but incidental ones, perhaps measures caused by the event itself, or measures that caused the cause of the event, and that sort of thing? Those measures which a prominent writer called in his award-eligible book (chap. 9) “the causal path”?

“They sound like it, yes.”

Then since it is you who have partial or full knowledge of the full or partial cause of the event, or of other events in the causal path of the event itself, isn’t it you and not the algorithm that is discerning the cause? Any steps you take to limit the data available to the algorithm in effect make the algorithm’s finding of cause (or correlation) a self-fulfilling prophecy. Not putting in my gin means you are doing all the work, not the algorithm. It means you have figured out the cause and not the algorithm. That puts the cause in your mind and not the data, doesn’t it?

“Perhaps.”

The best any algorithm can do is to find prominent correlations, which may or may not be directly related to the cause itself, using whatever rules of “correlation” you pre-specify. Your algorithm is doing what it was told, in the same way as your toaster. These correlations will be better or worse depending on your understanding of the cause, and therefore of what “data” you feed your algorithm. The only way we know these data are related to the cause, or are the cause, is because we have a power algorithms can never have, which is the extraction and understanding of universals.

“I guess.”

And all that is even before we consider predictive ability or, more devastating to your cause (get it? get it?), under-determination, Duhem, Quine, and all that. The idea being that even if we think we have grasped the correct universal, and have indeed used our algorithm to make perfect predictions, we may be in error, and another, better explanation may be the truly true cause.

“That seems to follow.”

Then it also follows that the only reason we think algorithms can find cause is that we forgot the cause of causes, or rather the cause of comprehending causes, which is our own minds.

Note that this explanation, which is a proof, does not explain why most use algorithms in the hope of finding “causes” of repeated events, or events which are claimed to be repeated. That’s a whole ’nother story, which involves, at the end, abandoning the notion that probability is a real thing.
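
To make the toaster point concrete, here is a minimal sketch (hypothetical Python; the column names and figures are invented for illustration): the algorithm ranks correlations among whichever measurements a human hands it, and nothing in the code knows, or can know, what the columns mean.

    # A hypothetical sketch, not a real discovery engine: the "algorithm"
    # can only rank pairwise correlations among columns a person chose.
    import itertools
    import statistics

    def pearson(xs, ys):
        """Plain Pearson correlation of two equal-length lists."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    def rank_correlations(data):
        """data: dict of column name -> list of numbers, chosen by a person.
        Returns pairs sorted by |r|. Nothing here knows what a column
        means, which is the point."""
        pairs = itertools.combinations(data, 2)
        scored = [(a, b, pearson(data[a], data[b])) for a, b in pairs]
        return sorted(scored, key=lambda t: -abs(t[2]))

    # Whether "arsenic_in_wallpaper" appears at all is our decision, made
    # outside the algorithm; the numbers below are made up.
    data = {
        "arsenic_in_wallpaper": [1.0, 2.0, 3.0, 4.0],
        "wine_consumed":        [2.0, 1.0, 4.0, 3.0],
        "health":               [4.0, 3.0, 2.0, 1.0],
    }
    for a, b, r in rank_correlations(data):
        print(f"{a} ~ {b}: r = {r:+.2f}")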

16 Comments

  1. DAV

    Cause means little beyond the ability to predict that when X happens you get Y. If it isn’t that, then what is ’cause’? The ‘reason’ for Y from X? What does that really mean? How does one establish this ‘reason’? An algorithm is merely a procedure. How is the establishment of a ‘reason’ not a procedure? Why isn’t it an algorithm? Is it a chance happening or is method involved? Are you under the impression that algorithms can only be definite? “When you see something scary, run or fight!” is an open-ended procedure — an open-ended algorithm.

  2. The ‘scientist’ believes in man-made global warming. So he gathers data he believes shows man-made global warming, ignoring data that disproves his theory. He even goes so far as to alter data that inconveniently doesn’t fit his assumptions. He then devises an algorithm that, when given his hand-picked and carefully altered data, produces the output he desires. The fact that it produces a nearly identical output when fed random data is ignored. The “scientist” then proudly proclaims man-made global warming is incontrovertible fact, after deleting the algorithm and all the data so they can not be examined by others.

    His political allies then proudly proclaim “The science is settled!” Then they initiate a witch hunt of “deniers”.

  3. Ye Olde Statistician

    The frame problem.

    from “Places Where the Roads Don’t Go” (in Captive Dreams)
    Original Sim
    Two weeks later we visited Kyle in St. Louis. He was still Vaporetti back then – vapors, cloud computing, get it? – and still pushing the eccentricity of “Silicon Prairie.” I gave the seminar on adjoint functors to his staff. I think two of them understood it and one of them may have eventually made something of it.
    Kyle wanted to apply the theorem to the frame problem. After each action, the AI has to update its “inventory” of what the world is like. But how does it know which items to update? The “common sense law of inertia,” also known as the “let sleeping dogs lie” strategy, is for the system to ignore all states unaffected by the action. The problem is: How many non-effects does an action have? Using the Harris-MacKenzie Theorem, that infinite set might be compactified in practice to a finite set, thus reducing response times.
    ###

    [Jared] “Humans – most animals, really – have intention. We seek out sensory stimulations and select among them to guide our decisions. We don’t just see, we look.”
    “You make it sound like a baleen whale,” I interjected, “seining the sensory ocean for the krill of information.”
    Jared laughed and slapped the table. “That’s good, Mac! I’m going to steal that for my lectures.”
    “Well,” said Kyle. “You do have a talent for putting your finger on the key points. It’s the frame problem again, isn’t it? How does my AI know which inputs are relevant and which it can ignore?” He stood and began pacing. “This is frustrating,” he admitted. “And the Turing test is just the first step. To get the electronic computer to mimic the performance of the human computer…”
    Jared shook his head. “It’s not that simple.”
    A multitude of responses chased themselves across Kyle’s face – impatience, irritation, dismissal. But then he folded his hands under his chin as he often did when he turned thoughtful. “I hadn’t thought I was describing something simple.”
    Jared smiled. “Visit me in Princeton, and I’ll show you.”
    #
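
    To illustrate the “let sleeping dogs lie” strategy in the excerpt, a hypothetical Python sketch (the action and state names are invented): after an action, only the states the action’s hand-written model lists as effects are touched; everything else is carried forward unexamined — by stipulation, not by inference.

        # Hypothetical sketch of the "common sense law of inertia":
        # update only the states an action's model says it affects.
        world = {"door_open": False, "light_on": False, "cat_fed": False}

        # Effects are enumerated by a human modeler; the non-effects
        # (everything else) are assumed to sleep undisturbed.
        effects = {
            "open_door": {"door_open": True},
            "flip_switch": {"light_on": True},
        }

        def apply_action(action, world):
            updated = dict(world)
            updated.update(effects[action])  # touch listed effects only
            return updated                   # all other states sleep

        world = apply_action("open_door", world)
        print(world)  # light_on and cat_fed carried forward untouched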

  4. Brad Tittle

    I am a soccer referee for our local soccer clubs. I get to run around the field and absorb part of the pain of sports — making the game fair.

    We study the rules, take tests, and make sure we understand the rules. Then we go out onto the field and ref. Every game I ref is a little different than the previous games. The temperament of the players is different. The moods of the coaches are different. The knowledge of the audience is different. When you ref a boys’ game, it is different than reffing a girls’ game. Reffing U15 is different than reffing U9. The rules can change between age groups, but mostly they stay the same.

    The calls might seem to be mostly the same, but they aren’t quite. Every call is just a skosh different. Sometimes flying arms have to be reined in. Sometimes you sort of want a few more arms to be flying. A perfectly civilized game is nice. Then there are the kids on the field who are a little too timid. I want the timid kids to be a little more aggressive. Where the line is between aggressive and too aggressive differs.

    A person running full speed down the field can be in perfect control. That person can also be out of control. Maybe if computers managed to do all the calculations on the trajectory of every object in the field of view, they would be able to “predict” everything. I am not sure they can do all the calculations. So many of the calculations are variations of running into infinity. Sometimes the out-of-control run is irrelevant. Sometimes it isn’t.

    Some of the automated car systems are attempting to learn from those who drive well. The AI has access to all of the inputs and “watches” how the “good” driver drives. It then has a ‘chance’ to create the ‘model’ that matches the driver.

    In soccer though, I am wanting the kids to be aggressive without being too aggressive. When I teach safe driving, I try to teach “Back the F off” of people who really need to get somewhere fast. There is no way you are going to change that person’s mind. If you want to drive in Southern California though, you better be able to press the gas and get up to speed and be willing to crowd the guy in the lane to let you in. Backing OFF does not work.

    We dance. The dance never ends. The more we automate the dance, the more difficult the dance becomes, because dancing is feeling and requires repeated use. When we get rid of the points of interface, we lose touch with the feeling. The feeling is knowing when to punch the accelerator and when not to. So many systems try to take feeling out and replace it with lidar, radar, sonar, and more, but they keep running into the inversion of the analog hole. Can we fill that hole? Yes… But there will be more holes created when we do.

  5. Brad Tittle

    @YOS — I swear I was pointing at something that resembles your comment.

  6. Ken

    RE: “The other day I was given a small bottle of gin, in the shape of a Dutch house in Delft blue. You weren’t supposed to drink it, but I did. In my defense, …

    … You might have used this as the reason: http://www.youtube.com/watch?v=Aj8uw71Pf4w

    It’s a classic.

  7. Ken

    RE: “Proof Cause Is In The Mind And Not In The Data”

    This is another lengthy missive about correlation-does-not-[necessarily]-mean-cause.

    The problem in the essay, today, is that some implicit assumptions are made — which are often incorrect, and often very significant.

    Sometimes the data does establish proof of cause.

    But what about the myriad of errors made consistent with the essay (where correlation is taken as a basis for establishing cause)?

    Therein lies another implicit assumption: the analyst is making a mistake out of ignorance.

    Too often, the truth is the analyst endorses a particular outcome and the data/analysis are used, willfully irresponsibly and/or with deceit and malice aforethought, to support a particular desired conclusion. The relatively recent “housing bubble” is one such example, where wildly irresponsible investments were made with a mix of emotion and underlying deceitful marketing about the risks of mortgage bonds (author Michael Lewis has a book or two on this, and some articles available on-line). Financial speculative “bubbles” are almost always partly the result of such machinations exploiting emotional reasoning by the patsies that get fleeced. The old but still relevant book, ‘Extraordinary Popular Delusions and the Madness of Crowds,’ is a must-read for anyone playing in finance.

    Briggs consistently ignores overt deceit, presenting a rationale & explanation that has at its heart a presupposition that the analyst is making a mistake susceptible to ‘remediation by education’ rather than addressing the analyst’s use of manipulation and deceit. As such, the recurring educational lessons are doomed to minimal effect; recognizing the elements of a flawed analysis helps, but there are other indicators one ought to be alert to that help identify the use of deceit.

    One tip-off is the use of emotional reasoning, which, while often illogical, often enough has a fundamentally sound internal logic leading to dubious conclusions/actions. Briggs has an example of that: https://www.wmbriggs.com/post/1285/

  8. DG

    Good points there Briggs! Evidently a computer cannot subjectively figure out causes behind events even if it’s programmed to give the right answer to questions about causes. That’s like pointing out that a calculator can only give answers to math problems without any understanding of the math.

  9. Mactoul

    DAV,
    The hypothesis of anthropogenic global warming is that human actions are CAUSING the global temperature to increase.
    Neither the climate scientists nor the skeptics find the claim mysterious.
    Many other causal statements are standard in physics. Matter causes spacetime to curve. Nobody will say the reverse: curvature of spacetime does NOT cause matter.

  10. Pedro Enrique

    This is misleading. Causes are neither in the data nor in the mind, they’re out there in the world. You obviously agree with that, so I’m not sure why the article is titled the way it is. Perhaps you want to emphasize that a causation algorithm would rely on built-in assumptions that only the human mind can apprehend. True, but then you’d have to say that correlation is not in the data either, since it also depends on such assumptions. The algorithm may, for example, return that the correlation between X and Y is 0.5, but this is meaningless without apprehension of mathematical universals or universals that stand for X and Y. Plus, if knowledge of causation is underdetermined by the available evidence, knowledge of correlation *also* is, for it’s always possible that the available evidence is incorrect, etc. In fact, even “data” itself relies on human understanding, since data without interpretation is meaningless from the “point of view of the universe” (the meanings of the symbols representing the data are also underdetermined according to Quine et al.)

    The goal of Pearl’s formal work on causation, if you’re alluding to that, is to make those assumptions explicit (and eventually to test/reformulate them), not to ignore them.

  11. Mactoul

    “When you see something scary, run or fight!”
    is neither a procedure nor an algorithm unless precise instructions are given for resolution of the decision “run or fight”.
    An algorithm is not merely a procedure. To be worthy of the name, algorithms are highly formal procedures. They need to be tightly defined so that a person carrying out the algorithm can do so mindlessly.

  12. Briggs

    Pedro,

    Thanks. Yes, of course, cause is in things. But knowledge of cause, the point of this post, is in the mind and not in the data. And since it’s not in any data, it’s not in any algorithm. Yes, correlations are not in data, either, in the sense that there is no meaning of those correlations in the data. The meaning is external. The idea here is to counter arguments that AI will ever become aware, etc. See the links.

    All the tools Pearl and others use to better understand (complex) cause are nice. Use them. But when they work it’s only because the menial, unthinking labor parts of the understanding of cause have been outsourced to a dumb machine.

    YOS’s point below is also excellent. We can’t have the algorithm take note of only those things that changed, because when the changes take place, and where, aren’t known from the data. So again it’s only by the extraction of universals etc. that we can understand cause.

  13. DAV

    neither a procedure nor an algorithm unless precise instructions are given for resolution of the decision “run or fight”. [Algorithms] need to be tightly defined so that a person carrying out the algorithm can do so mindlessly.

    You forgot determination of ‘scary’. What does that mean? Why should only the selection of the resulting actions need precise definition? Why can’t an algorithm be generic?

    As in:

        ‘see something scary’?
          n: ‘proceed normally’
          y: ‘decide to run or fight’
            run decided:   ‘run away’
            fight decided: ‘fight’

    All of those things in quotes are “to be determined”; may be distinct algorithms in themselves; and may actually change over time or perhaps as a result of experience.

    You are confusing algorithm description with implementation.
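
    Rendered as a hypothetical Python sketch (names and thresholds invented), the description/implementation split looks like this: the outer procedure is fixed, while ‘scary’, ‘decide’, ‘run’, and ‘fight’ are open slots that may be swapped, learned, or be algorithms in their own right.

        # Hypothetical sketch: the algorithm's *description* is fixed;
        # its quoted sub-procedures are plugged in later and may change.
        def react(percept, scary, decide, run, fight, proceed):
            if not scary(percept):        # 'scary' is to be determined
                return proceed()
            if decide(percept) == "run":  # so is 'decide'
                return run()
            return fight()

        # One of many possible implementations of the open slots:
        print(react(
            percept={"teeth": 9, "size": 3},
            scary=lambda p: p["teeth"] > 5,  # invented threshold
            decide=lambda p: "run" if p["size"] > 2 else "fight",
            run=lambda: "ran away",
            fight=lambda: "fought",
            proceed=lambda: "proceeded normally",
        ))  # -> ran away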

  14. Pedro Enrique

    Thank you for the clarification, Briggs
