*Read Part I*

*We’re taking a small digression to answer a question put by Deborah Mayo in Part I, pointing to this article on her site. Mayo’s site should be on everybody’s reading list, because she offers good critiques of orthodox Bayesian statistics (which I don’t follow; we’re logical probabilists here), and because many well-known statisticians comment on her articles. The material below is worth struggling through to see the kinds of arguments which exist over foundations.*

Loosely quoting Mayo, a hypothesis (proposition) h is confirmed by x (another proposition) if $latex \Pr(h|xd) > \Pr(h|d)$, where d is any other proposition (this will make sense in the example to come). The proposition is disconfirmed if $latex \Pr(h|xd) < \Pr(h|d)$. If $latex \Pr(h|xd) = \Pr(h|d)$ then x is irrelevant to h. Lastly, h' means "h is false," "not h," or the complement of h. Mayo (I change her notation ever-so-slightly) says "a hypothesis h can be confirmed by x, while h' is disconfirmed by x, and yet $latex \Pr(h|xd) < \Pr(h'|xd)$. In other words, we can have $latex \Pr(h|xd) > \Pr(h|d)$ and $latex \Pr(h'|xd) < \Pr(h'|d)$ and yet $latex \Pr(h|xd) < \Pr(h'|xd)$." In support of this contention, she gives an example due to Popper (again changing the notation) about dice throws. First let d = "a six-sided object which will be tossed and only one side can show and with sides labeled 1, 2, ...", i.e. the standard evidence we have about dice.

Consider the next toss with a homogeneous die.

h: 6 will turn up

h’: 6 will not turn up

x: an even number will turn up.

The probability of h is raised by information x, while h’ is undermined by x. (Its probability goes from 5/6 to 4/6.) If we identify probability with degree of confirmation, x confirms h and disconfirms h’ (i.e., $latex \Pr(h|xd) > \Pr(h|d)$ and $latex \Pr(h’|xd) < \Pr(h’|d)$). Yet because $latex \Pr(h|xd) < \Pr(h'|xd)$, h is less well confirmed given x than is h'. (This happens because $latex \Pr(h|d)$ is sufficiently low.) So $latex \Pr(h|xd)$ cannot just be identified with the degree of confirmation that x affords h.

I don’t agree with Popper (as usual). Because $latex \Pr(h|d) = 1/6 < \Pr(h|xd) = 2/6$ and $latex \Pr(h'|d) = 5/6 > \Pr(h’|xd) = 4/6$. In other words, we started believing in h to the tune of 1/6, but after assuming (or being told) x, h becomes *twice* as likely. And we start by believing h’ to the tune of 5/6, but after assuming x, this decreases to 4/6, or 20% lower. Yes, it is still true that h’ given x and d is more likely than h, but so what? We just said (in x) that we saw a 2 or 4 or 6: h’ covers two of these possibilities and h only one.
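For concreteness, all of these toy numbers can be checked by brute enumeration. Here is my own sketch in Python (the `pr` helper and the event names are mine, not anything from Mayo or Popper):

```python
from fractions import Fraction

# The evidence d: a six-sided die, exactly one face shows, faces
# labeled 1..6, each equally likely.
faces = range(1, 7)

def pr(event, given=lambda f: True):
    """Pr(event | given, d) by counting equally likely faces."""
    cond = [f for f in faces if given(f)]
    return Fraction(sum(1 for f in cond if event(f)), len(cond))

def h(f):  return f == 6       # h: a 6 turns up
def h_(f): return f != 6       # h': a 6 does not turn up
def x(f):  return f % 2 == 0   # x: an even number turns up

print(pr(h))       # Pr(h|d)   = 1/6
print(pr(h, x))    # Pr(h|xd)  = 1/3, i.e. 2/6
print(pr(h_))      # Pr(h'|d)  = 5/6
print(pr(h_, x))   # Pr(h'|xd) = 2/3, i.e. 4/6
```

The counting makes Popper's point and mine visible at once: x doubles the probability of h, lowers that of h', and still leaves h' the more probable of the two.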

“Does x (in the presence of d) confirm h?” is a separate question from “Which (in the presence of x and d) is the more likely, h or h’?” The addition of x to d “confirms” h in the sense that h, given the new information, is now more likely.

No problems so far, *n’est-ce pas*? And Mayo recognizes this in quoting Carnap who noted “to confirm” is ambiguous. It can mean (these are my words) “increases the probability of” or it might mean “making it more likely than any other.” Well, whichever. Neither is a difficulty for probability, which flows perfectly along its course. The problems here are the ambiguities of language and labels, not with logic.

No real disagreements yet. Enter the so-called “paradox of irrelevant conjunctions.” The idea is that if x “confirms” h, then x should also “confirm” hp, where p is some other proposition (hp reads “h & p”). There are limits: if p = h’, then hp is always false, no matter which x you pick. Ignore these. As before we can say p is irrelevant to x (with respect to h) if $latex \Pr(x|hpd) = \Pr(x|hd)$. Continuing the example, let p = “My hat is a fedora”; then $latex \Pr(x|hpd) = \Pr(x|hd)$ and likewise $latex \Pr(x|h’pd) = \Pr(x|h’d)$.

The next step in the “paradox” is to note that if x “confirms” h in the first sense above, then $latex \Pr(h|xd)/\Pr(h|d) > 1$. In our example, this is (1/3)/(1/6) = 2, which is indeed greater than 1. So we’re okay. Now we assume p is irrelevant, so $latex \Pr(x|hpd) = \Pr(x|hd)$. Divide both sides by $latex \Pr(x|d)$; by Bayes’s theorem $latex \Pr(x|hd)/\Pr(x|d) = \Pr(h|xd)/\Pr(h|d) > 1$, so too does $latex \Pr(x|hpd)/\Pr(x|d) = \Pr(hp|xd)/\Pr(hp|d) > 1$; that is, x “confirms” hp. Ho hum so far; just some manipulation of symbols.
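The symbol-shuffling can be verified with the example’s numbers (again my sketch; the variable names are mine):

```python
from fractions import Fraction

# Probabilities from the die example (d is the usual die evidence).
pr_h_d  = Fraction(1, 6)  # Pr(h|d)
pr_h_xd = Fraction(1, 3)  # Pr(h|xd)
pr_x_d  = Fraction(1, 2)  # Pr(x|d): three of six faces are even
pr_x_hd = Fraction(1)     # Pr(x|hd): given a 6, "even" is certain

# Bayes's theorem: Pr(h|xd)/Pr(h|d) = Pr(x|hd)/Pr(x|d); both equal 2,
# which is > 1, so x "confirms" h in the weaker sense.
assert pr_h_xd / pr_h_d == pr_x_hd / pr_x_d == 2
print(pr_h_xd / pr_h_d)   # 2
```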

Then it is claimed that x, since it “confirmed” h, must also “confirm” hp. Well, this is so. Then Mayo says (still with my notation):

(2) Entailment condition: If x confirms h, and h entails p, then x confirms p.

In particular, if x confirms (hp), then x confirms p.

(3) From (1) and (2), if x confirms h, then x confirms p for any irrelevant p consistent with h.

(Assume neither h nor p has probability 0 or 1.)

It follows that if x confirms any h, then x confirms any p.

That’s the “paradox.” I don’t buy it. Like most (all?) paradoxes, there was a trip-up in the evidence along the way.

In our example, in (2), h does *not* entail p, but hp does entail p. What does entail mean? Well, $latex \Pr(p|hpd) = 1$. The paradox says x confirms p just because hp entails p. Not a chance.

What’s happened here is the conditioning information, which is absolutely required to compute any probability, got lost in the words. We went from “x and hp” to “x and p”, which is a mistake. Here’s the proof.

If x confirms h, then $latex \Pr(h|xd) > \Pr(h|d)$ (using the weaker sense of “confirmed”). Because p is irrelevant to h and x, then $latex \Pr(x|hpd) = \Pr(x|hd)$ and $latex \Pr(h|pxd) = \Pr(h|xd)$ and $latex \Pr(h|pd) = \Pr(h|d)$. But if p is confirmed by x, then it *must* be that $latex \Pr(p|xd) > \Pr(p|d)$. But $latex \Pr(p|xd)$ doesn’t exist: it has *no probability*. Neither does $latex \Pr(p|d)$ exist.^{1} What does wearing a hat or not have to do with dice? Nothing. You can’t get there from here. This is a consequence of p’s irrelevancy.

So p can’t be confirmed by x in the usual way. What if we add h to the mix, insisting $latex \Pr(p|hxd) > \Pr(p|hd)$? That doesn’t help, because again neither of those probabilities exists. You can’t have inequalities with non-existent quantities. And when we “tack on” irrelevant p, we’re always asking questions about $latex \Pr(hp|xd)$ or $latex \Pr(hp|d)$ and *not* $latex \Pr(p|xd)$ or $latex \Pr(p|d)$.

Result? No paradox, only some confusion over the words. Probability as logic remains unscathed. If anybody thinks the paradox remains, she should try her hand at stating the paradox purely using the probability symbols and not the mixture of words and symbols. The exercise will be instructive.

**See the necessary comment by Jonathan D and my reply. Looks like JD found the mistake actually starts earlier in the problem.**

————————————————————–

^{1}Thinking every probability has a unique number is a mistake subjectivists make. They’ll say “Well, I *believe* $latex \Pr(p|d) = 1/2$” or whatever, but what they have really done is inserted information and withheld it from the formula; i.e., when they make statements like that they’re really saying $latex \Pr(p|qd)$ for some mysterious q that forms their belief. Given q that probability might even be right, but $latex \Pr(p|qd)$ just is not $latex \Pr(p|d)$. Still no paradox.

Categories: Philosophy, Statistics

Briggs, I think you are barking up the wrong tree here. If we are to say that Pr(p|d) doesn’t exist (the point is an important one, but in other contexts I think you would make it differently*), then neither does Pr(hp|d). On top of that, if we were to change d to the usual die stuff *and* “Briggs’ hat is either a fedora or a baseball cap” (ha!), we would be quite happy to give a probability for p given d, etc., and yet the “paradox” would not be any different.

It does just seem to be confusion over words, though. If we start with the relevant meaning of “confirms”, then there’s nothing to suggest that the entailment condition should be true. It may or may not be a good thing to treat such a condition as a desirable property of a measure of support, such as “confirmation”, but Mayo’s post does not go into this. Instead, it describes confirmation of the conjunction as confirmation of *both* h and p, making (mis)use of the ambiguity of the scope of “both” in natural language. But then, hiding arguments in cooked-up paradoxes is a habit of philosophy.

(*The contrasts between your approaches in this post and the previous are another topic which is interesting in itself!)

Jonathan,

Hmm. I agree with your second paragraph, but let me think about your first. I’m leaning toward agreeing with you. If you’re right, then the mistake starts way, way back when we “tack on” the p. It ends there too, because, as you say, Pr(hp|d) = Pr(h|pd)Pr(p|d) and Pr(p|d) does not exist. Thus no paradox, because the tacking makes no sense at all. If I weren’t jet lagged, I might be in your camp entirely. But I want to think more on it.

The move to make d = “dice and this or that hat” is interesting, but does the paradox still exist? True that Pr(h|pd)Pr(p|d) = 1/6 * 1/2 = 1/12. But for x to “confirm” p it must be that Pr(p|xd) > Pr(p|d). Well, Pr(p|d) = 1/2. And Pr(p|xd) = Pr(x|pd)Pr(p|d)/Pr(x|d) = (1/2)(1/2)/(1/2) = 1/2. So no paradox, even when we add to d.
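That calculation can be checked by enumerating the joint die-and-hat space (a sketch, assuming the expanded d makes the hat and the die independent and each hat equally likely):

```python
from fractions import Fraction
from itertools import product

# Expanded d: die faces 1..6 crossed with the two hat possibilities,
# all twelve combinations equally likely.
space = list(product(range(1, 7), ["fedora", "cap"]))

def pr(event, given=lambda s: True):
    """Pr(event | given, d) by counting equally likely states."""
    cond = [s for s in space if given(s)]
    return Fraction(sum(1 for s in cond if event(s)), len(cond))

def p(s): return s[1] == "fedora"   # p: the hat is a fedora
def x(s): return s[0] % 2 == 0      # x: an even number turns up

print(pr(p))      # Pr(p|d)  = 1/2
print(pr(p, x))   # Pr(p|xd) = 1/2; x does not "confirm" p
```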

But your first point I think might be so. Still no paradox, and an early, not noticed mistake. I like it.

Adults really make these mistakes?

Mr. Briggs,

I agree with Jonnaha about your barking up the wrong tree.

I want to first point out that Fitelson’s assumption

(A) that J is an irrelevant conjunct to H, with respect to x, just in case P(x|H) = P(x|J & H)

and your assumption

(B) that J is irrelevant to H and X

are not the same. B implies A, but not vice versa. (Changing notation slightly can cause some difficulties in discussion.)

Mayo first describes the problems when using the probability as a measure of confirmation via the toy die example.

Then she further expresses her concerns about using R(H,x) = P(H|x)/P(H) as a measure of confirmation. To explain the problem, let’s first go through some simple probability algebra:

(C) R(H, x) = P(H|x)/P(H) = [ P(x and H)/P(x) ] / P(H) = [ P(x and H)/P(H) ] / P(x) = P(x|H)/P(x).

Note how x and H can be switched conditionally, i.e., R(H, x) = R(x, H).

(It goes without saying that we need to assume conditions such that all the probabilities are well-defined. Otherwise, everything is garbage.)

Under assumption A and applying C, R(H,x) = P(x|H)/P(x) = P(x|JH)/P(x) = P(JH|x)/P(JH) = R(JH, x).
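That chain of equalities can be checked numerically; here is a sketch using the die-plus-hat example from the post (J plays the role of the irrelevant hat proposition; the helper names are mine):

```python
from fractions import Fraction
from itertools import product

# Sample space: die faces 1..6 crossed with two hats, equally likely.
space = list(product(range(1, 7), ["fedora", "cap"]))

def pr(event, given=lambda s: True):
    """P(event | given) by counting equally likely states."""
    cond = [s for s in space if given(s)]
    return Fraction(sum(1 for s in cond if event(s)), len(cond))

def H(s):  return s[0] == 6          # H: a 6 turns up
def J(s):  return s[1] == "fedora"   # J: the hat is a fedora
def JH(s): return H(s) and J(s)      # the conjunction J & H
def x(s):  return s[0] % 2 == 0      # x: an even number turns up

# Assumption A holds here: P(x|H) = P(x|J & H) (both are 1).
assert pr(x, H) == pr(x, JH) == 1

def R(hyp):  # R(hyp, x) = P(hyp|x)/P(hyp)
    return pr(hyp, x) / pr(hyp)

print(R(H), R(JH))   # both equal 2: R(H,x) = R(JH,x)
```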

Yes, R(H,x) = R(JH,x). Hence the problem (paradox) and Mayo’s statement that

I think one solution to this is to use a different measure M of confirmation, one at least that M(H,x) > M(JH, x)

I think Mayo is also asking for statisticians’ input on its implications for practice. One thing that comes to my mind is the Bayesian variable selection method using

Posterior odds = Bayes factor * prior odds,

which, in a way, represents a measure of confirmation. Well, I need more time to think about this. Can someone grade this for me?
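The odds identity can be illustrated on the same toy die example (a sketch; H = “a 6 turns up” and x = “an even number turns up”, as above):

```python
from fractions import Fraction

# Ingredients for H = "6 turns up" vs not-H, given evidence x = "even".
pr_H      = Fraction(1, 6)
pr_notH   = Fraction(5, 6)
pr_x_H    = Fraction(1)      # Pr(x|H): a 6 is even
pr_x_notH = Fraction(2, 5)   # Pr(x|not-H): 2 of the 5 non-6 faces are even

prior_odds     = pr_H / pr_notH        # 1/5
bayes_factor   = pr_x_H / pr_x_notH    # 5/2
posterior_odds = bayes_factor * prior_odds

print(posterior_odds)   # 1/2, which equals Pr(H|x)/Pr(not-H|x) = (1/3)/(2/3)
```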

Perhaps others have brought up some of these points which I sketched last night:

There are two issues, “entailment” and irrelevant conjunction. I meant to focus my blog remark on the latter. Do you accept this? I think it’s a disaster in its own right. I explain:

1. First there’s the business of entailment. You wrote:

“The paradox says x confirms p just because hp entails p.”

To clarify what is being said, add the underlined phrases:

(One aspect of) The paradox says x confirms p just because hp entails p and x confirms hp.

You concur that “x confirms p just because hp entails p”.

In any event, this was the standard Glymour point, B-boosters have an answer to this much.

2. B-boosters are within their rights to deny entailment (philosophers call it “special consequence”). I should have indicated, by the way, that quite a lot, including this, was clarified in the long comments that accompany this post. While true, Bayesian epistemologists–like you, I presume–are trying to give a “logic” of inference, evidence, and confirmation that captures intuitions about these things. Well, is it intuitively plausible to say that

x is evidence for (H and J) but x is irrelevant to J? Or, “We deny J has been confirmed in the least by x, but we are correct to report that x confirms the conjunction (H and J)”?

3. So, even putting “entailment” to one side, I think the step to claiming x confirms (H and J), on account of x confirming H, is bad enough (whether it’s called a paradox or not). It’s a maximally unreliable move, I’d say. I can’t tell whether you are accepting the irrelevant conjunction part (which was my intended focus).

4. What do you mean by saying Pr(p|d) and Pr(p|xd) don’t exist?

No underlining, so just add “and x confirms hp”. The end of the 4th line in #1.

Hi Jonathan, I’d like to apologize for misspelling your name. No idea what happened.

Good morning Dr. Briggs,

As I’ve mentioned both in comments and correspondence, much of what you write gives me the impression that there’s no use in trying to understand a phenomenon (or the lack of a phenomenon) using the putative tools of statistical analysis. I’m reasonably sure that you don’t think that to be the case (my hypothesis, subject to uncertainty of course), and it’s quite possible that I’m dense.

That being said (and as I’ve said before), what would really help me, both practically and philosophically, would be an example at least resembling something from the real world. For example (and this is NOT something I’m particularly interested in, but just an example), suppose one had heard that living near high-voltage power lines increased the risk of certain cancers. Such a thing could not, at least in my understanding, be either ruled in or out by known cause-and-effect relationships of electromagnetics and biology. What could an experimenter and statistician do to answer the question “is it the case that there’s a causal relationship between time in proximity to high-voltage power lines and incidence of cancer?” Clearly, there are many factors: What is proximity? How long is long? How high is high-voltage? And many others. Given the funding (with no preconditions on the expected result), would you tackle the problem with a reasonable expectation of enlightening results, or would you throw up your hands and say “no possible set of data and method of analysis can shed light on this question”? Of course, I assume your honesty here, that you wouldn’t take the money knowing the latter to be the case. And again, it’s NOT the particular question in which I’m interested, it’s the approach. So if you prefer a better (but still relatively significant – in the colloquial sense of significant) question, that’s fine with me. But questions about dice and coins don’t satisfy me in this regard, though they certainly elucidate many very fundamental probability questions.

On the other hand, I’m asking a lot for nothing, I know.

Briggs,

Yes, my point in adding to d was to show that the issue is not any sort of paradox, but just that Mayo objects to using the ratio as any sort of measure of “confirmation”, because of what happens with irrelevant extras. (It really doesn’t matter whether you want entailment, or just no confirmation of such conjunctions – they come together.)

The thing is, for Mayo to be interested in your answer, you have to start talking about some sort of hypothesis testing, and I suspect that’s not going to happen.

All,

Back to computer. Working on new post. Answers later. But…

Jonathan D,

Yep. It ain’t never gonna happen. Who cares whether a hypothesis which has probability 0 (assuming continuous parameters; “has probability 0” in the mathematical sense) is true or false? It is and always has been silly.