Control Groups With No Cancers In Hormesis Data Sets

A well-known paper by Duport et al. on radiation hormesis makes a statement about control groups that is not quite right. The paper is “Database of Radiogenic Cancer in Experimental Animals Exposed to Low Doses of Ionizing Radiation”, in Journal of Toxicology and Environmental Health, Part B, 15, 186–209, 2012.

The authors gathered radiation trial data with varying ranges of doses and radiation types, including no-dose controls, for various animals and various kinds of cancers. The idea was to see if very low doses of radiation provided protection, it being undisputed that large doses are damaging.

If low doses are, sometimes, protective, for whatever biological reasons, the linear no-threshold model used by regulatory agencies everywhere would have to be tossed. This LNT model says any dose of any radiation above zero is harmful.

Duport says:

No cancers were observed in some control groups. The proportion of control groups without cancers ranged from a maximum of 11.9% in experiments with alpha radiation to 0 (zero)% in experiments with gamma radiation. That observation is important because when there are no cancer outcomes in control animals, it is only possible to detect either no increase or an increase in cancer risk, but not a decrease in risk, following radiation exposure.

Not so.

Suppose the control group was n = 1 with 0 observed cancers, and we’re expecting 10 new animals/patients, whatever. (The number of new animals expected is not important.) Then the probability of at least 1 new cancer out of 10 new animals is 0.83.

This follows from a calculation on the origin of parameters you can find in this award-eligible book. The only thing the probability calculation presumes is that there is the possibility of cancers or of no cancers in any group.
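
In symbols (a sketch; this is the formula the code below implements), with k cancers observed in an old group of n animals and n_new new ones expected, the predictive (beta-binomial) probability is

     Pr(x new cancers out of n_new | k of n old, those premises) = C(n_new, x) B(x + k + 1, n_new - x + n - k + 1) / B(k + 1, n - k + 1),

where B is the beta function and C(n_new, x) the binomial coefficient. For the example above, k = 0, n = 1, n_new = 10, and x = 0 give B(1, 12)/B(1, 2) = (1/12)/(1/2) = 1/6, or about 0.17; hence the 0.83 for at least 1 new cancer.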

The idea is simple. It’s obvious the observed cancer rate in the control group is 0/1 = 0. But a control of size n = 1 does not give much evidence that there will never be future cancers.

I must emphasize that the 0.83 is a predictive probability conditional on the premise that there could be cancers or no cancers in future animals/patients, and on the observation of no cancer in the n = 1. If you had other outside knowledge about control cancer rates, then this model is not for you. This model assumes we’re starting from scratch (well, like all frequentist analysis does). Just remember all probability is conditional—on the premises you choose! (The code below works for any number of observed cancer cases in the control or any group, and not just 0.)

Notationally:

     Pr(at least 1 cancer out of 10 new | 0 cancers observed, n = 1, future populations can have cancers or no cancers, with no other background information, and no exposure).

(Recall 1 – Pr(no new cancers out of 10 new | all those premises) = Pr(at least 1 cancer out of 10 new | those same premises).)

Suppose instead of a control of n = 1, we had a control of n = 10, all with no cancers, still with 10 new expected. Then the probability of at least 1 new cancer is 0.48. Smaller, because now we have more evidence that no exposure predicts no cancers.

With control of n = 100, still with 10 new, the probability of at least 1 cancer falls to 0.09.

And so on. The greater the n in the no-cancer control group, the smaller the chance of seeing cancers in the new animals/patients. That should make sense, since the evidence grows that cancer is not associated with no exposure.

But suppose now that you had a control group of, say, n = 10, all with no cancer, and an exposure group at some low level of radiation with a larger n, say 20, also with no cancers. Then there is a larger probability of no cancer (in future observations) conditional on the low-exposure group’s evidence.

Indeed, Pr(1+, n_new = 10 | n_old = 10, no exposure) = 0.48, but Pr(1+, n_new = 10 | n_old = 20, yes exposure) = 0.32.

There is a smaller chance of new cancers given the evidence of the exposure group compared to the evidence of the non-exposure control group.

Meaning “protection” from the low exposure is a possibility, but this is strictly correlational. Probability models can’t find cause.

This model also works if there were cancers in the control group as well as the exposure group. I mean we can get the predictive probabilities based on all available sample evidence. This is not testing in the frequentist or Bayesian sense!

Note, too, that the number of new animals/patients expected is not crucial. The same philosophy holds whether this is 1 or a million. (If there are no new animals/patients expected, then you don’t need to model!)

Homework

Here’s the very simple code that makes the calculations. For the theory, see Uncertainty.


new.p = function(x, n.new, k.old, n.old){
   # Predictive probability of x new "successes" out of n.new future trials,
   # given k.old "successes" observed in n.old old trials.
   # Beta-binomial predictive distribution; the priors are deduced, not subjective.
   a = k.old + 1
   b = n.old - k.old + 1
   exp(lchoose(n.new, x) + lbeta(x + a, n.new - x + b) - lbeta(a, b))
}

To get the examples above, do things like this:


1 - new.p(0, 10, 0, 10)
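
The other numbers quoted above come from the same sort of call, still with 10 new expected and 0 cancers observed in the old group:


1 - new.p(0, 10, 0, 1)    # control of n = 1: about 0.83
1 - new.p(0, 10, 0, 100)  # control of n = 100: about 0.09
1 - new.p(0, 10, 0, 20)   # exposed group of n = 20: about 0.32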

You could also put 0:10 for x and see the whole distribution. In our case, this would give the probability of 0 new total cancers, 1 new total, etc., up to n.new (here equal to 10).
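
For instance, a minimal sketch using the control of n = 10 with 0 cancers:


probs = new.p(0:10, 10, 0, 10)  # predictive probability of 0, 1, ..., 10 new cancers
round(probs, 3)
sum(probs)  # sanity check: the whole distribution sums to 1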

Now for the real homework. Find some real examples in practice, and post them below. My experience with hormesis literature is not large, and I’d be curious to learn if tossing out no-cancer non-exposure control groups is a common practice.


11 Comments

  1. Sheri

    ALL books are award eligible. Language has meaning.

    As someone with ZERO risk factors who fell into the cancer group, all I can say is voodoo is a better predictor of cancer than science.

  2. Ken

    A test with mismatched group sizes (exposed vs. control/not exposed) is bad test design. Using that to argue from a larger ‘exposed’ group not showing cancer is flawed for that reason, and for neglecting the time factor: groups with low exposure may be at increased risk but over a much longer period than studied. The original authors are correct to assert that decreased risk cannot be imputed.

    The math might suggest such, but the physics/biology do not support such applications of the math. Put another way, applying the math in that manner effectively creates a flawed model of reality.

  3. Brian (Bullaren)

    Are we meant to conclude that low-volume exposure to radiation has a “Mithridatic” effect?

  4. Petras

    On what page in the Uncertainty book would I find this example?

  5. Briggs

    Petras,

    Around p 145 and thereabouts.

  6. Uncle Mike

    This was exactly the question I was asked in my Masters of Stats oral exam way back in the last Millennium. I got the answer wrong, but they gave me the silly sheepskin anyway. Lot of good it’s done me…

    Get Dr. B’s book. It may not get you a job, but you might achieve some measure of enlightenment. And that probably can’t hurt you.

  7. SteveBrooklineMA

    Suppose we are to toss an 8-sided die, the sides of which are numbered from 0 to 7, or in binary from 000 to 111. Just one side will show. As you have presented many times, we deduce that the probability of any particular side showing up is 1/8. What about the probability conditional on an even number showing, i.e. conditional on the 1’s bit being a 0? That’s 1/4 for each of the possible outcomes 0, 2, 4 and 6, i.e. 000, 010, 100, and 110.

    Now suppose we have a population of N people, and to each person a bit (either a 0 or a 1) has been assigned. Without any other information, we conclude that the probability of any particular bit configuration occurring is (1/2)**N. We are being presented with a 2**N sided die! If we sample 1 person, we’ve learned one bit, thereby eliminating half the bit configurations. The probability of each remaining bit configuration drops to (1/2)**(N-1).

    Of course, this last analysis is very different from the one you present in this post. Why not apply the deduced die method?

    Best Regards from a long-time reader.

  8. aGrimm

    Having had a career in Health Physics (HP), I studied the effects of radiation in depth. I too recommend Ray’s link to Calabrese. Anecdotally, my first HP job was as an inspector of radioactive material licensees. I inspected the U of Washington’s Fisheries Department, where they were giving low doses from an irradiator to salmon eggs, which were then hatched, tagged, and released. The study indicated that more salmon returned and were, overall, of bigger size and better health than fish from non-irradiated eggs. This study started my questioning of the Linear No Threshold (LNT) theory. I soon became convinced that the LNT is incorrect. I wondered why it held such sway in the face of thousands of empirical studies that demonstrated the beneficial effects of low doses.
    Another anecdote. I attended a meeting of the Campus Radiation Safety Officers (CRSO) wherein we had the head of the EPA’s Radon group speak. The EPA promoted the LNT vigorously. He was asked why the EPA did not take into account the studies that refuted the “threat” of low doses of Radon. He cited the Precautionary Principle. Later I chaired a session and asked the approximately 120 HPs attending to raise their hands if they agreed with the EPA – four people raised their hands (they all worked for a State Radon Department). The rest of the HPs raised their hands to the question, “Do you disagree with the EPA regarding Radon?” I wondered no more as to why the LNT held sway. I became convinced that the only knowledgeable people who seriously agreed with the LNT are those whose jobs depended on scaring the ignorant masses. I’m only so-so at statistics and math, but I know enough to spot BS. I studied the EPA’s rationale for their Radon program; it is math and statistics obfuscation based on the absurd Precautionary Principle.

    I would not argue with Briggs on his analysis of the faulty statistics in the study, but I can say with certainty that there are tons of studies showing the beneficial effects of low doses. Much of the debate leads back to the age-old philosophical question posed by the Precautionary Principle: If something has the potential to harm one individual but is beneficial to the bulk of the individuals, do we not worry about the potentially harmed individual? It is always a fun discussion, but I’m going with – if it is good for most people, then the odds are it will be good for me.

    My expertise is not often a subject of discussion, so I don’t comment here too often. However, I thoroughly enjoy Briggs’ juxtaposition of philosophy/theology with science and culture. The LNT controversy falls into this juxtaposition. My university required philosophy and theology courses no matter the degree. I am eternally grateful that these were required, as they imposed a certain discipline on my thinking. Unfortunately, that university no longer requires the courses.

    FWIW: if you hear that one hit of radiation can cause a cancer, the odds of this happening are one in a quadrillion.

  9. Briggs

    aGrimm,

    Thanks for the comment.

    Yes, it does seem to me that, for many things, LNT is wrong. What concerns me is that the evidence against it could be made stronger if we weren’t “throwing away” evidence from these control groups. I just have no clear idea how often that happens.

    I’ve also met Ed, who is very good on this.

  10. I am going to bring up something about every multicellular biological organism that I am aware of, which is this: their survival curves are all parabolic. Below optimal amounts of whatever is limiting, their survival rates increase as we approach optimal. At optimal, no more improvement. Past optimal, survival begins to decrease. Think about it, and consider the contingency of it all.
