The Randomized In Randomized Controlled Trials Is Pure Superstition; Bad Magic

Why You Need To Read This

My dear readers, a complex subject today, presented in the guise of a book review.

We are increasingly beset by lunatic psychotic sociopathic rulers wielding The Science like a club. Not always, but certainly most of the time, this The Science is bad science. So we must grasp how this bad The Science is created.

Some of the methods are easy to see, and we have gone through many. Some are subtle, and far from simple. But there is one method that is particularly beloved by almost all scientists, and which all think is dandy. That is the so-called gold standard Randomized Controlled Trial.

There is nothing in the world wrong, and everything right, with a controlled trial. But randomization is pure superstition, no different than cargo cult science, as I have explained in great detail in Uncertainty (scroll down here). And will explain here, too.

Randomization does nothing except give a false sense of certainty that cause has been proved in an experiment. Randomization is treated like a magic wand that is waved over data and blesses it. Which you might think is yet another hilarious joke, of the kind for which I am so famous. Alas, no. It is in earnest.

The book we are reviewing is The Tangle of Science: Reliability Beyond Method, Rigour, and Objectivity by Nancy Cartwright, Jeremy Hardie, Eleonora Montuschi, Matthew Soleiman, and Ann Thresher. I do not like this book, but wanted to.

There is tremendous effort required on your part to follow this post. But follow it you must.

You can skip the Review and go right to RCTs.


It is unlikely any book written by a committee will come off sounding other than that it was written by a committee. Alas, that is true here, too. Too much of the book reads like a transcript from a discussion group. I at times pictured a group of women taking turns giving their best concerned faces—you’ve seen them—saying “I feel…”, while somebody took notes.

The arguments are therefore not tight: there are too many words. The longest chapter of the book asks Wither Objectivity? As philosophically interesting as that question might be, the reader is bludgeoned into not caring. Same with the chapter defining “rigorous”. Does it matter whether a piece of science is called rigorously objective or objectively rigorous? Maybe. But I simply could not love these topics when presented like this.

The “tangle” itself is obvious. Most measured things have lots of causes which operate under a plethora of changing conditions, so it’s damned hard, and even impossible, to keep track of it all in complex phenomena. Which everybody already knew. We’ll have more on this subject later, on what I call the causal chain.

That’s all I’m going to say of the book. Which makes this review unfair, but I needed to save space for its central mistake about RCTs.

Randomized Controlled Trials

I will prove to you that RCTs cannot prove cause. The authors of Tangled will also prove that, inadvertently, while trying to prove RCTs can prove cause.

First off, controlled trials can prove cause conditionally. Suppose a scientist says, whether or not he believes it, “Here are the only possible causes of O (the outcome), and I shall control or account for all of them, including this new one, X.” Then if the uncertainty that O takes some value changes depending on X, X might be a cause of O. And it is a cause of O, conditionally, if all those other causes do not in turn cause X in all circumstances.

Of course, the scientist might be wrong. His list of causes could contain lacunae, or it might have spurious entries. No matter: conditionally, his judgement is correct about X being a cause. Unconditionally he might be wrong.

With me so far?

Trials which are highly controlled, usually of the very small or common, are good at identifying cause. Which is why they are used so well and often in physics, chemistry and in everyday life. You shoot a slingshot at the window. It breaks. You caused it to break. One form of cause. The rock penetrating the window is another cause. Cause has aspects.

Your controlled experiment has proved cause. Conditionally. Because ackshually, somebody will chime in, “Yeah, well, an alien from Klygorg could have shot a secret space ray at the window at the same time! That could be the true cause.” Well, you can go on like that imagining causes forever. And if you’re short of material for a peer-reviewed publication, that’s what you do. The rest of us will spank you for breaking the window.

It was, therefore, obvious that control can prove cause, at least conditionally, on the belief that the thing under consideration is a cause. You see, I trust, the circularity. But it is not a vicious circle.

All right. Let’s quote Tangled on RCTs.

An RCT is an experiment that uses a correlation…between a treatment T and a later outcome O in a population to draw causal conclusions about T and O in that population…T may be correlated with a later O and yet not cause it if T is correlated with other factors that cause O. Such factors are called ‘confounders’. For the moment we call the net effect of the confounders C…

I’ll pass over the loose way of speaking of correlation (most think only linear). They miss that T may also be correlated with O if it has nothing in the world to do with it or any confounders.

Now say that T is orthogonal to C if the two are probabilistically independent: Prob (C|T) = Prob (T|C). At its simplest what an RCT does is try to ensure orthogonality between T and C in the population enrolled in the experiment.

This is not independence as it’s usually defined. The usual is this: Pr(CT|E) = Pr(C|E)Pr(T|E). Which is to say, knowing C tells you nothing about T, and knowing T tells you nothing about C, given some background evidence E. Which must be present; E, that is. There is no such thing as unconditional probability.
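
Worse, their condition is nearly empty. Since Pr(C|T) = Pr(CT)/Pr(T) and Pr(T|C) = Pr(CT)/Pr(C), the two are equal whenever Pr(C) = Pr(T) (or Pr(CT) = 0), no matter how strongly dependent C and T are. A minimal sketch, with an invented joint distribution:

```python
# A sketch, with an invented joint distribution, showing that the book's
# condition Pr(C|T) = Pr(T|C) is not independence: it holds for any pair
# with equal marginal probabilities, however strongly dependent.

joint = {(1, 1): 0.4, (1, 0): 0.1,   # (C, T): C=1 row
         (0, 1): 0.1, (0, 0): 0.4}   # C=0 row

p_C = joint[(1, 1)] + joint[(1, 0)]   # Pr(C=1) = 0.5
p_T = joint[(1, 1)] + joint[(0, 1)]   # Pr(T=1) = 0.5

p_C_given_T = joint[(1, 1)] / p_T     # 0.8
p_T_given_C = joint[(1, 1)] / p_C     # 0.8

assert abs(p_C_given_T - p_T_given_C) < 1e-12   # their "orthogonality" holds

# Yet C and T are strongly dependent:
# Pr(C=1, T=1) = 0.4, while Pr(C=1) * Pr(T=1) = 0.25.
assert abs(joint[(1, 1)] - p_C * p_T) > 0.1
```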

Their definition is odd. It’s the probability confounders are, what, operative? given that a treatment has been applied, which is supposed to equal the probability the treatment is applied given confounders have been, what, applied? This has nothing to do with what we want to know, which is how much of an effect, if any, T has, in the presence of C, given E.

We can’t know these probabilities anyway, because we cannot see the C! (Unlike sailors.) So, even if this criterion is right and proper, how can you know if Prob (C|T) = Prob (T|C)? Answer: you cannot.

What we want to know is if, in this trial, the confounder causes operated along with the treatment cause, or vice versa, or that only one set of causes worked, or etc. In other words, what we want to know is Pr (O | TCE) and Pr (O | T’CE), which is the probability the Outcome (takes some value) given the treatment is applied (T), in the presence of the Lord knows how many confounders C, and whatever background evidence E we have. (Yet we must take C as part of E, since we can’t see C.) Or the same probability but assuming the treatment is not applied (T’, using one way of writing “not-T”).
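
Those two probabilities are nothing exotic; from the trial itself they are just relative frequencies. A sketch with invented counts:

```python
# A sketch, with invented counts, of the quantities the trial can actually
# estimate: the relative frequency of the outcome with and without the
# treatment, the unseen confounders C lumped into the background evidence E.

treated   = {"O": 30, "not_O": 70}   # hypothetical treatment-group tallies
untreated = {"O": 20, "not_O": 80}   # hypothetical control-group tallies

def pr_outcome(group):
    """Relative frequency of the outcome in a group."""
    return group["O"] / (group["O"] + group["not_O"])

pr_O_given_T     = pr_outcome(treated)     # estimates Pr(O | T C E) -> 0.3
pr_O_given_not_T = pr_outcome(untreated)   # estimates Pr(O | T' C E) -> 0.2
```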

Here comes the bad move:

The first stage in trying to ensure orthogonality is random assignment.

My friends, random only means unknown, or unknown cause. So that random assignment means what caused a person to be assigned the treatment or control is not known. That’s it. Nothing more.

Random assignment is the opposite of control. So that randomized control is a sort of oxymoron. In reality, the control is acknowledgement that some causes, or potential causes, are already known or assumed, which is why the persons in, say, a medical trial are pre-separated by sex. Well, I mean in the old days, when medicine still acknowledged biological sex.

Here comes the magic wand:

The population is randomly assigned, half to the treatment group, where everyone receives it, and half to the control group, where no one receives it. Random assignment…ensures that T is orthogonal to all confounders at the point of assignment…

No it doesn’t.

In the group you assign the treatment you have no idea what the confounders are. If you did, you would control for them. They are confounders because you don’t know what causal powers they have. There is absolutely no guarantee, whatsoever, that your trial will have an equal split of confounding causes in your treatment and control groups. Further, since you have no idea what these confounders are, there is no way to know what fraction of the list of confounders are in each group. For you do not know the list. If you did, you would control for them.

You don’t even know how many confounders there are: there may be one, there may be none, there may be plenty. Your treatment group may have all of one confounder and none of all the others, and the control group may have none, some, or all.

“Randomization” does not, and cannot, produce an equal split. How could it, since you don’t know what the confounders are, or how many? Randomization does nothing to help you. Except take from experimenters the ability to assign people to groups. Which is not a bad thing, because as I say all scientists believe in confirmation bias, but they all believe it happens to the other guy. But you can get the same thing with blinding, and without the bad magical thinking of randomization.
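
A toy demonstration, with invented numbers, of what a single “randomized” assignment actually does with an unseen confounder:

```python
import random

# A sketch of a single "randomized" assignment: 40 subjects, 8 of whom
# carry a confounder the experimenter cannot see (numbers invented).
random.seed(7)   # one trial, not a long run; any seed will do

subjects = [True] * 8 + [False] * 32    # True = carries the confounder
random.shuffle(subjects)
treatment, control = subjects[:20], subjects[20:]

# Nothing forces an even 4-4 split of the unseen confounder between groups.
print(sum(treatment), "confounded in treatment;", sum(control), "in control")
```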

Here, now, is the self-own, and admission that the magic wand is powerless:

Recall, however, that probability is an ‘infinite long run’ notion.

No, it isn’t. Unique, single, or finite propositions can be assigned probability. But let that pass.

Drum roll, with my emphasis:

This means that you shouldn’t expect an equal distribution of C factors amongst the T (i.e. treatment) and not-T (i.e. control) groups in any single randomisation of the study population, but rather that if you repeat the experiment, doing a random assignment again and again, over and over, on exactly the same population with exactly the same characteristics, the sequences of relative frequencies of Cs in T and of Cs in -T [not T, or the control] will converge to the same limit.

I’m sure I don’t have to tell you I do not give a damn about trials I have not done, but theoretically might, and that the trials that I did not do cannot give power or knowledge to the one that I did do. Which is what “randomization” purports to do!

Whatever happens out at infinity is of no interest to what happens to my single experiment. I can’t do an infinite number of experiments. Notice there is no notion, and can be no notion, about rate of convergence, either. Leading to the Large Enough Fallacy: the belief that, say, 20 is close enough to infinity—when any number you can think of is infinitely far away from infinity.
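
The gap between the long-run promise and any single trial can be seen by brute force; a sketch with invented numbers (10,000 re-randomizations of a 20-subject, 50/50 assignment):

```python
import random

# A sketch of the Large Enough Fallacy with invented numbers: repeat a
# 50/50 assignment of 20 subjects many times. The long-run average split
# settles near 10, as the limit promises, yet single trials stray widely,
# and the limit says nothing about how widely, nor how fast it is approached.
random.seed(1)

splits = [sum(random.random() < 0.5 for _ in range(20)) for _ in range(10_000)]

mean_split = sum(splits) / len(splits)    # close to 10 across all repetitions
worst = max(abs(s - 10) for s in splits)  # but some single trials are lopsided
print(mean_split, worst)
```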

And anyway, just what does their use of exactly mean? No, I’m asking. Stop and think of this question.

Did you stop and think? Admit it if you didn’t.

If a new trial is exactly the same as the old, then how in the unholy hell could there be more or fewer C in the new trial group as in the old trial group? The only way is if they are not the same! If the trials were exactly the same, then you’d get exactly the same answers every time. Ah, so how can they think there would be differences? Because of the superstition of “randomness”, which is allowed to vary in some mystical way, though everything else is exactly the same. Great nonsense.
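
The point can be made mechanical. If “exactly the same” includes the mechanism of assignment (here a seeded generator, my stand-in for their “randomisation”), then repeating the trial repeats the split exactly; a difference in the split requires a difference somewhere in the inputs. A sketch:

```python
import random

# A sketch: if "exactly the same" includes the mechanism of assignment,
# repeating the trial repeats the split exactly. Different groups out
# require a difference somewhere in the inputs.

def assign(seed, n=40):
    """Return the treatment group from one shuffled assignment."""
    rng = random.Random(seed)
    subjects = list(range(n))
    rng.shuffle(subjects)
    return subjects[: n // 2]

assert assign(123) == assign(123)   # same everything in, same groups out
assert assign(123) != assign(124)   # different groups need a different input
```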

See what I mean about giving probability magical powers, or treating it like a superstition? If you do not see what I mean, you had better figure it out. The mixing up of probability and cause, or knowledge of cause, is rife in science. It is why so many “studies show” obvious falsities.

We finally understand why some think “randomization” can prove cause. It is only that old conditioning we met above. T is thought to be a cause because T is conditionally assumed to be a cause. That’s it, and nothing more.

So ignore all the hoopla about “randomized” controlled trials. Each study stands or falls on the control it had, and not on the “randomization” given to it.

If you still haven’t grasped the key, I will follow this up with an article There Is No Such Thing As A Fair Coin.

Subscribe or donate to support this site and its wholly independent host using credit card: click here. Or use the paid subscription at Substack. Cash App: $WilliamMBriggs. For Zelle, use my email, and please include yours so I know who to thank.


  1. Yes with respect to the trees, but no with respect to the forest. As you say, an RCT does not usually demonstrate causation in medicine (it does for surgery: shoot half a group in the head and their deaths do allow you to impute some conclusions about causation). However, in medicine correlation is usually sufficient.

    e.g. treatment X on Y patients is withheld from Z patients who are otherwise treated the same. Y patients live longer than Zs – so is the benefit wholly or partially caused by X? maybe, but we don’t care – what we care about is that X correlates with better outcomes.

  2. Johnno

    If THE SCIENCE ™ cannot tell us anything, we must get rid of all THE SCIENtists.

    Fire them at random, while keeping their paranoid hersteria under control.

  3. If they cut all the BS and went back to divination via chicken entrails it would be a lot cheaper, easier, quicker, and more repeatable.

  4. Rudolph Harrier

    I’ve never really thought about the notion of convergence before, except in the broad case. What I mean is that I’ve thought about how in a frequentist setting you could have a 50/50 coin flip repeated 1,000,000 times in a row, every time showing heads, and it could still be a valid random variable for that distribution if these were cancelled out later by 1,000,000 more tails down the road. As you say any finite number is infinitely far from infinity, so there is always infinitely much room to “correct” unusual behavior and preserve the distribution.

    But what this article has got me thinking about is whether convergence can be given any useful meaning in a mathematical sense. What I mean is, can we preserve both the ideas of “good convergence” and “independent observations?”

    We can talk about convergence rates of a sequence of observations. For example, we might desire that after n observations the difference between the observed average and the real average is at most 1/n. Thus if we assign 0 to tails and 1 to heads, after the third observation we would want the observed average to be within 1/3 of the real average of 1/2, meaning between 1/6 and 5/6; after four observations we want an average between 1/4 and 3/4, etc. But if we set up our bounds in this naive fashion, then the observations cannot be independent. Suppose we have two heads in a row. Then a third heads would put the average to 1, not in between 1/6 and 5/6. Thus in order to have our arbitrary “sufficiently high” level of convergence the coin would be forced to show tails (leading to an allowed average of 2/3), but this is not allowed in the frequentist interpretation (or most other interpretations).

    Now the bounds I choose were arbitrary, but if we choose any restrictions at all then eventually we will not allow an average of 1, meaning that if we had only heads before then the coin would be forced to be tails on the next flip.

    So the naive approach doesn’t work. But is there any more sophisticated approach that allows us to force some convergence at any stage before the jump to infinity (which we can never do in practice)? I know that we can say things like “the sequences where the average was very far away from the true average after a large n number of observations would be very unlikely” but this is unsatisfying to me since we need to use the notion of convergence to define what probabilities mean in practice to begin with.
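
The naive 1/n bound described in this comment can be checked directly; a small sketch using exact fractions:

```python
from fractions import Fraction

# Check of the naive convergence bound: demand that after n flips the
# running average of heads (1) and tails (0) is within 1/n of 1/2.

def allowed(flips):
    """True if every prefix of the flip sequence obeys |avg - 1/2| <= 1/n."""
    for n in range(1, len(flips) + 1):
        avg = Fraction(sum(flips[:n]), n)
        if abs(avg - Fraction(1, 2)) > Fraction(1, n):
            return False
    return True

# Two heads in a row are allowed, but the bound then forces tails:
assert allowed([1, 1, 0])        # third flip tails: average 2/3, within 1/3 of 1/2
assert not allowed([1, 1, 1])    # third flip heads: average 1, outside the bound
```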

  5. Rudolph Harrier:

    Re: “But is there any more sophisticated approach”

    Yes – see causal colliders as used in physics.

  6. Gunther Heinz

    If you flip a coin and it lands on its edge, that PROVES that a woman can have a penis.

  7. The BEST that randomization can do is to make unknown confounders relatively spread out among treatment groups; a bit of fiddling with a hypergeometric distribution gives this insight. But it’s RANDOM! It guarantees nothing, only tilting the odds in the experimenter’s favor. I’m not sure why folks who can swallow the fiction of a sampling distribution can’t understand the limitations of randomization.
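
The hypergeometric fiddling mentioned in this comment can be sketched with invented numbers (8 unseen confounded subjects among 40, assigned 20/20):

```python
from math import comb

# Sketch of the hypergeometric calculation alluded to above, with
# invented numbers: 8 confounded subjects among 40, assigned 20/20.

def pr_split(k, confounded=8, total=40, group=20):
    """Pr(exactly k of the confounded subjects land in the treatment group)."""
    return comb(confounded, k) * comb(total - confounded, group - k) / comb(total, group)

even = pr_split(4)                                  # perfectly even 4-4 split
lopsided = sum(pr_split(k) for k in (0, 1, 7, 8))   # a 7-1 split or worse

# Random assignment makes the even split the single most likely outcome,
# but far from guaranteed: it happens less than a third of the time.
print(round(even, 3), round(lopsided, 3))
```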
