A person whom Nature is calling a “whistleblower” has written a brief confessional in the British Medical Journal admitting to what might be termed statistical fiddling at a “major” drug company. (The just-over-one-page article requires a subscription to view.)
It is always difficult to trust fully any gossip which is reported anonymously, for if a man can hide behind a letter he may say anything, even that which is not so, without fear of reprisal. Too, the person or organization who publishes the gossip has no way of verifying it.
Now Mr X, if we may call him that, speaks of a company’s post-FDA-approval observational studies on drugs, studies whose primary purpose is to tout the drugs. “Patented Xlimicorconaphil is better than the generic! Ask your doctor if Xlimicorconaphil is right for you. Hint: it is.”
Mr X criticizes the statistical methods of these observational studies. He claims, “the truth is that these studies had more marketing than science behind them.” Worse:
Since marketing claims needed to be backed-up scientifically, we occasionally resorted to “playing” with the data that had originally failed to show the expected result. This was done by altering the statistical method until any statistical significance was found. Such a result might not have supported the marketing claim, but it was always worth giving it a go to see what results you could produce. And it was possible because the protocols of post-marketing studies were lax, and it was not a requirement to specify any statistical methodology in detail. On the other hand, the studies were hypothesis testing (such as cohort studies, case-control studies) rather than hypothesis generating (such as case reports or adverse events reports), so playing with the data felt uncomfortable.
The dreadful, should-be-banned term “statistical significance” means a publishable p-value, i.e. one less than the magic, never-to-be-questioned number, a number given to us (rumor has it) by Merlin himself. The number is sacrosanct, it is written into the law. Studies which cannot produce the required number are shunned. Those that find wee p-values are glorified.
Now especially in observational studies, this desirable creature, the wee p-value, can always be found, as long as one is willing to rummage around the data for a sufficient length of time. Mr X claims that is what his drug company has done. He appears to think this practice unusual and a bit shifty. Shifty it may be, but unusual it is not. It is not confined to observational studies, but appears even in designed experiments. And this is to be expected when success is defined in terms of p-values.
Statistics in this way is like a machine into which is fed data, a crank is turned, and out pops a rotten egg or one made of gold. Turning the crank longer increases the chance of gold. Success is trivially identified, but so is failure. The process requires no thinking (except by the nameless mechanics who keep the machine running).
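The crank-turning is easy to demonstrate for yourself. Here is a minimal sketch (the function names and the choice of 20 subgroups are my own illustration, not anything from Mr X’s account): generate pure noise with no real effect anywhere, test enough “subgroups” of it, and a publishable p-value below 0.05 reliably turns up. With 20 independent looks, chance alone delivers at least one “significant” result roughly 1 − 0.95²⁰ ≈ 64% of the time.

```python
import random
import math

random.seed(42)

def z_test_p(sample):
    """Two-sided p-value for H0: mean = 0, using a normal approximation."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def hunt_for_significance(n_subgroups=20, n_per_group=50):
    """Test many noise-only 'subgroups'; return the smallest p-value found."""
    pvals = [z_test_p([random.gauss(0, 1) for _ in range(n_per_group)])
             for _ in range(n_subgroups)]
    return min(pvals)

# Fraction of noise-only "studies" in which rummaging through 20 subgroups
# turns up at least one p-value below the magic 0.05:
hits = sum(hunt_for_significance() < 0.05 for _ in range(500))
print(f"{hits / 500:.0%} of noise-only studies yielded 'significance'")
```

The data contain nothing; the machine produces gold anyway. The only inputs that matter are the number of looks and the patience of the operator.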
Mr X also claims:
Other practices to ensure the marketing message was clear in the final publication included omission of negative results, usually in secondary outcome measures that had not been specified in the protocol, or inflating the importance of secondary outcome measures if they were positive when the primary measure was not.
Which sounds like standard politics. But I wonder. How often do drug companies try to hide negative results? Truly negative, I mean. Like discovering that widows who eat Xlimicorconaphil stroke out at rates exceeding the general population? What happens when this aberration finally outs? Smells like jail time.
Instead it’s more likely that the kind of “negative” results Mr X means are slight increases in blood pressure in some subset of a subset of the population of those who take especially high doses of Xlimicorconaphil. Not a good thing, but not as awful as death or disfigurement.
Anyway, those negative findings are just that: findings. Produced using the same questionable statistical procedures as the positive findings which Mr X isn’t so keen on. How robust are they then? Probably not very.
The truth for most drugs is usually something like this: Xlimicorconaphil was found, via the usual FDA process, to be marginally better than the generic in some subset of the population. Xlimicorconaphil produces slightly different side effects, or of different intensity or frequency. The drug company, having to recoup its investment, takes this information, dresses it up, and sells the pill as New and Improved!
Nothing shady about this, especially in our all-marketing-all-the-time culture where such behavior is expected of everyone. The real worry is that doctors will cease being skeptical gatekeepers.
Thanks to Brad Tittle who suggested this topic.