Remember the old stock scam? Call up 1,000 people and tell half Stock A will rise, and tell the other half it will fall. Five hundred people will be slightly impressed by your picking prowess.
Next day call those 500 and tell half of them the stock will rise, and tell the other half it will fall. You’re down to 250, but you have two hits in a row with these people.
Now you can dump the 250 that heard the wrong advice, but you can also try them again with the third call. These folks won’t be as impressed by the results, but why throw away good phone numbers? Use the same routine and tell half of whomever you call the stock will rise, et cetera.
Keep this up and after a few days you have an audience primed to hang on to your words. Only now you’re charging for them.
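The scam's arithmetic is easy to check. A short sketch (the 1,000 starting marks are the figure from the story above):

```python
# Each call, half the remaining marks hear a "correct" forecast
# purely by construction -- no skill involved.
marks = 1000
for call in range(1, 6):
    marks //= 2
    print(f"After call {call}: {marks} people have seen {call} straight hits")
```

After five calls you still have about thirty people who have watched you go five-for-five, and not one of those "predictions" required knowing anything about stocks.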
Charge whatever you like! Justify your large fee on the research that you put into your picks. Time is money. Only be sure to append this notice about side effects with every forecast: past performance is no guarantee of future results; plus, you might go broke.
Keep that in mind as you consider the nifty new paper: “New drugs: where did we go wrong and what can we do better?” It’s in the British Medical Journal by Beate Wieseler, Natalie McGauran, and Thomas Kaiser.
Gist: “More than half of new drugs entering the German healthcare system have not been shown to add benefit.”
Between 2011 and 2017, IQWiG assessed 216 drugs entering the German market following regulatory approval—152 new molecular entities and 64 drugs granted a new indication. Almost all of these drugs were approved by the European Medicines Agency for use throughout Europe. Thus our results also reflect the outcome of European drug development processes and policies.
Only 54 of the 216 assessed drugs (25%) were judged to have a considerable or major added benefit. In 35 (16%), the added benefit was either minor or could not be quantified. For 125 drugs (58%), the available evidence did not prove an added benefit over standard care for mortality, morbidity, or health related quality of life in the approved patient population (fig 1). Table 1 provides examples of assessment outcomes in the different categories of added benefit. As the effects of drugs often vary between patients, there might be subpopulations benefiting despite no relevant effects in overall study populations. However, IQWiG already considers subgroups by age, sex, disease severity, and further disease specific factors. Of the 89 drugs with an added benefit, 52 (58%) showed an added benefit in the whole approved patient population, and 37 (42%) had an added benefit in only part of the approved patient population.
The IQWiG is the Institute for Quality and Efficiency in Health Care. Figure 1 tops the post. Those 54 drugs, the 25%, are busted out into “considerable” (32) and “major” (22) effects.
In any case, the 75% of drugs showing little or no benefit, or even “negative” benefits, are the key. How could such a thing happen, when the people putting out these drugs and molecules are surely smart?
Wait! Don’t answer yet! Because I also have to layer some p-values and parameters on you. I haven’t read every paper introducing the “effect size” of every molecule, but I’d bet that most, and probably all, the evidence showing efficacy is p-value or parameter based. Meaning it is exaggerated—and by a lot. I’m not going to rehash why for the eighty-two millionth time here. Read “Reality-Based Probability & Statistics: Ending The Tyranny Of Parameters!” for why.
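To see the exaggeration mechanism, here is a toy simulation of my own (the trial sizes and effect are made-up numbers, not from the paper): run many small two-arm trials of a drug with a genuinely small benefit, then look only at the “statistically significant” ones, as journals and regulators effectively do.

```python
import random
import statistics

random.seed(1)

true_effect = 0.2   # small real benefit, in standard-deviation units (assumed)
n = 50              # patients per arm in each hypothetical trial
sims = 2000

published, all_estimates = [], []
for _ in range(sims):
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    est = statistics.mean(treat) - statistics.mean(ctrl)
    se = (statistics.pstdev(treat)**2 / n + statistics.pstdev(ctrl)**2 / n) ** 0.5
    all_estimates.append(est)
    if est / se > 1.96:   # crude "significance" filter
        published.append(est)

print(f"True effect:                   {true_effect}")
print(f"Mean of all estimates:         {statistics.mean(all_estimates):.2f}")
print(f"Mean of 'significant' results: {statistics.mean(published):.2f}")
```

The estimates that clear the significance bar average well above the true effect, because only the trials that got lucky on the high side make the cut. That is the exaggeration, built right into the filter.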
The point is that even the 25% is too high. The real number is—and this is a wild guess based on my experience—half that, or less. Just for argument’s sake, make it 10%. Ten percent of all new drugs that make it through the research and regulatory process are good—and really only in oncology, says the evidence—the rest mostly useless or even harmful.
How could this happen? The authors of the paper say more or better regulations are needed. Some of that is surely true.
On the other hand, we also have our stock example married to over-certain statistical methods. The analogy is researchers selling themselves their own stock tips, in part, by mistaking statistical correlation for cause.
Researchers start with a bunch of chemicals and start testing them. Every time classical statistical methods are used to infer that correlation is causation, winnowing down the list for the next stage of testing, over-certainty is created.
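Here is a toy sketch of that winnowing (the pool size, base rate, power, and false-positive rate are all my assumptions, chosen only for illustration): a large pool of candidate molecules, most of them effect-free, run through a single classical significance filter.

```python
import random

random.seed(2)

# Assumed numbers: 10,000 candidate molecules, 2% with a real effect.
# The significance filter passes real effects 80% of the time (power)
# and pure-noise molecules 5% of the time (the usual false-positive rate).
N, TRUE_RATE = 10_000, 0.02
POWER, ALPHA = 0.80, 0.05

candidates = [random.random() < TRUE_RATE for _ in range(N)]  # True = causative
survivors = [real for real in candidates
             if random.random() < (POWER if real else ALPHA)]

real = sum(survivors)
print(f"{len(survivors)} molecules pass the filter; "
      f"only {real} ({real / len(survivors):.0%}) are truly causative")
```

Because the effect-free molecules so outnumber the real ones, even a “mere” 5% false-positive rate floods each stage with correlational survivors, and everyone downstream is more certain about them than the evidence warrants.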
Some molecules whose effects are merely correlational eventually make it through, like the stock picks; and because correlation and causation are sometimes married, some genuinely causative molecules pass, too.
It had to be this way, especially because causation is becoming harder and harder to identify. All or most of the low-hanging molecules have already been picked. The work required to find something truly useful, and not overly harmful, will only increase.