The title today is identical to that of the peer-reviewed paper of the same name in Theory and Society by BVE Hyde.
Hyde begins: “This study begins by outlining the transparency paradox: that trust in science requires transparency, but being transparent about science, medicine and government reduces trust in science.” A solution?
[I]t is revealed that transparency about good news increases trust whereas transparency about bad news decreases it, thus explaining the apparent paradox. The apparent solution: to ensure that there is always only good news to report, which might require lying.
This summarizes the Rochelle Walensky school of philosophy. She and other Experts in the name of The Science during the covid panic mercilessly wielded the Noble Lie.
Hyde acknowledges that the public engages in “overidealizing science” and holds “the naïve view of science as infallible”, which are the roots of the problem. This folk belief originates from the Neil deGrasse Tyson school of philosophy, whose motto is “Science is true, whether or not you believe in it.”
That slogan, beloved by innocent atheists, earnest academics and Unitarians the world over, creates the problem of what to do when it is discovered, as it frequently is discovered, that The Science is false, its claims exaggerated, its promises unfulfilled. But of course it isn’t a problem: this group never loses its trust in science.
Yet why is it desirable to have “trust in science”? Hyde says, “Governments rely on scientific evidence to inform public policy so, if people don’t trust science, they won’t trust policy either and this leads to noncompliance.”
Hyde isn’t aware that noncompliance is the correct decision when The Science is wrong. Noncompliance is usually the correct decision when The Science is not strictly wrong but exaggerated. Noncompliance can be the correct decision when we can’t be certain The Science is right. Demanding compliance because The Science demands compliance is scientism, a fallacy. The right or wrong thing to do is never because The Science.
A prime example from Hyde is Climategate: “In November 2009, the Climatic Research Unit at the University of East Anglia was hacked and more than a thousand emails and two thousand documents were leaked … one of the most cited phrases was ‘hide the decline,'” which regular readers will recall. Hyde tells us that committees of the same scientists who sit on the same grants committees, are funded by the same governments, and travel in and out of the same departments investigated the flap and “found no evidence of wrongdoing”. Yet this transparency, which Hyde finds remarkable, caused a decrease in trust in “climate change.”
Hyde brings up the covid panic fiasco and says “It’s widely thought that vaccines should be safe if they’re mandatory so, if they’re not, that would be bad news.” Hyde says, correctly, that bad news is what the public “consider untrustworthy”.
And so we come to his point:
Speaking solely in terms of public trust in science, the answer is to make sure you always have good news to be transparent about; that is, when you don’t, you lie. If you have conflicts of interest, or even some interests that might give the impression of a conflict, don’t refuse to declare it, and definitely don’t declare it. Instead, lie about it and assert that you have no conflicts of interest. The same goes for the messiness of science. If you’re uncertain, say you’re certain; if there’s disagreement, label them conspiracy theorists; if data is missing, make it up and fill in the gaps. The people think that science is perfect, so let it be perfect…
…An example of a related phenomenon is recorded by Wendy Parker (2014), who shows that climate scientists have the choice between providing policymakers with honest figures, which are uncertain and therefore less likely to encourage policy action, or effective ones, which will probably stimulate action but about which scientits [sic] have claimed more certainty than they really should.
I have long said, and have given a speech, that the need for “solutions” is what drives a lot of “climate change”, and indeed many sciences. Though there’s room for discussion of loss functions, decision analysis and all that, there is no need for a “solution” when you don’t even know you need one. It is wrong to advocate for “solutions” when the stated need for them doesn’t exist, or isn’t known with any great certainty to exist.
That people do advocate for “solutions” even in the face of great uncertainty is scientism at its purest. Scientists in these situations believe “solutions” ought to be had for the sake of “solutions”, because they, the scientists with great brains, desire them. Scientists fallaciously declare the value of these “solutions” to be science.
Hyde realizes that if scientists are caught lying, then trust decreases more, which would “result in a proliferation of conspiracy theories”—true conspiracy theories at that.
What would be even worse would be if the public found out, more than that they’ve been lied to, the truth, especially if that truth isn’t pretty. Then it certainly looks like an evil conspiracy is in place, particularly when it comes to sensitive topics like vaccination or economic policies like quantitative easing. This could result in the complete loss of trust in all our institutions, like science, medicine, government, banks, and anyone else who might be thought to have contributed to the conspiracy, whether they did or not.
Which I would say is the just, correct, and rational outcome. To Hyde, though, “The obvious solution to this is to just not get caught.”
Now Hyde realizes where all this lying would lead, and in the end says he’s not really for the resulting chaos and tyranny. And here I’m with him, up to a point:
We should aim to educate the public to have trust in science despite its imperfections, to understand what objectivity and rigour really look like and why ‘the tangle of science’ is reliable and trustworthy (see Cartwright et al., 2022). They should understand that science ‘proves’ nothing, that conflicts of interest don’t automatically entail bias or fraud, that anomalous studies don’t falsify a whole body of literature and, above all, the public needs to understand that the scientist is human. They make mistakes just like everyone else. Many scientists know this; they’ve known this for over a century.
That last line makes for a terrific punchline.
Proving that scientists are far more fallible than propaganda indicates is good. But this would not make science more reliable and trustworthy, nor should it. It should make science less trusted, which is to say, trusted in line with its increasingly error-prone status.
The real lesson that needs to be taught is the difference between science and scientism. Which is our goal.