Anybody who has spent any time in a university library amidst the papers of his specialty knows that the absolute last thing which is needed is more of them. Journals abound and apparently breed—asexually, by dividing—when librarians turn their heads.
The reason is obvious: academics must publish whether they want to or not, whether or not they have anything useful to say, and whether or not anybody reads what they write.
The glut appears across all areas of knowledge, but its effects differ between the humanities and the sciences. In the former, the world would be a far better place had many of its practitioners obeyed the ancient truism that silence is golden. Over-supply in the sciences is less troublesome because poor and inconsequential works are ignored; the presence of this chaff merely makes it harder to discover the wheat.
In the humanities (which I take to incorporate the gooier sciences, like education) one can say anything, the more outré the better. Not so in the hard sciences where at least some passing resemblance to the truth is expected.
Too much resemblance, as a matter of fact. Editors, reviewers, and authors follow a rigid positivistic philosophy: only good news shall find its way into print! Papers with “statistically significant” effects are vastly likelier to be published than works which admit there’s nothing to see. Failures with billets are as rare as Republicans in English departments.
And because the traditional statistical methods in use are fertile in labeling results positive, even when they are not, there exists a tremendous publication bias: many false things are believed true.
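The arithmetic behind this is easy to sketch. Here is a minimal illustration in Python (the sample sizes and counts are hypothetical, chosen only to show the mechanism): simulate many experiments in which the true effect is exactly zero, test each at the customary 0.05 level, and count how many “discoveries” the significance filter manufactures anyway.

```python
import math
import random
import statistics

random.seed(1)

def one_null_study(n=30):
    # Two groups drawn from the SAME distribution: the true effect is zero.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch-style t statistic computed by hand (standard library only).
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 2.0  # roughly p < 0.05 at these sample sizes

studies = 1000
false_positives = sum(one_null_study() for _ in range(studies))
print(f"{false_positives} of {studies} null studies came up 'significant'")
```

By construction, roughly five percent of these no-effect studies clear the significance bar by chance alone. If journals print only those, the published record on this question consists entirely of false findings.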
All this is known and of concern to the seventy-plus signatories of the Guardian article “Trust in science would be improved by study pre-registration.” This open letter proclaims: “We must encourage scientific journals to accept studies before the results are in.”
The eminences lament “publish or perish” and say the “publishing culture is toxic to science.”
Recent studies have shown how intense career pressures encourage life scientists to engage in a range of questionable practices to generate publications — behaviours such as cherry-picking data or analyses that allow clear narratives to be presented, reinventing the aims of a study after it has finished to “predict” unexpected findings, and failing to ensure adequate statistical power. These are not the actions of a small minority; they are common, and result from the environment and incentive structures that most scientists work within.
It’s worse than just that. “[J]ournals incentivise bad practice by favouring the publication of results that are considered to be positive, novel, neat and eye-catching.” Although there is no conceivable universe where the string of letters which comprise “incentivise” should be used when ladies are present, we cannot help but agree that the situation is grim.
The “file-drawer” problem adds to the misery. This is when a study which is not a success, or isn’t sexy, or runs against the consensus, rests in a lonely file in the forgotten reaches of a scientist’s computer. The absence of negative results in print gives an over-optimistic picture of scientific progress.
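The file drawer does more than hide failures: it inflates the effects that do see print. A hedged sketch in Python (the true effect size and sample sizes are invented for illustration): give every study a small real effect, “publish” only the significant ones, and compare the published average against the truth.

```python
import math
import random
import statistics

random.seed(2)

TRUE_EFFECT = 0.2   # a small but real effect, in hypothetical units
N = 30              # per-group sample size

def run_study():
    a = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / N + statistics.variance(b) / N)
    significant = diff / se > 2.0  # roughly a one-sided p < 0.05
    return diff, significant

results = [run_study() for _ in range(2000)]
all_effects = [d for d, _ in results]
published = [d for d, s in results if s]  # the file drawer keeps the rest

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean over all studies:    {statistics.mean(all_effects):.2f}")
print(f"mean over published only: {statistics.mean(published):.2f}")
```

Because a small study must overshoot the truth to reach significance, the published-only average lands at roughly two to three times the real effect. A reader of the journals alone would come away far too impressed.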
The solution the Guardian writers propose is to publish “pre-registration” papers: outlines of studies not yet conducted. Journals which air these outlines must agree to publish the eventual results, whatever they may be. Thus “questionable practices to increase ‘publishability’” will be “greatly reduced.”
I doubt it. Authors will still aim for high “impact factor” journals for their “pre-registrations.” The “impact factor,” incidentally, is “arguably meaningless as an indicator of scientific quality,” though always a matter of bragging rights.
There will be a minor flood of papers pre-registering sketchy theories, and these will be all that is remembered. Some authors will publish their negative results, but many will forget them and move on to more fertile grounds. The bulk of these maybe-so works will be taken as positive evidence even if positive effects are never found or if negative effects are published.
Journalists, by nature not very inquisitive, will tout “If these promised results hold…”, and again these reports will be all that is remembered. Retractions will never appear. What’s to retract?
And worst of all will be the huge increase in papers that must be navigated to get to the good stuff. Pre-registration papers will only be “read”—i.e. their abstracts will be glanced at on PubMed—by other authors looking to pad their bibliographies.
No. The real solution is to judge a fellow by the quality and promise of his work, not by its quantity, and not by even a hint of a numerical rating.
Thanks to Bryan Davies for pointing us to this.