However, when I saw that the body of this wrongly neglected jewel contained a marvelous joke, and not a few typos, the memory of authorship returned to me.
Another journal has only recently called off the bounty hunter who was dogging my tracks to collect page charges for an article I wrote perhaps fifteen years ago. Page charges, you ask? Why, many journals require the author to pay for the privilege of publishing. Science is the original vanity press. The economics of publishing are complicated: journals charge authors, then recover their remaining expenses by charging readers, too. Publishers do add value, though. Example: they provide the service of unburdening the author of his copyright.
David Banks, a statistician at Duke and an eminence of the American Statistical Association, has asked for comments about the publishing process (it was from him that I stole today’s title):
I fear our current approach to publishing does not serve us well. It takes too long, so our best scientists are driven to other journals in faster disciplines. Refereeing is noisy and often achieves only minor gains. And the median quality of reviews is deteriorating due to journal proliferation, pressure on junior faculty to amass lengthy publication lists, and the slow burnout of conscientious reviewers.
All true. So’s this: “Published research often does not replicate.” For papers that rely on statistics, this is the greatest sin, as regular readers are well aware.
Banks reminds us that the system was not always thus.
Today’s publication process was essentially invented by Henry Oldenburg, the first corresponding secretary of the Royal Society. He received letters from members describing their research, copied them out in summary form, and mailed those summaries to other members.
It was also the habit of pre-journalified scientists to correspond with one another; letters were passed in lieu of official publication. Yet we admit journals initially were a boon, especially when readership and authorship were limited.
Today, though, in statistics alone there are dozens upon dozens of publications, with more appearing regularly. An advanced computerized statistical model predicts there will be 1.2 journals per statistician by the year 2023, none of which will be, or need be, read. Why the increase? The depressing desire for quantification of the unquantifiable (a particularly dismal trait in statisticians).
It is publish or perish: paper count is the sine qua non of success within the university. Without it, departments would be aswim, unable to decide on promotion or hiring. Remove paper count, the statistic everybody uses even while decrying it, and there would be no objective basis for deciding who stays and who is booted.
Trouble is, with an increasing multitude of outlets, anybody can achieve a pleasing sum. This causes other metrics to be sought, like citation count, or the sounds-like-advertising “impact” factor. The trouble with the latter is that the “best” journals have limited space. And because of the charmingly naive view that peer review is a rigorous filter of truth, authors spend as much time editing the work of others as they do writing their own papers. The decisions about which papers are accepted and which rejected supply the truest definition of random.
Are there alternatives to our stultifying system? Sure. I figured the world deserved to read my jocose but rejected jottings. So instead of enduring the desultory review process again, I stuck the paper on this page and on arXiv. Where, to my delight, it was actually read.
Larry Wasserman (whose books on mathematical statistics are highly readable), commenting on Banks’s plea, agrees:
I think we should abandon journals completely and just use arXiv.
We should eliminate refereeing completely and let the marketplace of ideas decide the value of a paper.
Sounds nice. But how do you get credit for a letter? Or a blog post? Or an arXiv dump? The worry is that somebody suffering from latent accountancy will suggest the number of downloads or the like, as if that would not be easy to manipulate.
Well, you shouldn’t get numerical credit. Each person’s work, or potential for same, should be judged on its own as a whole. This requires extra effort from reviewing committees, who would actually have to read papers instead of counting them, but tough.
This ploy isn’t perfect, either. No system is. For example: article popularity is a weak gauge of quality. It’s easy to write many papers quickly in “hot” areas (I once attended a conference where everybody started their talk “Wavelets are…”). But some topics are more experimental or foundational, areas which may never pay off but which are worthy of investment. And there will be 1,000 arXiv wavelet-neural-net-“big”-data-of-the-day papers to every probability-really-means-this work.
The system of books, blogs, and backups to arXiv is probably the least worst.
Update: Corrected thanks to the ever-watchful eye of JH.