
As fears of the novel coronavirus 2019-nCoV continued to spread last Friday, an inflammatory new paper appeared on bioRxiv, a preprint server, where scientists post work that hasn’t been vetted.

Titled “Uncanny similarity of unique inserts in the 2019-nCoV spike protein to HIV-1 gp120 and Gag,” the paper claimed to find similarities between the new coronavirus and HIV, the virus that causes AIDS. The use of the word “uncanny” in the title, together with “unlikely to be fortuitous” in the abstract, led some to think that the authors were suggesting the virus had somehow been engineered by humans.


The paper, from academic institutions in New Delhi, India, would have been important, and alarming, if true. Except that it wasn’t.

The paper was almost immediately withdrawn, but not before plenty of handwringing from researchers who complained that the appearance of such shoddy work on a preprint server without vetting by peer reviewers is precisely why the hoary old model of science publishing is better at keeping junk science out of the literature.


Except that’s not true, either. The old model has its advantages, to be sure, but it, too, is prone to the menace of pseudoscience, bad data, and other flaws — despite traditional academic journals’ army of peer reviewers. And when these publications publish bad or erroneous research, it can take months or years for the papers to be corrected or retracted — if they ever are.


In contrast, the reaction from the scientific community to the bioRxiv paper was swift. In a nutshell, commenters on bioRxiv and Twitter said, the authors’ methods seemed rushed, and the findings were at most a coincidence. By Saturday morning, bioRxiv had placed a special warning on all papers about coronavirus. Later Saturday, the authors commented on their paper, saying they were withdrawing it. And on Sunday, a more formal retraction appeared: “This paper has been withdrawn by its authors. They intend to revise it in response to comments received from the research community on their technical approach and their interpretation of the results.”

All of that happened before a single news outlet with any reach covered the paper, as best we can tell. But none of it was quite fast enough for some critics. “This is why preprints can be bad,” one scientist, Michael Shiloh, said on Twitter, noting that he himself had used bioRxiv to post preprints. “What bugs me about this preprint is that had this manuscript undergone legitimate peer review, these flaws would have led to a swift rejection and it wouldn’t be contributing to the conspiracy theories and fear surrounding this outbreak,” Shiloh continued.

History suggests that Shiloh’s confidence in peer review’s ability to suss out pseudoscience may be a bit misplaced. The fraudulent 1998 paper that set off the vaccine-autism scare was published in The Lancet, one of the world’s leading peer-reviewed medical journals. Other examples — including a paper by an intelligent design advocate questioning the validity of the second law of thermodynamics as it pertained to evolution — abound. Papers claiming a link between autism and vaccines pop up nearly every year.

And even when peer-reviewed journals do realize they’ve been had, retractions can take months or years. The Lancet took 12 years to retract the vaccine-autism paper. Another journal took five years to retract a paper claiming that HIV did not cause AIDS. We could go on, and the list includes papers that have never even been corrected.

Those who claim preprint servers are dangerous because they lack peer review — bioRxiv has a perfunctory screening process — sometimes acknowledge that journals have had to speed up their game to meet the pressures of an outbreak like coronavirus, or SARS in the early part of this century. Angela Cochran, president of the Society for Scholarly Publishing, a trade group for publishers, said on Twitter: “Earlier this week, folks celebrated that coronavirus papers were popping up in preprint servers. Now there is a reminder not to use them to guide clinical practice because they haven’t been reviewed. Journals ARE reviewing coronavirus papers and getting [them] pub’d quickly.”

Kent Anderson, another publishing industry veteran, put it more bluntly: “Journals Win The Coronavirus Race.”

Publishers have been looking for ways to score points against — and shut down — preprints for at least half a century. Journals have speedily published a number of important papers on the new coronavirus already, no doubt. Publishing industry champions are often quick to say that speedy peer review does not mean sloppy peer review — even in cases that require massive corrections.

It now turns out that one of the world’s leading peer-reviewed medical journals, the New England Journal of Medicine, last week published a letter reporting apparent coronavirus transmission from an asymptomatic person, a claim that has since proven wrong. Such letters to the editor are not typically peer-reviewed, or at least not as rigorously as a full study. But it’s a glaring example of how a peer-reviewed journal can end up getting things wrong. Now we get to wait and see how long it takes NEJM to correct the record.

We’ll see whether peer-review champions, who are often unwilling — with some notable and welcome exceptions — to acknowledge how slow and ineffective correction in science can be, note this apparent failure of one of their gatekeepers. Doing so might, after all, make some people question the expensive subscription deals universities agree to with publishers, as well as the article processing charges that can run into the thousands of dollars for open access publications.

Peer review can add a valuable filter. But those who work in publishing seem to be so wedded to the existing process that they can’t admit its flaws — or that it might be a good idea to also embrace preprint servers that could upend their business models. Just like in politics, maybe it’s time to agree that the publishing process is a messy one, and stop using single episodes, free of context, to score points against one’s rivals.

This article has been updated with information about a flawed report in the New England Journal of Medicine.

