REED: How “Open Science” Can Discourage Good Science, And What Journals Can Do About It
In a recent series of tweets, Kaitlyn Werner shares her experience of having a paper rejected after she submitted it to a journal along with all her data and code. The journal rejected the paper because a reviewer looked over the data and had “a hunch” that there was a mistake.
Werner states that she was just about to change her stance on open science when, after checking her data and code several times, she realized the reviewer was right: there was a mistake in the coding of the data.
The lesson she learned from this experience?
“Fortunately, I think this error will actually make my paper a lot stronger. And as upset that I am about the 3 months of review that are now lost, I am happy to know that you didn’t publish a misleading paper. And from now on, I will always share my data.”
To read her full set of tweets, click here.
But there is another lesson here. If papers accompanied by data and code are more likely to be rejected (because they give reviewers more things to find fault with), then they face a higher standard for getting published. And if one believes that making data and code public makes researchers more careful, so that the associated research is higher quality and more likely to be “true”, then “open science” will effectively discriminate against higher-quality research and tilt the playing field towards lower-quality research.
In this particular case, the journal’s actions were not compatible with good science.
If journals don’t want to discourage good science, and if some papers are submitted with data and code while others are not, then at the very least journals should create a level playing field: papers with data and code should not face a higher threshold for acceptance than papers without.
One way to do that is to instruct reviewers never to reject a paper solely because of a mistake found in the data and code. If a reviewer finds a mistake, but the rest of the paper seems publishable, the journal should allow the author to resubmit the research with corrected data and code.
Further, if journals wanted to tilt the playing field in favor of good science, they could build in a higher probability of acceptance for papers that supply data and code. This is a reasonable policy if one believes such papers tend to be higher quality: researchers who make their data and code transparent know they run a higher risk of having their mistakes uncovered. As a result, they will go to extra lengths to make sure their research is mistake-free and “true”.
Kaitlyn Werner is a noble scientist who cares about truth more than getting a publication in a prestigious journal. The lesson she drew from her experience made her more committed to open science.
However, if open science is to lead to better science, journals are going to have to figure out how to avoid penalizing open science practices.
Bob Reed is Professor of Economics at the University of Canterbury in New Zealand and co-founder of The Replication Network. He can be contacted at firstname.lastname@example.org.