Academia has been abuzz in recent years with new initiatives focusing on research transparency, replication and reproducibility of research. Notable in this regard are the Berkeley Initiative for Transparency in the Social Sciences, and the Reproducibility Initiative in which PLOS and Science Exchange are involved, but there are many others. Psychology and political science have seen a number of new initiatives that are shaking up the scientific research and publication process. In economics, there are laudable endeavors by The Institute for New Economic Thinking, which funds the “Replication in Economics” project at Göttingen University; and 3ie, which launched a replication programme that includes funding for replication studies. And, of course, there is The Replication Network, which started a little over a year ago.
In this blog I would like to highlight a particular initiative that is concerned with the distorted incentive structure of the academic peer-review process. It is well known that the scientific literature rewards novel, ground-breaking findings, an incentive that is sometimes at odds with how the scientific research process actually works. Novel findings are exciting, but we can only judge the true effect of something if we amass evidence from a variety of sources, and that evidence might not always be novel or exciting.
This is where the idea of registered reports, and relatedly, registered replication reports, comes in. The way the registered report model works is very simple: researchers submit a report setting out the research questions and proposed methodology before embarking on any data collection and analysis. This report is peer-reviewed to ensure certain quality criteria are met. Once the submission is accepted, publication in the journal where it was accepted is almost guaranteed, assuming the researchers have followed through with their registered methodology.
This initiative is the brainchild of Alex Holcombe, Bobbie Spellman and Daniel Simons and was started in 2013 in collaboration with the journal Perspectives on Psychological Science. The first registered replication report was published in 2014. The Center for Open Science has actively promoted registered reports. According to Daniel Simons, Professor of Psychology at the University of Illinois, “Registered reports eliminate the bias against negative results in publishing because the results are not known at the time of review”. Adds Chris Chambers, chair of the COS-associated Registered Reports Committee, “Because the study is accepted in advance, the incentives for authors change from producing the most beautiful story to producing the most accurate one.” The idea of registered reports has quickly gained traction: Brian Nosek, Professor of Psychology at the University of Virginia, is now piloting registered reports with over 20 journals.
A related initiative is “results-free” reviewing (RFR), in which studies are reviewed without reviewers knowing the results of the analysis. The journal Comparative Political Studies recently published a special issue featuring a pilot study of RFR.
The move towards registered replication reports somewhat mirrors that of registered trials in the medical sciences, where trials are registered before the study begins in order to minimise reporting biases and enhance transparency and accountability (see here). 3ie has established a similar registry with the aim of registering international development impact evaluations (see here).
All these initiatives are important in the quest for more research transparency. The medical sciences, psychology, and political science have been at the forefront of these efforts. It would be good to see similar initiatives in economics.
Maren Duvendack is a lecturer in development economics at the University of East Anglia and co-organizer of The Replication Network.