Reproducibility in Registered Reports: Lots of Room for Improvement

[From the paper, “Analysis of Open Data and Computational Reproducibility in Registered Reports in Psychology” by Pepijn Obels, Daniel Lakens, Nicholas Coles, & Jaroslav Gottfried, posted at PsyArXiv Preprints]
“Recently, scholars have started to empirically examine the extent to which data is shared with published articles, and, if so, whether it was possible to reproduce the data analysis reported in the published article.”
“Hardwicke et al. (2018) examined the reproducibility of 35 articles published in the journal Cognition. Eleven articles could be reproduced without assistance from the original authors, and 13 articles contained at least one outcome that could not be reproduced even with author assistance.”
“Stockemer, Koehler, and Lentz (2018) analyzed reproducibility in all articles published in 2015 in three political science journals. They … found that for 71 articles for which they received code and data, one could not be reproduced due to a lack of a software license, and 16 articles could not be reproduced with access to the required software. For the remaining articles, 32 could be exactly reproduced, 19 could be reproduced with slight differences, and 3 articles yielded significantly different results.”
“Stodden, Seiler, and Ma (2018) analyzed data availability in the journal Science in 2011-2012 and found that 26 of 204 (or 13%) of articles provided information to retrieve data and/or code without contacting the authors. For all datasets they acquired after e-mailing authors for data and code, 26% was estimated to be computationally reproducible.”
“We set out to examine the data availability and reproducibility in Registered Reports published in psychological science.”
“To find Registered Reports published in psychology we drew from a database of registered reports maintained by the Center for Open Science.”
“…we limited ourselves to statistical software packages that we had experience with (R, SPSS, and JASP) and excluded studies that required expertise in software packages we are not trained in (e.g., dedicated EEG software, Python, Matlab)…”
“We set out to reproduce the 62 papers that met our inclusion criteria.”
“Of these 62 papers … 40 shared the analysis script and data, for which 35 were in SPSS, R or JASP. Of these, the analysis script could be run for 30, of which 19 reproduced all main results.”
“One of the main goals of our project was to identify ways to improve the reproducibility of published articles. … This leads to 4 points researchers in psychology should focus on to improve reproducibility, namely (a) add a codebook to data files, (b) annotate code so it is clear what the code does, and clearly structure code (e.g., using a README) so others know which output analysis code creates, (c) list packages that are required in R and the versions used at the top of your R file, (d) check whether the code you shared still reproduces all analyses after revisions during the peer review process.”
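Recommendation (c) is concrete enough to sketch. Assuming an R-based analysis, a script header along the following lines would declare the required packages, load them, and save the full session information alongside the shared code; the package names and file names below are purely illustrative and not taken from the paper.

## analysis.R -- sketch of a reproducibility-friendly script header
## (package names and file names below are illustrative, not from the paper)
## Packages and versions used for the published analyses:
##   dplyr <version>, lme4 <version>

required <- c("dplyr", "lme4")
missing  <- required[!required %in% rownames(installed.packages())]
if (length(missing) > 0) install.packages(missing)
invisible(lapply(required, library, character.only = TRUE))

## Record the exact computational environment next to the shared code,
## so readers can see the R and package versions that produced the output.
writeLines(capture.output(sessionInfo()), "sessionInfo.txt")

A header like this, combined with a codebook for the data files and a README mapping each script to the output it generates, addresses points (a) through (c) above; point (d) still requires rerunning the code after each round of revisions.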
“The main aim of this article was not to accurately estimate mean reproducibility rates, but to see what current standards are, and how the reproducibility of research articles using the Registered Report format could be improved. … the main contribution of this article is the identification of common issues that can be improved.”
The full paper is available as a preprint at PsyArXiv Preprints.