Have Registered Reports Uncovered Massive Publication Bias? Evidence from Psychology
[Excerpts taken from the preprint, “An excess of positive results: Comparing the standard Psychology literature with Registered Reports” by Anne Scheel, Mitchell Schijen, and Daniël Lakens, posted at PsyArXiv]
“Registered Reports (RRs) are a new publication format…Before collecting data, authors submit a study protocol containing their hypotheses, planned procedures, and analysis pipeline…to a journal. The protocol undergoes peer review, and, if successful, receives ‘in-principle acceptance’, meaning that the journal commits to publishing the final article following data collection, regardless of the statistical significance of the results.”
“The authors then collect and analyse the data and complete the final report. The final report undergoes another round of peer review, but this time only to ensure that the authors adhered to the registered plan and did not draw unjustified conclusions…”
“Registered Reports thus combine an antidote to QRPs (preregistration) with an antidote to publication bias, because studies are selected for publication before their results are known.”
“The goal of our study was to test if Registered Reports in Psychology show a lower positive result rate than articles published in the traditional way (henceforth referred to as ‘standard reports’, SRs), and to estimate the size of this potential difference.”
“For standard reports we downloaded a current version of the Essential Science Indicators (ESI) database…and used Web of Science to search for articles published between 2013 and 2018 with a Boolean search query containing the phrase ‘test* the hypothes*’ and the ISSNs of all 633 journals listed in the ESI Psychiatry /Psychology category. Using the same sample size as Fanelli (2010), we randomly selected 150 papers…”
“For Registered Reports we aimed to include all published Registered Reports in the field of Psychology that tested at least one hypothesis, regardless of whether or not they used the phrase ‘test* the hypothes*’. We downloaded a database of published Registered Reports curated by the Center for Open Science…and excluded papers published in journals that were listed in categories other than ‘Psychiatry/Psychology’ or ‘Multidisciplinary’ in the ESI.”
“Of the 151 entries in the COS Registered Reports database, 55 were excluded because they belonged to a non-Psychology discipline, 12 because we could not verify that they were Registered Reports, and 13 because they did not test hypotheses or contained insufficient information, leaving 71 Registered Reports for the final analysis.”
“146 out of 152 standard reports and 31 out of 71 Registered Reports had positive results…see Fig. 2…this difference…was statistically significant…p < .001.”
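The reported counts can be checked with simple arithmetic. The sketch below computes both positive result rates and the gap between them, and runs a two-proportion z-test (normal approximation) as an illustrative sanity check; the preprint may have used a different test, so treat the test choice here as an assumption.

```python
import math

# Counts reported in the excerpt: 146/152 standard reports (SRs) and
# 31/71 Registered Reports (RRs) had positive results.
sr_pos, sr_n = 146, 152
rr_pos, rr_n = 31, 71

p_sr = sr_pos / sr_n   # ~0.9605
p_rr = rr_pos / rr_n   # ~0.4366
gap = p_sr - p_rr      # ~0.5239, the 52.39% gap cited later in the text

# Two-proportion z-test under the pooled null of equal rates.
p_pool = (sr_pos + rr_pos) / (sr_n + rr_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / sr_n + 1 / rr_n))
z = gap / se
p_value = math.erfc(z / math.sqrt(2))  # two-sided p, via the normal CDF

print(f"SR rate {p_sr:.2%}, RR rate {p_rr:.2%}, gap {gap:.2%}")
print(f"z = {z:.2f}, p = {p_value:.2g}")
```

Even this rough approximation yields p far below .001, consistent with the significance the authors report.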
“We thus accept our hypothesis that the positive result rate in Registered Reports is lower than in standard reports.”

“To explain the 52.39% gap between standard reports and Registered Reports, we must assume some combination of differences in bias, statistical power, or the proportion of true hypotheses researchers choose to examine.”
“Figure 3 visualises the combinations of statistical power and proportion of true hypotheses that would produce the observed positive result rates if the literature were completely unbiased.”
“For example, assuming no publication bias and no QRPs, even if all hypotheses authors of standard reports tested were true, their study designs would need to have more than 90% power for the true effect size. This is highly unlikely, meaning that the standard literature is unlikely to reflect reality.”
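The reasoning behind Figure 3 follows from a standard two-component model: in an unbiased literature, the expected positive result rate is phi * power + (1 - phi) * alpha, where phi is the proportion of true hypotheses tested and alpha is the significance level. A minimal sketch, assuming alpha = .05 (the preprint's exact model may differ):

```python
def expected_positive_rate(phi, power, alpha=0.05):
    """Expected share of significant results in an unbiased literature:
    true hypotheses are detected at the power rate, false ones at alpha."""
    return phi * power + (1 - phi) * alpha

# Even if every standard-report hypothesis were true (phi = 1), matching
# the observed ~96% positive rate would require ~96% statistical power:
print(expected_positive_rate(1.0, 0.9605))  # ~0.96

# The Registered Reports rate of ~43.7% is compatible with more plausible
# values, e.g. 80% power and roughly half of tested hypotheses being true:
print(expected_positive_rate(0.52, 0.80))   # ~0.44
```

The illustrative parameter values (80% power, phi = 0.52) are my own assumptions for the example, not figures from the preprint.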

“It is a priori plausible that Registered Reports are currently used for a population of hypotheses that are less likely to be true: For example, authors may use the format strategically for studies they expect to yield negative results (which would be difficult to publish otherwise).”
“However, assuming over 90% true hypotheses in the standard literature is neither realistic, nor would it be desirable for a science that wants to advance knowledge beyond trivial facts. We thus believe that this factor alone is not sufficient to explain the gap between the positive result rates in Registered Reports and standard reports. Rather, the numbers strongly suggest a reduction of publication bias and/or Type-1 error inflation in the Registered Reports literature.”
To read the article, click here.