In a recent article in Slate entitled “The Unintended Consequences of Trying to Replicate Research,” IVAN ORANSKY and ADAM MARCUS from Retraction Watch argue that replications can exacerbate research unreliability. The argument rests on the assumption that publication bias favours confirming replication studies over disconfirming ones. To read more, click here. This is the same argument that Michele Nuijten makes in her guest blog for TRN, which you can read here.
Whether this is a real concern depends on the replication policies at journals. At least two economics journals have publication policies that explicitly state they are neutral towards the conclusions of replication studies. In their “Call for Replication Studies”, Burman et al. state: “Public Finance Review will publish all … kinds of replication studies, those that validate and those that invalidate previous research” (see here). And the journal Economics: The Open-Access, Open-Assessment E-Journal states: “The journal will publish both confirmations and disconfirmations of original studies. The only consideration will be quality of the replicating study” (see here).
Further, in their recent study, “Replications in Economics: A Progress Report” (see here), Duvendack et al. find that most published replication studies in economics disconfirm the original research. So while it is possible that replications could make things worse, perhaps this is more a worry in theory than in practice. At least in economics.
If publication bias works in the opposite way for replication studies (a disconfirming study is MORE likely to be published), you’d expect a string of opposing replication studies whose combined estimate at some point converges to the true population effect.
And if publication bias does not affect replication studies, you could argue that the best course of action is to discard the original study (which IS affected by publication bias) entirely and focus only on the (unbiased) replication studies.
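To make this second scenario concrete, here is a minimal simulation sketch in Python (not from the original post; the true effect, sample sizes, and number of replications are illustrative assumptions, not values from any real literature). An original study is “published” only if it is significant, while replications are published regardless of outcome; averaging only the replications then recovers the true effect, whereas pooling in the original biases the estimate upward.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_D = 0.2          # assumed true standardized effect (illustrative)
SE = np.sqrt(2 / 30)  # approx. standard error of d with n = 30 per group
CRIT = 1.96 * SE      # two-sided 5% significance cutoff for the observed d
N_REPS = 5            # replications per original study (illustrative)
N_SIM = 10_000        # Monte Carlo iterations

pooled, reps_only = [], []
for _ in range(N_SIM):
    # Original study: redraw until significant -- a crude publication-bias filter.
    original = rng.normal(TRUE_D, SE)
    while abs(original) < CRIT:
        original = rng.normal(TRUE_D, SE)
    # Replications are published regardless of outcome (no bias).
    reps = rng.normal(TRUE_D, SE, size=N_REPS)
    reps_only.append(reps.mean())
    pooled.append(np.append(reps, original).mean())

print(f"true effect:                 {TRUE_D:.3f}")
print(f"replication-only estimate:   {np.mean(reps_only):.3f}")  # ~ the true effect
print(f"estimate incl. original:     {np.mean(pooled):.3f}")     # biased upward
```

The redraw-until-significant loop is of course only the crudest possible model of publication bias, but it is enough to show the mechanism.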
In both these cases, my theory about the Replication Paradox, in which I argue that combining multiple studies can make effect size estimates WORSE (see my guest blog on this site), does not hold. And that would be great! It would mean that science’s self-correcting mechanism does work.
However, at least in psychology, most replication studies are not explicitly identified as such: only about 2% of the published literature is explicitly labeled a replication. At the same time, the average social psychology paper contains 5-6 studies on the same subject (which you could consider a set of replications). In these multi-study papers, and in meta-analyses (which are basically also combinations of replications), we find overwhelming evidence of publication bias that favours significant effects in the same direction.
In such a scenario, combining the information from several studies can actually DECREASE the accuracy of the effect size estimate, since all studies are affected by publication bias and probably contain overestimated effects. If this holds, you obtain the most accurate estimate by considering only studies with high power and discarding underpowered studies entirely.
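The same kind of sketch illustrates this scenario, under the same illustrative assumptions as above: when every published study, original or replication, has to pass the significance filter, the mean of several underpowered studies overshoots badly, while a single high-powered study subjected to the same filter lands close to the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_D = 0.2    # assumed true standardized effect (illustrative)
N_SIM = 10_000  # Monte Carlo iterations

def published_d(n_per_group):
    """Observed d of one study, redrawn until significant (publication-bias filter)."""
    se = np.sqrt(2 / n_per_group)
    d = rng.normal(TRUE_D, se)
    while abs(d) < 1.96 * se:
        d = rng.normal(TRUE_D, se)
    return d

meta_small, single_large = [], []
for _ in range(N_SIM):
    # Mean of five underpowered but "significant" studies (n = 25 per group)...
    meta_small.append(np.mean([published_d(25) for _ in range(5)]))
    # ...versus one high-powered study (n = 500 per group) under the same filter.
    single_large.append(published_d(500))

print(f"true effect:                      {TRUE_D:.3f}")
print(f"mean of 5 biased small studies:   {np.mean(meta_small):.3f}")   # far too high
print(f"one biased high-powered study:    {np.mean(single_large):.3f}") # close to true
```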
In short: replications can add bias to an effect size estimate, but only if they are underpowered AND affected by publication bias that favours confirming studies.
If it is indeed the case that replication studies in economics are not affected by publication bias, they should definitely be included when you estimate an effect.
Good comments, Michele. As you point out, there are important differences between psychology and economics. It is somewhat ironic that your argument implies that in psychology, where there appears to be much more interest in replications, replications may take us further away from “the truth”; whereas in economics, where replications are not plagued by publication bias, interest in doing them is relatively modest.