Here are some things you might want to be aware of when thinking about doing a replication.
1) You want to do a replication? Great, you are doing science a real service! As Reed (2018) shows, ‘a single replication, if it confirms the original study, can have a large impact on the probability that the estimated effect is real.’ At the workshop, Bob Reed (University of Canterbury Business School) gave an overview of the current state of replication in economics; you can read the blog post based on Bob’s presentation on The Replication Network.
2) That being said, Jeff Miller (University of Otago, Psychology) pointed out that, on purely statistical grounds, the chance of replicating a significant result (the ‘aggregate replication probability’) is only 36%. Hence, failure to replicate should not come as a surprise. Moreover, again for purely statistical reasons, ‘If your effect is real, you will get about 60% significant results – if not, you will get about 5% significant results’. Hence, there is a sizeable chance that your failed replication would in fact point us away from the truth rather than towards it.
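The ‘60% if real, 5% if not’ point is easy to see in a small simulation. The sketch below repeatedly runs a two-sided z-test at the 5% level; the sample size and true effect size are illustrative choices (not from Miller’s talk), picked so that the test has roughly 60% power when the effect is real.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def significant_share(true_mean, n=100, sims=20000, alpha=0.05, seed=1):
    """Fraction of simulated studies that reject H0: mean = 0.

    Each study draws n observations from N(true_mean, 1) and runs a
    two-sided z-test (known sd = 1) at significance level alpha.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        xbar = sum(rng.gauss(true_mean, 1) for _ in range(n)) / n
        z = xbar * math.sqrt(n)          # test statistic
        p = 2 * (1 - norm_cdf(abs(z)))   # two-sided p-value
        hits += p < alpha
    return hits / sims

# Illustrative effect size 0.2213 gives ~60% power with n = 100.
print(significant_share(0.2213))  # ~0.60: real effect, significant ~60% of the time
print(significant_share(0.0))     # ~0.05: no effect, significant ~5% of the time
```

So even when the original effect is real, an exact rerun of a typically-powered study fails to reach significance about 40% of the time.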
3) As a first step on your replication journey, you might want to try a Push Button Replication (PBR) – Benjamin Wood (Integra) pointed us to 3ie’s protocol for PBRs, which can be found here.
4) But are you sure you want to do a replication? Be aware that replication is just one way of trying to find the truth. Brian Haig (University of Canterbury, Psychology) suggested considering ‘methodological triangulation’ as an alternative, where ‘methodological triangulation involves the use of multiple independent methods in order to detect the same property or phenomenon’.
5) With all the focus on statistical significance in replication studies, one might forget that there are alternatives to significance testing too: Philip Schluter (University of Canterbury, Health Sciences) focused on the Bayesian approach, making it clear that subjectivity can be acceptable, as long as one makes clear what one’s prior is.
6) In a similar vein, Arin Basu (University of Canterbury, Health Sciences) focused on the importance of causality. After all, what one really wants to check is causality; that a correlation is replicable may not be very interesting if it is not clear what causes the correlation. And hey, maybe you’d want to consider doing a meta-analysis?
7) You still want to do a replication? Practical advice can be found in Annette Brown’s (FHI360) paper, which provides a battery of tests that can be run to check the robustness of the paper you are trying to replicate. In fact, this paper is a great guide for anybody who wants to produce replicable research: rather than waiting for others to replicate your paper, these tests could and should be done by the authors of the original papers! This is another good reason to do a replication: it will make you a better researcher. A summary of Annette’s paper can be found in her blog post on The Replication Network.
8) But are you really sure you want to do a replication? The work of Thomas Pfeiffer (Massey, Biology) shows that betting markets can help predict which papers will replicate. So you might want to consider setting up a betting market on the paper you want to replicate rather than doing the actual replication.
9) Finally, to avoid letting results drive your research, Eric Vanman (University of Queensland, Psychology) suggested pre-registering what one plans to do (you can do this here). This applies to replications too: a replication plan will help you resist the temptation to keep searching for issues until you can show a paper cannot be replicated.
Tom Coupé is an Associate Professor of Economics at the University of Canterbury, New Zealand.
An example from Reed (2018): ‘The values in the table show the original PSP of 0.20 gets updated depending on whether the replication was unsuccessful or successful. Following an unsuccessful replication, the post‐replication probability that a relationship exists falls from 0.20 to 0.12. However, a successful replication raises the probability from 0.20 to 0.71.’
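Reed’s numbers follow from a straightforward application of Bayes’ rule. The sketch below is not Reed’s own code; it assumes a 5% significance level and a statistical power of 0.49 (assumed values, not stated in the quote, but they happen to reproduce the figures quoted above from a prior of 0.20).

```python
# Bayes-rule sketch of post-replication probabilities as in Reed (2018).
# Assumed inputs: significance level alpha = 0.05, power = 0.49.

def post_replication_prob(prior, replicated, power=0.49, alpha=0.05):
    """P(effect is real | replication outcome), by Bayes' rule.

    A significant replication occurs with probability `power` if the
    effect is real and `alpha` if it is not; an insignificant one with
    the complementary probabilities.
    """
    if replicated:
        like_real, like_null = power, alpha          # P(significant | real/null)
    else:
        like_real, like_null = 1 - power, 1 - alpha  # P(insignificant | real/null)
    return prior * like_real / (prior * like_real + (1 - prior) * like_null)

print(round(post_replication_prob(0.20, True), 2))   # successful: 0.71
print(round(post_replication_prob(0.20, False), 2))  # unsuccessful: 0.12
```

Note the asymmetry: under these inputs a successful replication moves the probability much further (0.20 to 0.71) than a failed one (0.20 to 0.12), because a significant result is far more diagnostic than an insignificant one when power is modest.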