Category: GUEST BLOGS


BROWN: How to Conduct a Replication Study – Which Tests, Not Witch Hunts

[This post is based on a presentation by Annette Brown at the Workshop on Reproducibility and Integrity in Scientific Research, held at the University of Canterbury, New Zealand, on October 26, 2018. It is cross-published on FHI 360’s R&E Search for…


REED: An Update on the Progress of Replications in Economics

[This post is based on a presentation by Bob Reed at the Workshop on Reproducibility and Integrity in Scientific Research, held at the University of Canterbury, New Zealand, on October 26, 2018] In 2015, Duvendack, Palmer-Jones, and Reed (DPJ&R) published…


VLAEMINCK & PODKRAJAC: Do Economics Journals Enforce Their Data Policies?

The findings of numerous replication studies in economics have raised serious concerns regarding the credibility and reliability of published applied economic research. The literature suggests several explanations for these findings: Beyond missing incentives and rewards for the disclosure…


MUELLER-LANGER et al.: Replication in Economics

[This blog is based on the article “Replication studies in economics—How many and which papers are chosen for replication, and why?” by Frank Mueller-Langer, Benedikt Fecher, Dietmar Harhoff, and Gert Wagner, published in the journal Research Policy] Academia is…


HIRSCHAUER et al.: Why replication is a nonsense exercise if we stick to dichotomous significance thinking and neglect the p-value’s sample-to-sample variability

[This blog is based on the paper “Pitfalls of significance testing and p-value variability: An econometrics perspective” by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker, Statistics Surveys 12(2018): 136-172.] Replication studies are often regarded as the means to…


GOODMAN: Systematic Replication May Make Many Mistakes

Replication seems a sensible way to assess whether a scientific result is right. The intuition is clear: if a result is right, you should get a significant result when repeating the work; if it’s wrong, the result should be…


VASISHTH: The Statistical Significance Filter Leads To Overoptimistic Expectations of Replicability

[This blog draws on the article “The statistical significance filter leads to overoptimistic expectations of replicability”, authored by Shravan Vasishth, Daniela Mertzen, Lena A. Jäger, and Andrew Gelman, published in the Journal of Memory and Language, 103, 151-175, 2018. An open…


BROWN, LAMBERT, & WOJAN: At the Intersection of Null Findings and Replication

Replication is an important topic in economic research, or any social science for that matter. The issue is most important when an analysis is undertaken to inform decisions by policymakers. Drawing inferences from null or insignificant findings is particularly problematic…


REED: How “Open Science” Can Discourage Good Science, And What Journals Can Do About It

In a recent tweet (or series of tweets), Kaitlyn Werner shares her experience of having a paper rejected after posting all her data and code and submitting the paper to a journal. The journal rejected the paper because a…


MENCLOVA: SURE Journal Is Now Open For Submissions!

Is the topic of your paper interesting, your data appropriate and your analysis carefully done – but your results are not “sexy”? If so, please consider submitting your paper to the Series of Unsurprising Results in Economics. SURE is an…
