Archives


Oh No! Not Again!

[From the article “Push button replication: Is impact evaluation evidence for international development verifiable?” by Benjamin Wood, Rui Müller, and Annette Brown, published in PLoS ONE] “…We drew a sample of articles from the ten journals that published the most…

Excellent, Cross-Disciplinary Overview of Scientific Reproducibility in the Stanford Encyclopedia of Philosophy

[From the article “Reproducibility of Scientific Results”, by Fiona Fidler and John Wilcox, published in The Stanford Encyclopedia of Philosophy] “This review consists of four distinct parts. First, we look at the term “reproducibility” and related terms like “repeatability” and…

How Should One Statistically Analyse a Replication? It Depends.

[From the preprint, “Statistical Analyses for Studying Replication: Meta-Analytic Perspectives” by Larry Hedges and Jacob Schauer, forthcoming in Psychological Methods] “Formal empirical assessments of replication have recently become more prominent in several areas of science, including psychology. These assessments have…

Intro to Open Science in 8 Easy Steps

[From the working paper, “8 Easy Steps to Open Science: An Annotated Reading List” by Sophia Crüwell et al., posted at PsyArXiv Preprints] “In this paper, we provide a comprehensive and concise introduction to open science practices and resources that can help…

Modelling Reproducibility

[From the preprint “A Model-Centric Analysis of Openness, Replication, and Reproducibility”, by Bert Baumgaertner, Berna Devezer, Erkan Buzbas, and Luis Nardin, posted at arXiv.org] “In order to clearly specify the conditions under which we may or may not obtain reproducible results,…

REED: An Update on the Progress of Replications in Economics

[This post is based on a presentation by Bob Reed at the Workshop on Reproducibility and Integrity in Scientific Research, held at the University of Canterbury, New Zealand, on October 26, 2018] In 2015, Duvendack, Palmer-Jones, and Reed (DPJ&R) published…

HIRSCHAUER et al.: Why replication is a nonsense exercise if we stick to dichotomous significance thinking and neglect the p-value’s sample-to-sample variability

[This blog is based on the paper “Pitfalls of significance testing and p-value variability: An econometrics perspective” by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker, Statistics Surveys 12(2018): 136-172.] Replication studies are often regarded as the means to…

ALL INVITED: Workshop on Reproducibility and Integrity in Scientific Research

DATE: Friday 26 October
PLACE: University of Canterbury, Business School, Meremere, Room 236, Christchurch, NEW ZEALAND
REGISTRATION (important for catering purposes): email tom.coupe@canterbury.ac.nz
COST: Nada ($0)
Supported by the University of Canterbury Business School Research Committee.
OVERVIEW: There is more…

And How Are Things Going In Political Science?

[From the working paper “Why Too Many Political Science Findings Cannot be Trusted and What We Can Do About It” by Alexander Wuttke, posted at SocArXiv Papers] “…this article reviews the meta-scientific evidence with a focus on quantitative political science…

VASISHTH: The Statistical Significance Filter Leads To Overoptimistic Expectations of Replicability

[This blog draws on the article “The statistical significance filter leads to overoptimistic expectations of replicability” by Shravan Vasishth, Daniela Mertzen, Lena A. Jäger, and Andrew Gelman, published in the Journal of Memory and Language, 103 (2018): 151-175. An open…
