Archives


REED: An Update on the Progress of Replications in Economics

[This post is based on a presentation by Bob Reed at the Workshop on Reproducibility and Integrity in Scientific Research, held at the University of Canterbury, New Zealand, on October 26, 2018] In 2015, Duvendack, Palmer-Jones, and Reed (DPJ&R) published…

Read More

HIRSCHAUER et al.: Why replication is a nonsense exercise if we stick to dichotomous significance thinking and neglect the p-value’s sample-to-sample variability

[This blog is based on the paper “Pitfalls of significance testing and p-value variability: An econometrics perspective” by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker, Statistics Surveys 12(2018): 136-172.] Replication studies are often regarded as the means to…

Read More

ALL INVITED: Workshop on Reproducibility and Integrity in Scientific Research

DATE: Friday 26 October. PLACE: University of Canterbury, Business School, Meremere, Room 236, Christchurch, NEW ZEALAND. REGISTRATION (important for catering purposes): email tom.coupe@canterbury.ac.nz. COST: Nada ($0). Supported by the University of Canterbury Business School Research Committee. OVERVIEW: There is more…

Read More

And How Are Things Going In Political Science?

[From the working paper “Why Too Many Political Science Findings Cannot be Trusted and What We Can Do About It” by Alexander Wuttke, posted at SocArXiv Papers] “…this article reviewed the meta-scientific evidence with a focus on the quantitative political science…

Read More

VASISHTH: The Statistical Significance Filter Leads To Overoptimistic Expectations of Replicability

[This blog draws on the article “The statistical significance filter leads to overoptimistic expectations of replicability”, authored by Shravan Vasishth, Daniela Mertzen, Lena A. Jäger, and Andrew Gelman, published in the Journal of Memory and Language, 103, 151-175, 2018. An open…

Read More

Significant Effects From Low-Powered Studies Will Be Overestimates

[From the article, “The statistical significance filter leads to overoptimistic expectations of replicability” by Shravan Vasishth, Daniela Mertzen, Lena Jäger, and Andrew Gelman, published in the Journal of Memory and Language] Highlights: “When low-powered studies show significant effects, these will…

Read More

Reproducibility. You Can Do This.

[From the paper, “Practical Tools and Strategies for Researchers to Increase Replicability” by Michele Nuijten, forthcoming in Developmental Medicine & Child Neurology] “Several large-scale problems are affecting the validity and reproducibility of scientific research. … Many of the suggested solutions are…

Read More

A Unified Framework for Quantifying Scientific Credibility?

[From the abstract of the paper, “A Unified Framework to Quantify the Credibility of Scientific Findings”, by Etienne LeBel, Randy McCarthy, Brian Earp, Malte Elson, and Wolf Vanpaemel, published in the journal, Advances in Methods and Practices in Psychological Science] “…we…

Read More

Oh No! Not Zebra Finches Too!

[From the article, “Replication Failures Highlight Biases in Ecology and Evolution Science” by Yao-Hua Law, published at http://www.the-scientist.com] “As robust efforts fail to reproduce findings of influential zebra finch studies from the 1980s, scientists discuss ways to reduce bias in such…

Read More

Making Replications Mainstream: It Takes a Research Community

Just in case you missed it, the latest issue of Behavioral and Brain Sciences includes an article by Rolf Zwaan, Alexander Etz, Richard Lucas, and Brent Donnellan entitled “Making Replications Mainstream”. It is something of a tour de force by four prominent…

Read More