Archives


Be a Part of the Next Big Thing! Join the Multi100 Project

The Multi100 project is a crowdsourced empirical project aiming to estimate how robust published results and conclusions in the social and behavioral sciences are to analysts' analytical choices. The project will involve more than 200 researchers. The Center for Open Science…

Read More

Your Definition of Replication is (Probably) Wrong

[From the preprint “What is Replication?” by Brian Nosek and Tim Errington, posted at MetaArXiv Preprints] “According to common understanding, replication is repeating a study’s procedure and observing whether the prior finding recurs… This definition of replication is intuitive, easy to…

Read More

A Unified Framework for Quantifying Scientific Credibility?

[From the abstract of the paper “A Unified Framework to Quantify the Credibility of Scientific Findings” by Etienne LeBel, Randy McCarthy, Brian Earp, Malte Elson, and Wolf Vanpaemel, published in Advances in Methods and Practices in Psychological Science] “…we…

Read More

CAMPBELL: Is the AER Replicable? And Is It Robust? Evidence from a Class Project

As part of a major replication and robustness project covering articles in the American Economic Review, this fall I assigned students in my Master’s Macro course at the New Economic School (Moscow) to replicate and test the robustness of macro papers…

Read More