Archives


More Pre-Registration, More Null Results

[From the article, “First analysis of ‘pre-registered’ studies shows sharp rise in null findings” by Matthew Warren, published at Nature.com] “Studies that fail to find a positive result are often filed away, never to see the light of day, which…

Read More

How Many Biases? Let Us Count the Ways

[From the article “Congratulations. Your Study Went Nowhere” by Aaron Carroll, published at http://www.nytimes.com] “When we think of biases in research, the one that most often makes the news is a researcher’s financial conflict of interest. But another bias, one possibly even more…

Read More

An Economist’s Journey Into the Replication Crisis

[From the blog “Why We Cannot Trust the Published Empirical Record in Economics and How to Make Things Better” by Sylvain Chabé-Ferret, posted at the blogsite An Economist’s Journey] “A strain of recent results is casting doubt on the soundness of the published empirical results in economics. Economics is…

Read More

And How Are Things Going In Political Science?

[From the working paper “Why Too Many Political Science Findings Cannot be Trusted and What We Can Do About It” by Alexander Wuttke, posted at SocArXiv Papers] “…this article reviewed the meta-scientific evidence with a focus on the quantitative political science…

Read More

DID, IV, RCT, and RDD: Which Method Is Most Prone to Selective Publication and p-Hacking?

[From the working paper, “Methods Matter: P-Hacking and Causal Inference in Economics” by Abel Brodeur, Nikolai Cook, and Anthony Heyes] “…Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is…

Read More

Oh No! Not Zebra Finches Too!

[From the article, “Replication Failures Highlight Biases in Ecology and Evolution Science” by Yao-Hua Law, published at http://www.the-scientist.com] “As robust efforts fail to reproduce findings of influential zebra finch studies from the 1980s, scientists discuss ways to reduce bias in such…

Read More

Pre-Registration? Meet Publication Bias

[From the blog post, “What Is Preregistration For?” by Neuroskeptic, published at Discover Magazine] “The paper reports on five studies which all address the same general question. Of these, Study #3 was preregistered and the authors write that it was performed after…

Read More

Progress in Publishing Negative Results?

[From the working paper, “Publication Bias and Editorial Statement on Negative Findings” by Cristina Blanco-Perez and Abel Brodeur] “In February 2015, the editors of eight health economics journals sent out an editorial statement which aims to reduce the incentives to…

Read More

HARKing is Bad, But Which Kind of HARKing is Worse?

[From the article “HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results?” by Kevin Murphy and Herman Aguinis, published in the Journal of Business and Psychology] “The practice of hypothesizing after results are known (HARKing) has…

Read More

REED: Why Lowering Alpha to 0.005 is Unlikely to Help

[This blog is based on the paper, “A Primer on the ‘Reproducibility Crisis’ and Ways to Fix It” by the author] A standard research scenario is the following: A researcher is interested in knowing whether there is a relationship between…

Read More