Archives


IN THE NEWS: Mother Jones (September 25, 2018)

[From the article, “This Cornell Food Researcher Has Had 13 Papers Retracted. How Were They Published in the First Place?” by Kiera Butler, published in Mother Jones] “In 2015, I wrote a profile of Brian Wansink, a Cornell University behavioral science researcher who…

M Is For Pizza

[From the blog post, “‘Tweeking’: The big problem is not where you think it is” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science] “In her recent article about pizzagate, Stephanie Lee included this hilarious email from Brian Wansink, the…

An Economist’s Journey Into the Replication Crisis

[From the blog “Why We Cannot Trust the Published Empirical Record in Economics and How to Make Things Better” by Sylvain Chabé-Ferret, posted at the blogsite An Economist’s Journey] “A strain of recent results is casting doubt on the soundness of the published empirical results in economics. Economics is…

DID, IV, RCT, and RDD: Which Method Is Most Prone to Selective Publication and p-Hacking?

[From the working paper, “Methods Matter: P-Hacking and Causal Inference in Economics” by Abel Brodeur, Nikolai Cook, and Anthony Heyes] “…Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is…

Things Aren’t Looking That Great in Ecology and Evolution Either

[From a recent working paper entitled “Questionable Research Practices in Ecology and Evolution” by Hannah Fraser, Tim Parker, Shinichi Nakagawa, Ashley Barnett, and Fiona Fidler] “We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable…

IN THE NEWS: BuzzFeed (February 26, 2018)

[From the article, “Sliced and Diced: The Inside Story of How an Ivy League Food Scientist Turned Shoddy Data into Viral Studies” by Stephanie M. Lee in BuzzFeed] “Brian Wansink won fame, funding, and influence for his science-backed advice on…

BLANCO-PEREZ & BRODEUR: Progress in Publishing Negative Results?

[From the working paper, “Publication Bias and Editorial Statement on Negative Findings” by Cristina Blanco-Perez and Abel Brodeur] Prior research points out that there is a selection bias in favor of positive results by editors and referees. In other words,…

DON’T: Aim for significance. DO: Aim for precision

[From the recent working paper, “The statistical significance filter leads to overoptimistic expectations of replicability” by Vasishth, Mertzen, Jäger, and Gelman, posted at PsyArXiv Preprints] “…when power is low, using significance to decide whether to publish a result leads to a proliferation of exaggerated…

MURPHY: Quantifying the Role of Research Misconduct in the Failure to Replicate

[NOTE: This blog is based on the article “HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results?” by Kevin Murphy and Herman Aguinis, recently published in the Journal of Business and Psychology.] The track record for…

Progress in Publishing Negative Results?

[From the working paper, “Publication Bias and Editorial Statement on Negative Findings” by Cristina Blanco-Perez and Abel Brodeur] “In February 2015, the editors of eight health economics journals sent out an editorial statement which aims to reduce the incentives to…
