Archives


DID, IV, RCT, and RDD: Which Method Is Most Prone to Selective Publication and p-Hacking?

[From the working paper, “Methods Matter: P-Hacking and Causal Inference in Economics” by Abel Brodeur, Nikolai Cook, and Anthony Heyes] “…Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is…

Read More

Things Aren’t Looking That Great in Ecology and Evolution Either

[From a recent working paper entitled “Questionable Research Practices in Ecology and Evolution” by Hannah Fraser, Tim Parker, Shinichi Nakagawa, Ashley Barnett, and Fiona Fidler] “We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable…

Read More

IN THE NEWS: BuzzFeed (February 26, 2018)

[From the article, “Sliced and Diced: The Inside Story of How an Ivy League Food Scientist Turned Shoddy Data into Viral Studies” by Stephanie M. Lee in BuzzFeed] “Brian Wansink won fame, funding, and influence for his science-backed advice on…

Read More

BLANCO-PEREZ & BRODEUR: Progress in Publishing Negative Results?

[From the working paper, “Publication Bias and Editorial Statement on Negative Findings” by Cristina Blanco-Perez and Abel Brodeur] Prior research points to a selection bias among editors and referees in favor of positive results. In other words,…

Read More

DON’T: Aim for significance. DO: Aim for precision

[From the recent working paper, “The statistical significance filter leads to overoptimistic expectations of replicability” by Vasishth, Mertzen, Jäger, and Gelman posted at PsyArXiv Preprints] “…when power is low, using significance to decide whether to publish a result leads to a proliferation of exaggerated…

Read More

MURPHY: Quantifying the Role of Research Misconduct in the Failure to Replicate

[NOTE: This blog is based on the article “HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results?” by Kevin Murphy and Herman Aguinis, recently published in the Journal of Business and Psychology.] The track record for…

Read More

Progress in Publishing Negative Results?

[From the working paper, “Publication Bias and Editorial Statement on Negative Findings” by Cristina Blanco-Perez and Abel Brodeur] “In February 2015, the editors of eight health economics journals sent out an editorial statement which aims to reduce the incentives to…

Read More

2 Humps = P-Hacking + Publication Bias?

In a recent blogpost at Simply Statistics, Jeff Leek announced a new R package called tidypvals: “The tidypvals package is an effort to find previous collections of published p-values, synthesize them, and tidy them into one analyzable data set.” In a preview…
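The “two humps” in the title can be illustrated with a quick simulation. The sketch below is my own illustration (not code from, or output of, the tidypvals package): it mixes null studies and true-effect studies, then applies a crude p-hacking rule and a file drawer, producing one hump of very small p-values from real effects and a second pile-up just under 0.05.

```python
# Hypothetical illustration of a two-humped published p-value distribution:
# real effects produce many tiny p-values, while p-hacking plus publication
# bias piles marginal results up just under the 0.05 threshold.
import math
import random

random.seed(42)

def p_value_two_sided(z):
    """Two-sided p-value for a z-statistic under the normal approximation."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

published = []
for _ in range(5000):
    true_effect = random.random() < 0.3          # assume 30% of studies test a real effect
    z = random.gauss(2.5 if true_effect else 0.0, 1.0)
    p = p_value_two_sided(z)
    if p < 0.05:
        published.append(p)                      # significant: published as-is
    elif p < 0.15 and random.random() < 0.5:
        published.append(0.049)                  # "hacked" just across the threshold
    # everything else stays in the file drawer (publication bias)

near_zero = sum(1 for p in published if p < 0.01)          # hump 1: real effects
just_under = sum(1 for p in published if 0.04 <= p < 0.05)  # hump 2: threshold pile-up
mid_range = sum(1 for p in published if 0.02 <= p < 0.03)   # valley between the humps
print(near_zero, mid_range, just_under)
```

The shares of true effects (30%) and of hacked marginal results (50%) are arbitrary assumptions chosen only to make the two humps visible in the printed counts.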

Read More

If At First You Don’t Succeed, Change Alpha

In a recent working paper, posted on PsyArXiv Preprints, Daniel Benjamin, James Berger, Magnus Johannesson, Brian Nosek, Eric-Jan Wagenmakers, and 67 other authors(!) argue for a stricter standard of statistical significance for studies claiming new discoveries. In their words: “…we…

Read More

IN THE NEWS: NY Times (May 29, 2017)

[From the article “Science Needs a Solution for the Temptation of Positive Results” by Aaron E. Carroll at The New York Times/The Upshot website] “Science has a reproducibility problem. … As long as the academic environment has incentives for…

Read More