Archives


DON’T: Aim for significance. DO: Aim for precision

[From the recent working paper, “The statistical significance filter leads to overoptimistic expectations of replicability” by Vasishth, Mertzen, Jäger, and Gelman, posted at PsyArXiv Preprints] “…when power is low, using significance to decide whether to publish a result leads to a proliferation of exaggerated…
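
The mechanism is easy to see in a small simulation (our sketch, not the paper’s code; the effect size and sample size below are made-up illustrations): when power is low, the estimates that clear the p < .05 bar are systematically larger than the truth.

```python
import numpy as np

# Minimal sketch of the significance filter: many low-powered studies of
# the same small true effect, but only the significant ones get "published."
rng = np.random.default_rng(1)
true_effect, sd, n = 0.2, 1.0, 25        # illustrative numbers, not the paper's
se = sd / np.sqrt(n)                     # standard error = 0.2

estimates = rng.normal(true_effect, se, 100_000)
significant = np.abs(estimates / se) > 1.96   # two-sided p < .05

print(f"power:                     {significant.mean():.2f}")
print(f"mean estimate (all):       {estimates.mean():.2f}")
print(f"mean estimate (published): {estimates[significant].mean():.2f}")
```

With these numbers the filter more than doubles the apparent effect, which is exactly why a literature built on significant results over-promises on replication.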

Read More

MURPHY: Quantifying the Role of Research Misconduct in the Failure to Replicate

[NOTE: This blog is based on the article “HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results?” by Kevin Murphy and Herman Aguinis, recently published in the Journal of Business and Psychology.] The track record for…

Read More

Progress in Publishing Negative Results?

[From the working paper, “Publication Bias and Editorial Statement on Negative Findings” by Cristina Blanco-Perez and Abel Brodeur] “In February 2015, the editors of eight health economics journals sent out an editorial statement which aims to reduce the incentives to…

Read More

2 Humps = P-Hacking + Publication Bias?

In a recent blog post at Simply Statistics, Jeff Leek announced a new R package called tidypvals: “The tidypvals package is an effort to find previous collections of published p-values, synthesize them, and tidy them into one analyzable data set.” In a preview…
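
Why two humps? Genuine effects pile p-values up near zero, while p-hacking piles them up just under .05; a publication filter then discards most of everything else. Here is a toy mixture that reproduces the shape (made-up studies, not the tidypvals data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def honest_p(effect=0.5, n=40):
    """One reasonably powered study of a real effect, analyzed as planned."""
    return stats.ttest_1samp(rng.normal(effect, 1, n), 0).pvalue

def hacked_p(batch=10, max_batches=10):
    """A null effect with optional stopping: keep adding data until p < .05."""
    data = rng.normal(0, 1, batch)
    for _ in range(max_batches - 1):
        if stats.ttest_1samp(data, 0).pvalue < 0.05:
            break
        data = np.concatenate([data, rng.normal(0, 1, batch)])
    return stats.ttest_1samp(data, 0).pvalue

pvals = [honest_p() for _ in range(3000)] + [hacked_p() for _ in range(3000)]
published = [p for p in pvals if p < 0.05]    # the publication filter

counts, edges = np.histogram(published, bins=10, range=(0, 0.05))
for lo, c in zip(edges[:-1], counts):
    print(f"p in ({lo:.3f}, {lo + 0.005:.3f}]: {'#' * (c // 60)}")
```

The printed histogram shows one hump hugging zero and a second lump crowding the .05 line.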

Read More

If At First You Don’t Succeed, Change Alpha

In a recent working paper, posted on PsyArXiv Preprints, Daniel Benjamin, James Berger, Magnus Johannesson, Brian Nosek, Eric-Jan Wagenmakers, and 67 other authors(!) argue for a stricter standard of statistical significance for studies claiming new discoveries.  In their words: “…we…
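
Their core worry is arithmetic: when most tested hypotheses are false, p < .05 lets through far more false positives than people intuit. A back-of-envelope version of that argument (the prior odds and power below are our illustrative assumptions, not numbers from the paper):

```python
# False positive risk: P(H0 is true | result is significant).
# prior_true and power are illustrative assumptions; for simplicity power
# is held fixed across alphas, though a stricter alpha lowers it in practice.
def false_positive_risk(alpha, power=0.8, prior_true=0.1):
    true_pos = prior_true * power          # real effects that reach significance
    false_pos = (1 - prior_true) * alpha   # nulls that slip through
    return false_pos / (true_pos + false_pos)

for alpha in (0.05, 0.005):
    print(f"alpha = {alpha:5}: false positive risk = "
          f"{false_positive_risk(alpha):.0%}")
```

With these inputs, dropping alpha from .05 to .005 cuts the false positive risk from roughly a third to about 5%.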

Read More

IN THE NEWS: NY Times (MAY 29, 2017)

[From the article “Science Needs a Solution for the Temptation of Positive Results” by Aaron E. Carroll at The New York Times/The Upshot website] “Science has a reproducibility problem. … As long as the academic environment has incentives for…

Read More

SCHÖNBRODT: Learn to p-Hack Like the Pros!

(NOTE: This ironic blog post was originally published on http://www.nicebread.de/introducing-p-hacker/)  My Dear Fellow Scientists! “If you torture the data long enough, it will confess.” This aphorism, attributed to Ronald Coase, has sometimes been used in a disrespectful manner, as if…
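
The post is ironic, but the tricks it showcases have real teeth. One of the simplest, measuring several dependent variables and reporting whichever test comes out best, is easy to quantify in a sketch (independent null DVs and made-up sample sizes; nothing here is taken from the p-hacker app itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def best_p(n_dvs, n=20):
    """Smallest p over n_dvs independent null outcomes (two-group t-tests)."""
    ps = [stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
          for _ in range(n_dvs)]
    return min(ps)

sims = 2_000
for dvs in (1, 3, 5, 10):
    fpr = np.mean([best_p(dvs) < 0.05 for _ in range(sims)])
    print(f"{dvs:>2} DVs: false positive rate ~ {fpr:.0%}")
```

With ten outcomes to choose from, the nominal 5% error rate climbs to roughly 40%, no torture instruments required.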

Read More

Don’t Have Time To Do a Replication? Have You Considered p-Curves?

So another study finds that X affects Y, and you are a sufficiently cynical TRN reader to wonder whether the authors p-hacked their way to that result.  Don’t have time (or the incentive) to do a replication?  You…
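
The idea behind p-curve (Simonsohn, Nelson, and Simmons) is that the shape of a literature’s significant p-values is informative: true effects produce a right-skewed curve piling up below .01, while p-hacked null effects tend to pile up just under .05. A bare-bones version of the diagnostic, using hypothetical p-values (the published procedure adds formal tests we skip here):

```python
def p_curve(pvals, bins=(0.01, 0.02, 0.03, 0.04, 0.05)):
    """Bin a literature's significant p-values into a simple p-curve."""
    sig = sorted(p for p in pvals if p < 0.05)
    lo = 0.0
    for hi in bins:
        count = sum(lo < p <= hi for p in sig)
        print(f"p in ({lo:.2f}, {hi:.2f}]: {'#' * count}")
        lo = hi

# Hypothetical p-values collected from published studies of X -> Y:
p_curve([0.003, 0.011, 0.048, 0.041, 0.049, 0.032, 0.002, 0.044])
```

A left-leaning pile-up like this one, crowding the .05 boundary, is the signature to worry about; the real p-curve app at p-curve.com runs the full version.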

Read More

John Oliver and Last Week Tonight on Replications and Scientific Reliability

How does one know when replication has hit the big time?  When JOHN OLIVER and LAST WEEK TONIGHT do an entire episode on it.  For readers of TRN, much of what he talks about will be familiar.  Just a lot funnier.  Check…

Read More

IN THE NEWS: The Economist (21 January 2016)

(FROM THE ARTICLE “Are Results in Top Journals To Be Trusted?”)  A paper recently published in the American Economic Journal, entitled “Star Wars: The Empirics Strike Back”, “analyses 50,000 tests published between 2005 and 2011 in three top American journals. It finds that the…

Read More