Archives


Disagreeing With Disagreeing About Abandoning Statistical Significance

[From the preprint “Abandoning statistical significance is both sensible and practical” by Valentin Amrhein, Andrew Gelman, Sander Greenland, and Blakely McShane, available at PeerJ Preprints] “Dr Ioannidis writes against our proposals to abandon statistical significance…” “…we disagree that a statistical…


Don’t Abandon It! Learn (and Teach) to Use It Correctly

[From the paper “The practical alternative to the p-value is the correctly used p-value” by Daniël Lakens, posted at PsyArXiv Preprints] “I do not think it is useful to tell researchers what they want to know. Instead, we should teach…


IN THE NEWS: Undark (March 21, 2019)

[From the article “Stats Experts Plead: Just Say No to P-Hacking” by Dalmeet Singh Chawla, published in Undark] “For decades, researchers have used a statistical measure called the p-value — a widely debated statistic that even scientists find difficult to define — that is…


Special Issue of The American Statistician: “Statistical Inference in the 21st Century: A World Beyond p < 0.05”

[From the introductory editorial “Moving to a World Beyond ‘p < 0.05’” by Ronald Wasserstein, Allen Schirm and Nicole Lazar, published in The American Statistician] “Some of you exploring this special issue of The American Statistician might be wondering if…


HIRSCHAUER et al.: Twenty Steps Towards an Adequate Inferential Interpretation of p-Values in Econometrics

This blog post is based on the paper of the same name by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker in the Journal of Economics and Statistics. It is motivated by prevalent inferential errors and the intensifying debate on p-values – as…


“Retire Statistical Significance”: A Call to Join the Discussion

[From the blog “‘Retire Statistical Significance’: The discussion” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science] “So, the paper by Valentin Amrhein, Sander Greenland, and Blake McShane that we discussed a few weeks ago has just appeared online as…


GOODMAN: When You’re Selecting Significant Findings, You’re Selecting Inflated Estimates

Replication researchers cite inflated effect sizes as a major cause of replication failure. It turns out this is an inevitable consequence of significance testing. The reason is simple. The p-value you get from a study depends on the observed effect…
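
A minimal simulation makes the mechanism concrete (this sketch is mine, not code from Goodman’s post; the true effect, sample size, and study count are arbitrary illustrative assumptions): run many identical two-group studies, keep only those reaching p < 0.05, and compare the average “significant” estimate with the truth.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2   # assumed true mean difference (SD = 1 in both groups)
n_per_group = 50    # deliberately underpowered for an effect this small
n_studies = 20_000  # number of simulated studies

estimates, pvalues = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)  # two-sample t-test
    estimates.append(treated.mean() - control.mean())
    pvalues.append(p)

estimates = np.array(estimates)
significant = np.array(pvalues) < 0.05

print(f"true effect:                     {true_effect:.2f}")
print(f"mean estimate, all studies:      {estimates.mean():.2f}")
print(f"mean estimate, significant only: {estimates[significant].mean():.2f}")
```

With these settings, the studies that cross p < 0.05 report an average effect roughly twice the true one: conditioning on significance selects exactly those runs in which sampling error happened to inflate the observed effect.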


The Problem Isn’t Bad Incentives, It’s the Ritual Behind Them

[From the article “Statistical Rituals: The Replication Delusion and How We Got There” by Gerd Gigerenzer, published in Advances in Methods and Practices in Psychological Science] “The ‘replication crisis’ has been attributed to misguided external incentives gamed by researchers (the…


Choosing the Right α: What You Need to Know

[From the article “The quest for an optimal alpha” by Jeff Miller and Rolf Ulrich, published in PLOS One] “The purpose of the present article is to show exactly what is necessary to provide a principled justification for a particular α…
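
The core idea can be sketched as a small decision analysis (my illustrative sketch, not the authors’ code; the prior probability, error costs, effect size, and sample size below are all made-up assumptions): choose the α that minimizes the prior-weighted expected cost of false positives and false negatives for a given design.

```python
import numpy as np
from scipy.stats import norm

# Illustrative ingredients -- every value here is an assumption:
prior_h1 = 0.5        # prior probability that a real effect exists
cost_false_pos = 1.0  # relative cost of a Type I error
cost_false_neg = 1.0  # relative cost of a Type II error
effect = 0.3          # standardized effect size assumed under H1
n = 100               # sample size for a one-sided, one-sample z-test

def power(alpha):
    """Power of the one-sided z-test at significance level alpha."""
    z_crit = norm.ppf(1 - alpha)
    return 1 - norm.cdf(z_crit - effect * np.sqrt(n))

def expected_cost(alpha):
    """Prior-weighted expected cost of the two error types."""
    return ((1 - prior_h1) * alpha * cost_false_pos
            + prior_h1 * (1 - power(alpha)) * cost_false_neg)

# Grid search over candidate significance levels
alphas = np.linspace(1e-4, 0.25, 2500)
best = alphas[np.argmin(expected_cost(alphas))]
print(f"cost-minimizing alpha: {best:.3f} (power there: {power(best):.2f})")
```

Under these toy numbers the optimum lands near α = 0.067, but shifting the prior, the error costs, the effect size, or n moves it: the point of such an analysis is that a principled α has to be argued from those quantities rather than fixed at 0.05 by habit.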


MILLER: The Statistical Fundamentals of (Non-)Replicability

“Replicability of findings is at the heart of any empirical science” (Asendorpf, Conner, De Fruyt, et al., 2013, p. 108). The idea that scientific results should be reliably demonstrable under controlled circumstances has a special status in science. In contrast…
