[* EIR = Econometrics in Replications, a feature of TRN that highlights useful econometrics procedures for re-analysing existing research. The material for this blog is motivated by a recent blog at TRN, “The problem isn’t just the p-value, it’s also…

In Frequentist statistical inference, the p-value is used as a measure of how incompatible the data are with the null hypothesis. When the null hypothesis is fixed at a point, the test statistic reports a distance from the sample statistic…
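The point-null case described above can be sketched with a minimal one-sample z-test (a hypothetical function for illustration; the known-σ setting and the numbers are assumptions, not from the post):

```python
import math

def z_test_p(x_bar, mu0, sigma, n):
    """Two-sided p-value for a one-sample z-test of the point null H0: mu = mu0.

    The test statistic z measures the distance between the sample mean and
    the hypothesized mean, in standard-error units.
    """
    z = (x_bar - mu0) / (sigma / math.sqrt(n))
    # P(|Z| >= |z|) under H0; for a standard normal this equals erfc(|z|/sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# a sample mean 3 standard errors from the null is highly "incompatible"
print(z_test_p(x_bar=0.3, mu0=0.0, sigma=1.0, n=100))  # ~0.0027
```

The smaller the p-value, the further the sample statistic sits from the point null, in standard-error units.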

[From the preprint “Abandoning statistical significance is both sensible and practical” by Valentin Amrhein, Andrew Gelman, Sander Greenland, and Blakely McShane, available at PeerJ Preprints] “Dr Ioannidis writes against our proposals to abandon statistical significance…” “…we disagree that a statistical…

[From the paper “The practical alternative to the p-value is the correctly used p-value” by Daniël Lakens, posted at PsyArXiv Preprints] “I do not think it is useful to tell researchers what they want to know. Instead, we should teach…

[From the article “Stats Experts Plead: Just Say No to P-Hacking” by Dalmeet Singh Chawla, published in Undark] “For decades, researchers have used a statistical measure called the p-value — a widely-debated statistic that even scientists find difficult to define — that is…

[From the introductory editorial “Moving to a World Beyond ‘p < 0.05’” by Ronald Wasserstein, Allen Schirm and Nicole Lazar, published in The American Statistician] “Some of you exploring this special issue of The American Statistician might be wondering if…

This blog is based on the paper of the same name by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker in the Journal of Economics and Statistics. It is motivated by prevalent inferential errors and the intensifying debate on p-values – as…

[From the blog “‘Retire Statistical Significance’: The discussion” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science] “So, the paper by Valentin Amrhein, Sander Greenland, and Blake McShane that we discussed a few weeks ago has just appeared online as…

Replication researchers cite inflated effect sizes as a major cause of replication failure. It turns out this is an inevitable consequence of significance testing. The reason is simple. The p-value you get from a study depends on the observed effect…
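The mechanism can be seen in a minimal simulation (numpy; the true effect size, sample size, and study count are illustrative assumptions): when studies are underpowered, only those that happen to draw an unusually large observed effect clear the significance threshold, so the published (significant) estimates are inflated relative to the true effect.

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.2      # assumed small true mean difference (in SD units)
n = 30                 # per-group sample size -> badly underpowered design
n_studies = 20_000     # many hypothetical two-group studies

# simulate all studies at once: treatment and control groups
treat = rng.normal(true_effect, 1.0, size=(n_studies, n))
ctrl = rng.normal(0.0, 1.0, size=(n_studies, n))

diff = treat.mean(axis=1) - ctrl.mean(axis=1)                 # observed effects
se = np.sqrt(treat.var(axis=1, ddof=1) / n
             + ctrl.var(axis=1, ddof=1) / n)                  # standard errors
sig = np.abs(diff / se) > 1.96                                # 5%-level filter

print(f"true effect:                     {true_effect:.2f}")
print(f"mean estimate, all studies:      {diff.mean():.2f}")
print(f"mean estimate, significant only: {diff[sig].mean():.2f}")
```

Across all simulated studies the average estimate is unbiased, but conditioning on p < 0.05 selects the overestimates, so the significant-only average lands well above the true effect.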

[From the article, “Statistical Rituals: The Replication Delusion and How We Got There” by Gerd Gigerenzer, published in Advances in Methods and Practices in Psychological Science] “The ‘replication crisis’ has been attributed to misguided external incentives gamed by researchers (the…
