[Excerpts taken from the article “P-value Thresholds: Forfeit at Your Peril” by Deborah Mayo, forthcoming in the European Journal of Clinical Investigation] “A key recognition among those who write on the statistical crisis in science is that the pressure to…
[* EIR = Econometrics in Replications, a feature of TRN that highlights useful econometrics procedures for re-analysing existing research. The material for this blog is motivated by a recent blog at TRN, “The problem isn’t just the p-value, it’s also…
In frequentist statistical inference, the p-value is used as a measure of how incompatible the data are with the null hypothesis. When the null hypothesis is fixed at a point, the test statistic reports a distance from the sample statistic…
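As a minimal sketch of that idea, the snippet below computes a two-sided p-value for a point null with a one-sample z-test; the data, the null value mu0, and the assumed known standard deviation are illustrative and not taken from the post:

```python
import numpy as np
from scipy import stats

# Hypothetical sample; the null hypothesis fixes the population mean at mu0.
rng = np.random.default_rng(1)
sample = rng.normal(loc=0.3, scale=1.0, size=50)
mu0 = 0.0      # point null hypothesis H0: mu = mu0
sigma = 1.0    # standard deviation assumed known, so a z-test applies

# The test statistic is the standardized distance between the sample mean
# and the value fixed by the null hypothesis.
z = (sample.mean() - mu0) / (sigma / np.sqrt(len(sample)))

# Two-sided p-value: the probability, under H0, of a distance at least this large.
p_value = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```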
[From the preprint “Abandoning statistical significance is both sensible and practical” by Valentin Amrhein, Andrew Gelman, Sander Greenland, and Blakely McShane, available at PeerJ Preprints] “Dr Ioannidis writes against our proposals to abandon statistical significance…” “…we disagree that a statistical…
[From the paper “The practical alternative to the p-value is the correctly used p-value” by Daniël Lakens, posted at PsyArXiv Preprints] “I do not think it is useful to tell researchers what they want to know. Instead, we should teach…
[From the article “Stats Experts Plead: Just Say No to P-Hacking” by Dalmeet Singh Chawla, published in Undark] “For decades, researchers have used a statistical measure called the p-value — a widely-debated statistic that even scientists find difficult to define — that is…
[From the introductory editorial “Moving to a World Beyond ‘p < 0.05’” by Ronald Wasserstein, Allen Schirm and Nicole Lazar, published in The American Statistician] “Some of you exploring this special issue of The American Statistician might be wondering if…
This blog is based on the paper of the same name by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker in the Journal of Economics and Statistics. It is motivated by prevalent inferential errors and the intensifying debate on p-values – as…
[From the blog “‘Retire Statistical Significance’: The discussion” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science] “So, the paper by Valentin Amrhein, Sander Greenland, and Blake McShane that we discussed a few weeks ago has just appeared online as…
Replication researchers cite inflated effect sizes as a major cause of replication failure. It turns out this is an inevitable consequence of significance testing. The reason is simple. The p-value you get from a study depends on the observed effect…
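The selection effect behind this claim can be illustrated with a small simulation sketch; the true effect size, sample size, and significance threshold below are assumptions chosen for illustration, not figures from the post:

```python
import numpy as np
from scipy import stats

# Assumed setup: true effect of 0.2 SD, small samples, alpha = 0.05.
rng = np.random.default_rng(0)
true_effect, n, alpha, n_studies = 0.2, 30, 0.05, 20_000

estimates, significant = [], []
for _ in range(n_studies):
    x = rng.normal(loc=true_effect, scale=1.0, size=n)
    t_stat, p = stats.ttest_1samp(x, 0.0)
    estimates.append(x.mean())
    significant.append(p < alpha)

estimates = np.array(estimates)
significant = np.array(significant)

print(f"mean estimate, all studies:      {estimates.mean():.3f}")
print(f"mean estimate, significant only: {estimates[significant].mean():.3f}")
# Averaged over all simulated studies the estimate is close to the true 0.2,
# but the subset that clears the significance threshold is markedly larger:
# only unusually big observed effects reach p < alpha in underpowered studies.
```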