[From the paper “The practical alternative to the p-value is the correctly used p-value” by Daniël Lakens, posted at PsyArXiv Preprints] “I do not think it is useful to tell researchers what they want to know. Instead, we should teach…

[From the article “Stats Experts Plead: Just Say No to P-Hacking” by Dalmeet Singh Chawla, published in Undark] “For decades, researchers have used a statistical measure called the p-value — a widely debated statistic that even scientists find difficult to define — that is…

[From the Twitter thread started by @JessieSunPsych] Jessie Sun (@JessieSunPsych) relayed the following question that was raised at a recent psychology conference: “At what point can a theory be falsified (e.g., if the effect size is d = .02)? We often…

[From the blog “Justify Your Alpha by Decreasing Alpha Levels as a Function of the Sample Size” by Daniël Lakens, posted at The 20% Statistician] “Testing whether observed data should surprise us, under the assumption that some model of the data is…
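The idea of shrinking the alpha level as samples grow can be illustrated with a minimal sketch. The rule below follows I. J. Good's standardization idea (discussed in this literature): anchor alpha at a reference sample size and scale it down by the square root of n. The function name, the reference values, and the exact scaling rule are illustrative assumptions, not necessarily the rule the blog post adopts.

```python
from math import sqrt

def alpha_for_n(n, alpha_ref=0.05, n_ref=100):
    """Illustrative rule (after Good's standardization): shrink the
    alpha level in proportion to 1/sqrt(n), anchored so that a study
    with n_ref observations uses alpha_ref."""
    return alpha_ref * sqrt(n_ref / n)

# A study four times larger than the reference uses half the alpha level:
# alpha_for_n(400) == 0.025 when alpha_ref=0.05 and n_ref=100.
```

The point of such a rule is that with very large samples even trivially small deviations from the null become "significant" at a fixed alpha, so the threshold for surprise is tightened as n grows.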

[From the recent working paper, “The Costs and Benefits of Replication Studies” by Coles, Tiokhin, Scheel, Isager, and Lakens, posted at psyarxiv.com/c8akj] “The debate about whether replication studies should become mainstream is essentially driven by disagreements about their costs and benefits,…

This past week, the International Methods Colloquium hosted a conference call on a recent proposal to reduce the threshold of statistical significance to 0.005. Participants included Daniel Benjamin, Daniel Lakens, Blake McShane, Jennifer Tackett, E.J. Wagenmakers, and Justin Esarey, all…

Each year, the Berkeley Initiative for Transparency in the Social Sciences (BITSS) awards prizes to researchers who have made substantial contributions to improving transparency in research practices. The prizes are named after Ed Leamer (economics) and Robert Rosenthal (psychology) through…

Observed power (or post-hoc power) is the statistical power of the test you have performed, based on the effect size estimate from your data. Statistical power is the probability of finding a statistical difference from 0 in your test (aka…
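The definition above can be made concrete with a minimal sketch: plug the observed effect size into a standard power formula as if it were the true effect. The code below does this for a two-sided two-sample z-test (a normal approximation to the t-test, used here for simplicity); the function name and defaults are illustrative.

```python
from statistics import NormalDist
from math import sqrt

def observed_power(d, n_per_group, alpha=0.05):
    """Approximate post-hoc power of a two-sided two-sample z-test,
    treating the observed Cohen's d as if it were the true effect size.
    Normal approximation, for illustration only."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # two-sided critical value
    ncp = d * sqrt(n_per_group / 2)            # noncentrality under observed d
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)
```

One property worth noting: because the observed effect size is computed from the same data as the p-value, observed power is a direct transformation of the p-value. An observed d that lands exactly on the significance threshold (p = .05) yields observed power of roughly 50%, which is why reporting it adds no information beyond the p-value itself.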

When we perform a study, we would like to conclude there is an effect, when there is an effect. But it is just as important to be able to conclude there is no effect, when there is no effect. So…
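Concluding "no effect" is what equivalence testing is for, and the two one-sided tests (TOST) procedure is the standard approach in this literature: reject effects smaller than a lower bound and larger than an upper bound, so that what remains is a range of effects considered too small to matter. The sketch below implements TOST for a single mean with a normal approximation; the function name, signature, and bounds are illustrative assumptions, not a specific implementation from the source.

```python
from statistics import NormalDist
from math import sqrt

def tost_z(mean, sd, n, low, high, alpha=0.05):
    """Two one-sided z-tests (TOST) for equivalence of a single mean
    against the bounds [low, high]. Normal approximation, for
    illustration. Returns (p_value, equivalent_at_alpha)."""
    se = sd / sqrt(n)
    nd = NormalDist()
    p_lower = 1 - nd.cdf((mean - low) / se)    # H0: true mean <= low
    p_upper = nd.cdf((mean - high) / se)       # H0: true mean >= high
    p = max(p_lower, p_upper)                  # both nulls must be rejected
    return p, p < alpha
```

With a mean near zero and a large enough sample, both one-sided nulls are rejected and the effect is declared statistically equivalent to zero; with a small sample the same mean is inconclusive, which is the asymmetry the snippet above is pointing at: absence of evidence is not evidence of absence unless the study was sensitive enough.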

In a recent article in PLOS One, Don van Ravenzwaaij and John Ioannidis argue that Bayes factors should be preferred to significance testing (p-values) when assessing the effectiveness of new drugs. At his blogsite The 20% Statistician, Daniel Lakens argues…
