[From the paper “The practical alternative to the p-value is the correctly used p-value” by Daniël Lakens, posted at PsyArXiv Preprints] “I do not think it is useful to tell researchers what they want to know. Instead, we should teach…

[From the article “Insights into Criteria for Statistical Significance from Signal Detection Analysis” by Jessica Witt, published in Meta-Psychology] “… the best criteria for statistical significance are ones that maximize discriminability between real and null effects, not just those that…

[From the preprint “When and Why to Replicate: As Easy as 1, 2, 3?” by Sarahanne Field, Rink Hoekstra, Laura Bringmann, and Don van Ravenzwaaij, posted at PsyArXiv Preprints] “…a flood of new replications of existing research have reached the…

[From the abstract of the article “Quantifying Support for the Null Hypothesis in Psychology: An Empirical Investigation” by Aczel et al., recently published in Advances in Methods and Practices in Psychological Science] “In the traditional statistical framework, nonsignificant results leave researchers…

[NOTE: This is a repost of a blog post that Andrew Gelman wrote for the blogsite Statistical Modeling, Causal Inference, and Social Science.] Blake McShane and David Gal recently wrote two articles (“Blinding us to the obvious? The effect of statistical…

In a recent article in PLOS One, Don van Ravenzwaaij and John Ioannidis argue that Bayes factors should be preferred to significance testing (p-values) when assessing the effectiveness of new drugs. At his blogsite The 20% Statistician, Daniel Lakens argues…