An oft-overlooked detail in the significance debate is the challenge of calculating correct p-values and confidence intervals, the favored statistics of the two sides. Standard methods rely on assumptions about how the data were generated and can be way off…

[From the article “The Importance of Predefined Rules and Prespecified Statistical Analyses: Do Not Abandon Significance” by John Ioannidis, published in JAMA] “A recent proposal to ban statistical significance gained campaign-level momentum in a commentary with 854 recruited signatories. The…

[From the blog “Misinterpreting Tests, P-Values, Confidence Intervals & Power” by Dave Giles, posted at his blogsite, Econometrics Beat] “Today I was reading a great paper by Greenland et al. (2016) that deals with some common misconceptions and misinterpretations that arise not…

[From the working paper “The Unappreciated Heterogeneity of Effect Sizes: Implications for Power, Precision, Planning of Research, and Replication” by David Kenny and Charles Judd, posted at Open Science Framework (OSF)] “The goal of this article is to examine the implications…

[From the working paper “8 Easy Steps to Open Science: An Annotated Reading List” by Sophia Crüwell et al., posted at PsyArXiv Preprints] “In this paper, we provide a comprehensive and concise introduction to open science practices and resources that can help…

[NOTE: This is a repost of a blog that Andrew Gelman wrote for the blogsite Statistical Modeling, Causal Inference, and Social Science] Blake McShane and David Gal recently wrote two articles (“Blinding us to the obvious? The effect of statistical…

[NOTE: This entry is based on the article “There’s More Than One Way to Conduct a Replication Study: Beyond Statistical Significance” (Psychological Methods, 2016, Vol. 21, No. 1, 1-12)] Following a large-scale replication project in economics (Chang & Li, 2015)…