# Archives

## Doing More with Confidence Intervals

[Excerpts taken from the article “In Praise of Confidence Intervals” by David Romer, posted at the American Economic Association’s 2020 annual conference website] “…most modern empirical papers concentrate on two characteristics of their findings: whether the point estimates are statistically…

## GOODMAN: Ladies and Gentlemen, I Introduce to You, “Plausibility Limits”

Confidence intervals get top billing as the alternative to significance. But beware: confidence intervals rely on the same math as significance testing and share the same shortcomings. Confidence intervals don’t tell you where the true effect lies, even probabilistically. What they do…
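Goodman’s point that confidence intervals rest on the same math as significance tests can be made concrete: a 95% interval for a mean excludes zero exactly when the two-sided test rejects at the 5% level. A minimal sketch of that duality (not from the post; the helper name and the large-sample normal critical value are illustrative choices):

```python
import math
import random

def ci_excludes_and_p(data, mu0=0.0):
    """Return (does the 95% CI exclude mu0?, two-sided p-value).

    Uses the large-sample normal critical value for both the interval
    and the test, which is what makes the two criteria coincide exactly.
    """
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    se = math.sqrt(var / n)
    z = 1.959963984540054  # 97.5th percentile of the standard normal
    lo, hi = mean - z * se, mean + z * se
    t = (mean - mu0) / se
    # two-sided p-value from the normal CDF, via erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return (mu0 < lo or mu0 > hi), p

random.seed(1)
for _ in range(1000):
    shift = random.choice([0.0, 0.5])  # sometimes a true effect, sometimes none
    data = [random.gauss(shift, 1.0) for _ in range(100)]
    excludes, p = ci_excludes_and_p(data)
    assert excludes == (p < 0.05)  # the interval and the test always agree
```

Because the interval and the test are built from the same estimate, the same standard error, and the same critical value, neither can escape an assumption failure that breaks the other.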

## Down With Confidence Intervals. Up With Uncertainty Intervals? Compatibility Intervals?

[Excerpts taken from the article “Are confidence intervals better termed ‘uncertainty intervals’?” by Andrew Gelman and Sander Greenland, published in the BMJ.] Are confidence intervals better termed “uncertainty intervals?” Yes—Andrew Gelman “Confidence intervals can be a useful summary in model…

## GOODMAN: Your p-Values Are Too Small! And So Are Your Confidence Intervals!

An oft-overlooked detail in the significance debate is the challenge of calculating correct p-values and confidence intervals, the favored statistics of the two sides. Standard methods rely on assumptions about how the data were generated and can be way off…
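The claim that standard methods “can be way off” when their data-generation assumptions fail is easy to demonstrate by simulation. The sketch below (illustrative, not from the article) applies the usual independence-assuming one-sample test to mildly autocorrelated data and tracks how far the false-positive rate drifts from the nominal 5%:

```python
import math
import random

def pvalue_assuming_iid(data):
    """Two-sided p-value for mean = 0, computed as if the data were i.i.d."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    t = mean / math.sqrt(var / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def false_positive_rate(rho, trials=2000, n=100):
    """Fraction of null AR(1) samples (true mean 0) rejected at p < 0.05."""
    random.seed(0)
    hits = 0
    for _ in range(trials):
        x, prev = [], 0.0
        for _ in range(n):
            prev = rho * prev + random.gauss(0, 1)  # AR(1) noise
            x.append(prev)
        if pvalue_assuming_iid(x) < 0.05:
            hits += 1
    return hits / trials

# With independent data (rho = 0) the rejection rate sits near the
# nominal 5%; with autocorrelation (rho = 0.5) it climbs well above it.
iid_rate = false_positive_rate(rho=0.0)
ar_rate = false_positive_rate(rho=0.5)
```

The same miscalibration hits confidence intervals, since they use the identical standard error: intervals built under the false independence assumption are too narrow and under-cover the true mean.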

## Do Not Abandon Statistical Significance

[From the article “The Importance of Predefined Rules and Prespecified Statistical Analyses: Do Not Abandon Significance” by John Ioannidis, published in JAMA] “A recent proposal to ban statistical significance gained campaign-level momentum in a commentary with 854 recruited signatories. The…

## How Many Ways Can You Misinterpret p-Values, Confidence Intervals, Statistical Tests, and Power? 25

[From the blog “Misinterpreting Tests, P-Values, Confidence Intervals & Power” by Dave Giles, posted at his blogsite, Econometrics Beat] “Today I was reading a great paper by Greenland et al. (2016) that deals with some common misconceptions and misinterpretations that arise not…

## What If There Isn’t a Single Effect Size? Implications for Power Calculations, Hypothesis Testing, Confidence Intervals and Replications

[From the working paper “The Unappreciated Heterogeneity of Effect Sizes: Implications for Power, Precision, Planning of Research, and Replication” by David Kenny and Charles Judd, posted at Open Science Framework (OSF)] “The goal of this article is to examine the implications…

## Intro to Open Science in 8 Easy Steps

[From the working paper, “8 Easy Steps to Open Science: An Annotated Reading List” by Sophia Crüwell et al., posted at PsyArXiv Preprints] “In this paper, we provide a comprehensive and concise introduction to open science practices and resources that can help…

## GELMAN: Some Natural Solutions to the p-Value Communication Problem—And Why They Won’t Work

[NOTE: This is a repost of a blog that Andrew Gelman wrote for the blogsite Statistical Modeling, Causal Inference, and Social Science]. Blake McShane and David Gal recently wrote two articles (“Blinding us to the obvious? The effect of statistical…

## ANDERSON & MAXWELL: There’s More than One Way to Conduct a Replication Study – Six, in Fact

NOTE: This entry is based on the article “There’s More Than One Way to Conduct a Replication Study: Beyond Statistical Significance” (Psychological Methods, 2016, Vol. 21, No. 1, 1-12). Following a large-scale replication project in economics (Chang & Li, 2015)…