Archives


“Retire Statistical Significance”: A Call to Join the Discussion

[From the blog “‘Retire Statistical Significance’: The discussion” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science] “So, the paper by Valentin Amrhein, Sander Greenland, and Blake McShane that we discussed a few weeks ago has just appeared online as…

Read More

Replications Can Lessen the Pressure To Get It Right the First Time — And That Can Be a Good Thing

[From the blog “(back to basics:) How is statistics relevant to scientific discovery?” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science] “If we are discouraged from criticizing published work—or if our criticism elicits pushback and attacks…

Read More

The (Honest) Truth About Dishonesty: A Personal Example From the Authors?

[From the blog entitled “Oh, I hate it when work is criticized (or, in this case, fails in attempted replications) and then the original researchers don’t even consider the possibility that maybe in their original work they were inadvertently just…

Read More

IN THE NEWS: New York Times (November 19, 2018)

[From the article, “Essay: The Experiments Are Fascinating. But Nobody Can Repeat Them” by Andrew Gelman, published in The New York Times] “At this point, it is hardly a surprise to learn that even top scientific journals publish a lot…

Read More

M Is For Pizza

[From the blog “‘Tweeking’: The big problem is not where you think it is” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science] “In her recent article about pizzagate, Stephanie Lee included this hilarious email from Brian Wansink, the…

Read More

VASISHTH: The Statistical Significance Filter Leads To Overoptimistic Expectations of Replicability

[This blog draws on the article “The statistical significance filter leads to overoptimistic expectations of replicability”, authored by Shravan Vasishth, Daniela Mertzen, Lena A. Jäger, and Andrew Gelman, published in the Journal of Memory and Language, 103, 151-175, 2018. An open…

Read More

Significant Effects From Low-Powered Studies Will Be Overestimates

[From the article, “The statistical significance filter leads to overoptimistic expectations of replicability” by Shravan Vasishth, Daniela Mertzen, Lena Jäger, and Andrew Gelman, published in the Journal of Memory and Language] Highlights: “When low-powered studies show significant effects, these will…

Read More
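The overestimation claim in the excerpt above can be illustrated with a minimal simulation of the statistical significance filter (a sketch with illustrative numbers — `true_effect`, `se`, and the simulation size are assumptions, not values from the paper):

```python
import random
from statistics import NormalDist, mean

# Minimal sketch of the statistical-significance filter.
# The numbers below are illustrative assumptions, not from the paper.
random.seed(42)
z_crit = NormalDist().inv_cdf(0.975)   # critical value for a two-sided test at alpha = 0.05

true_effect = 0.2                      # a small true effect
se = 1.0                               # standard error of each study's estimate
estimates = [random.gauss(true_effect, se) for _ in range(50_000)]

# Keep only the "publishable" estimates: those reaching significance.
significant = [e for e in estimates if abs(e) / se > z_crit]

print(f"true effect: {true_effect}")
print(f"mean |estimate| among significant results: {mean(abs(e) for e in significant):.2f}")
```

Because this design is badly underpowered, the surviving estimates are several times larger in magnitude than the true effect — exactly the exaggeration the paper describes.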

80% Power? Really?

[From the blog “The ‘80% power’ lie” posted by Andrew Gelman in December 2017 at Statistical Modeling, Causal Inference, and Social Science] “Suppose we really were running studies with 80% power. In that case, the expected z-score is 2.8, and…


Read More
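The “expected z-score is 2.8” figure in the excerpt above follows from a standard calculation: for a two-sided test at alpha = 0.05 with 80% power, the mean of the z-statistic under the alternative must sit at the critical value plus the 80th-percentile normal quantile. A quick check using only the standard library:

```python
from statistics import NormalDist

nd = NormalDist()
z_crit = nd.inv_cdf(0.975)   # 1.96: two-sided critical value at alpha = 0.05
# 80% power means P(Z > z_crit) = 0.80 when Z ~ N(mu, 1),
# so mu = z_crit + Phi^{-1}(0.80).
mu = z_crit + nd.inv_cdf(0.80)
print(f"expected z-score under 80% power: {mu:.1f}")   # 2.8
```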

DON’T: Aim for significance. DO: Aim for precision

[From the recent working paper, “The statistical significance filter leads to overoptimistic expectations of replicability” by Vasishth, Mertzen, Jäger, and Gelman posted at PsyArXiv Preprints] “…when power is low, using significance to decide whether to publish a result leads to a proliferation of exaggerated…

Read More

Nature Asks, “How To Fix Science?”

[From the article “Five ways to fix statistics” posted at nature.com] “As debate rumbles on about how much poor statistics is to blame for poor reproducibility, Nature asked influential statisticians to recommend one change to improve science.” Researchers…

Read More