Archives


M Is For Pizza

[From the blog post “‘Tweeking’: The big problem is not where you think it is” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science] “In her recent article about pizzagate, Stephanie Lee included this hilarious email from Brian Wansink, the…

Read More

VASISHTH: The Statistical Significance Filter Leads To Overoptimistic Expectations of Replicability

[This blog draws on the article “The statistical significance filter leads to overoptimistic expectations of replicability”, authored by Shravan Vasishth, Daniela Mertzen, Lena A. Jäger, and Andrew Gelman, published in the Journal of Memory and Language, 103, 151-175, 2018. An open…

Read More

Significant Effects From Low-Powered Studies Will Be Overestimates

[From the article, “The statistical significance filter leads to overoptimistic expectations of replicability” by Shravan Vasishth, Daniela Mertzen, Lena Jäger, and Andrew Gelman, published in the Journal of Memory and Language] Highlights: “When low-powered studies show significant effects, these will…
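The highlighted claim can be seen in a minimal simulation (a sketch, not code from the article; the true effect of 0.2 and standard error of 0.2 are illustrative values chosen to give low power): when only “significant” estimates survive, their average is well above the true effect.

```python
import random
import statistics

# Illustrative simulation of the statistical significance filter:
# a modest true effect, a noisy (low-powered) study, and a "publication"
# rule that keeps only estimates with |z| > 1.96.
random.seed(1)
true_effect = 0.2   # assumed true effect (illustrative)
se = 0.2            # standard error of each study's estimate (illustrative)

significant = []
for _ in range(100_000):
    estimate = random.gauss(true_effect, se)
    if abs(estimate / se) > 1.96:   # the significance filter
        significant.append(estimate)

# The surviving estimates average far above the true effect of 0.2.
print(round(statistics.mean(significant), 2))
```

Conditioning on significance here roughly doubles the apparent effect size, which is the overestimation the article describes.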

Read More

80% Power? Really?

[From the blog “The “80% power” lie” posted by Andrew Gelman in December 2017 at Statistical Modeling, Causal Inference, and Social Science] “Suppose we really were running studies with 80% power. In that case, the expected z-score is 2.8, and…
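The 2.8 figure follows from the standard normal approximation: for a two-sided test at alpha = 0.05, achieving 80% power requires the true effect, measured in standard-error units, to equal z_{0.975} + z_{0.80} ≈ 1.96 + 0.84. A quick check (a sketch using Python's standard library; `statistics.NormalDist` requires Python 3.8+):

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sd 1

# For a two-sided test at alpha = 0.05 with power 0.80, the true effect
# mu (in z-units) must satisfy P(Z > 1.96 - mu) = 0.80, which gives
# mu = z_{0.975} + z_{0.80}.
mu = std_normal.inv_cdf(0.975) + std_normal.inv_cdf(0.80)
print(round(mu, 1))  # 2.8
```

So under genuine 80% power, the expected z-score of a study is about 2.8, far above the 1.96 significance threshold, which is the point of Gelman's post.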

Read More

DON’T: Aim for significance. DO: Aim for precision

[From the recent working paper, “The statistical significance filter leads to overoptimistic expectations of replicability” by Vasishth, Mertzen, Jäger, and Gelman posted at PsyArXiv Preprints] “…when power is low, using significance to decide whether to publish a result leads to a proliferation of exaggerated…

Read More

Nature Asks, “How To Fix Science?”

[From the article “Five ways to fix statistics” posted at nature.com] “As debate rumbles on about how and how much poor statistics is to blame for poor reproducibility, Nature asked influential statisticians to recommend one change to improve science.” Researchers…

Read More

Your Favorite Journal Does Not Publish “Only the Highest Quality Scientific Research”

In a recent opinion piece for Slate, the ubiquitous Andrew Gelman took the prestigious journal Proceedings of the National Academy of Sciences (PNAS) to task for claiming that it “only publishes the highest quality scientific research.” As a result, PNAS no longer…

Read More

Abandon Statistical Significance?

[From the abstract of a recent working paper by Blakeley McShane, David Gal, Andrew Gelman, Christian Robert, and Jennifer Tackett.] “In science publishing and many areas of research, the status quo is a lexicographic decision rule in which any result is first required to have…

Read More

A Pop Quiz on Significant Effects with Small Sample Sizes

QUICK: Does finding a significant effect when the sample size is small make it more likely that the effect is real and important? Or less? James Heckman, Nobel Prize winning economist, says more: “Also holding back progress are those who…

Read More

Andrew Gelman Asks, Does Criticizing Bad Research Do More Harm Than Good?

In a recent post at his blog, Statistical Modeling, Causal Inference, and Social Science, Andrew Gelman asks whether his recent criticisms on statistical grounds of a prominent researcher’s experiments on healthy eating are doing more harm than good. The researcher, Brian Wansink, is John…

Read More