Archives


REED: Why Lowering Alpha to 0.005 is Unlikely to Help

[This blog is based on the paper, “A Primer on the ‘Reproducibility Crisis’ and Ways to Fix It” by the author] A standard research scenario is the following: A researcher is interested in knowing whether there is a relationship between…

Read More

A Roundtable Podcast on the Merits of Lowering the Threshold for Statistical Significance to 0.005

This past week, the International Methods Colloquium hosted a conference call on a recent proposal to reduce the threshold of statistical significance to 0.005. Participants included Daniel Benjamin, Daniel Lakens, Blake McShane, Jennifer Tackett, E.J. Wagenmakers, and Justin Esarey, all…

Read More

Is Fixing the Replication Crisis As Simple as Lowering the p-Value?

[From the article “A statistical fix for the replication crisis in science” by Valen E. Johnson at https://theconversation.com/au.] “In a trial of a new drug to cure cancer, 44 percent of 50 patients achieved remission after treatment. Without the drug, only…

Read More

IN THE NEWS: Vox (July 31, 2017)

[From the article “What a nerdy debate about p-values shows about science — and how to fix it” by Brian Resnick at Vox.com]  “There’s a huge debate going on in social science right now. The question is simple, and strikes…

Read More

If At First You Don’t Succeed, Change Alpha

In a recent working paper, posted on PsyArXiv Preprints, Daniel Benjamin, James Berger, Magnus Johannesson, Brian Nosek, Eric-Jan Wagenmakers, and 67 other authors(!) argue for a stricter standard of statistical significance for studies claiming new discoveries. In their words: “…we…

Read More

If the American Statistical Association Warns About p-Values, and Nobody Hears It, Does It Make a Sound?

[From the article, “The ASA’s p-value statement, one year on”, which appeared in the online journal Significance, a publication of the American Statistical Association] “A little over a year ago now, in March 2016, the American Statistical Association (ASA) took…

Read More

GELMAN: Some Natural Solutions to the p-Value Communication Problem—And Why They Won’t Work

[NOTE: This is a repost of a blog that Andrew Gelman wrote for the blogsite Statistical Modeling, Causal Inference, and Social Science]. Blake McShane and David Gal recently wrote two articles (“Blinding us to the obvious? The effect of statistical…

Read More

Bayes Factors Versus p-Values

In a recent article in PLOS One, Don van Ravenzwaaij and John Ioannidis argue that Bayes factors should be preferred to significance testing (p-values) when assessing the effectiveness of new drugs. At his blogsite The 20% Statistician, Daniel Lakens argues…

Read More

ANDERSON & MAXWELL: There’s More than One Way to Conduct a Replication Study – Six, in Fact

NOTE: This entry is based on the article, “There’s More Than One Way to Conduct a Replication Study: Beyond Statistical Significance” (Psychological Methods, 2016, Vol. 21, No. 1, 1-12). Following a large-scale replication project in economics (Chang & Li, 2015)…

Read More

Everything is F**KED: The Syllabus

Come on, admit it. This is the course you really want to teach. The weekly topics in Professor Sanjay Srivastava’s PSY 607 include: Significance testing is f**ked — Causal inference from experiments is f**ked — Replicability is f**ked — Scientific publishing is f**ked…

Read More