Archives


Replication Crisis? What Replication Crisis?

[Excerpts taken from the article “No Crisis but No Time for Complacency” by Wendy Wood and Timothy Wilson, published in Observer Magazine] “The National Academies of Sciences, Engineering, and Medicine recently published a report titled Reproducibility and Replicability in Science….

Read More

Could Bayes Have Saved Us From the Replication Crisis?

[Excerpts are taken from the article “The Flawed Reasoning Behind the Replication Crisis” by Aubrey Clayton, published at nautil.us] “Suppose an otherwise healthy woman in her forties notices a suspicious lump in her breast and goes in for a mammogram….

Read More
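The excerpt above sets up the familiar base-rate problem behind Clayton's argument. A minimal Bayes' rule calculation, using illustrative numbers rather than figures from the article, looks like this:

```python
# Illustrative Bayes' rule calculation for the mammogram setup described in
# the excerpt. The numbers below are assumptions chosen for illustration,
# not figures taken from Clayton's article.

prevalence = 0.01        # assumed base rate of breast cancer in this age group
sensitivity = 0.90       # assumed P(positive test | cancer)
false_positive = 0.09    # assumed P(positive test | no cancer)

# P(positive) by the law of total probability
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Posterior probability of cancer given a positive mammogram
posterior = sensitivity * prevalence / p_positive

print(f"P(cancer | positive) = {posterior:.2f}")  # roughly 0.09 with these inputs
```

With these assumed inputs, roughly nine positive mammograms in ten come from women without cancer, which is the kind of inversion error the article connects to misread p-values.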

Another Journal Adopts the “Pottery Barn Rule”

[From the editorial “SA Editorial About Next Phase of More Open Science” by Michael Seto, published in the journal Sexual Abuse] “It is now widely recognized that there are publication biases toward novel and exciting findings, which has contributed to a replication…

Read More

Disagreeing With Disagreeing About Abandoning Statistical Significance

[From the preprint “Abandoning statistical significance is both sensible and practical” by Valentin Amrhein, Andrew Gelman, Sander Greenland, and Blakely McShane, available at PeerJ Preprints] “Dr Ioannidis writes against our proposals to abandon statistical significance…” “…we disagree that a statistical…

Read More

It’s Not A Problem, It’s an Opportunity

[From the blog “The replication crisis is good for science” by Eric Loken, published at The Conversation] “Science is in the midst of a crisis: A surprising fraction of published studies fail to replicate when the procedures are repeated.” “Is…

Read More

Using Z-Curve to Estimate Mean Power for Studies Published in Psychology Journals

[From the blog “Estimating the Replicability of Psychological Science” by Ulrich Schimmack, posted at Replicability-Index] “Over the past years, I have been working on an … approach to estimate the replicability of psychological science. This approach starts with the simple…

Read More
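The excerpt breaks off before describing the method, so the sketch below shows only the transformation such approaches typically start from: converting reported two-sided p-values into absolute z-scores and computing each study's "observed power". It is not Schimmack's z-curve procedure (which fits a mixture model to the z-score distribution and corrects for selection), and the p-values are hypothetical.

```python
# Not the z-curve implementation; just the first step such approaches build on:
# convert reported two-sided p-values into z-scores and ask what power a study
# would have if its observed z-score equaled the true noncentrality parameter.

from scipy import stats

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)          # about 1.96 for two-sided tests

reported_p = [0.001, 0.02, 0.049, 0.004, 0.03]  # hypothetical published p-values
z_scores = [stats.norm.ppf(1 - p / 2) for p in reported_p]

# "Observed power": P(|Z| > z_crit) when the true mean of Z equals the observed z
observed_power = [1 - stats.norm.cdf(z_crit - z) + stats.norm.cdf(-z_crit - z)
                  for z in z_scores]

print([round(p, 2) for p in observed_power])
```

A result just under p = .05 maps to an observed power of about 50%, which is why collections of barely significant findings tend to imply low expected replication rates.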

Picking Significant Estimates to Replicate Can Induce “Replication Crisis”-Like Results

[From the paper “Statistical Methods for Replicability Assessment” by Kenneth Hung and William Fithian, posted at arXiv.org. Note that H&F’s paper is primarily concerned with presenting an empirical procedure for addressing questions about replicability after correcting for selection bias. This…

Read More
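The selection effect named in the title is easy to demonstrate with a few lines of simulation. The sketch below is not H&F's procedure; it uses assumed numbers (a true effect giving roughly 50% power) to show that conditioning on significance inflates the original estimates and makes exact replications look like failures.

```python
# A small simulation of the selection effect the title points to: when original
# studies have modest power and only the significant ones are chosen for
# replication, the selected estimates are inflated (the "winner's curse"), so
# exact replications succeed less often than the original results suggest.
# All numbers here are illustrative assumptions, not values from Hung & Fithian.

import numpy as np

rng = np.random.default_rng(1)
n_studies = 100_000
true_effect = 2.0          # true mean of the z-statistic (roughly 50% power)
z_crit = 1.96

original_z = rng.normal(true_effect, 1.0, n_studies)
selected = original_z[original_z > z_crit]     # replicate only "significant" results

print(f"mean z among selected originals: {selected.mean():.2f}")  # inflated above 2.0

# Exact replications of the selected studies: same design, same true effect
replication_z = rng.normal(true_effect, 1.0, selected.size)
print(f"replication success rate: {(replication_z > z_crit).mean():.2f}")
```

Taking the inflated original z-scores at face value would predict a much higher replication rate than the roughly 50% the simulation actually produces.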

When Trying to Explain p-Values, Maybe Try This?

[From the blog “P-values 101: An attempt at an intuitive but mathematically correct explanation” by Xenia Schmalz, posted at Xenia Schmalz’s blog] “…what exactly are p-values, what is p-hacking, and what does all of that have to do with the replication crisis?…

Read More
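For readers who want the definition made concrete, a standard illustration (not code from Schmalz's post) is to simulate experiments in which the null hypothesis is exactly true and check that the p-values come out uniform, so p < .05 occurs about 5% of the time:

```python
# Generic illustration of what a p-value is calibrated to do: when the null
# hypothesis is true, p-values are uniformly distributed, so p < .05 occurs
# in about 5% of experiments.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group = 10_000, 30

p_values = []
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)   # both groups drawn from the same distribution,
    b = rng.normal(0, 1, n_per_group)   # i.e. the null hypothesis is true
    p_values.append(stats.ttest_ind(a, b).pvalue)

print(f"share of p < .05 under the null: {np.mean(np.array(p_values) < 0.05):.3f}")
```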

The Problem Isn’t Bad Incentives, It’s the Ritual Behind Them

[From the article “Statistical Rituals: The Replication Delusion and How We Got There” by Gerd Gigerenzer, published in Advances in Methods and Practices in Psychological Science] “The ‘replication crisis’ has been attributed to misguided external incentives gamed by researchers (the…

Read More

MILLER: The Statistical Fundamentals of (Non-)Replicability

“Replicability of findings is at the heart of any empirical science” (Asendorpf, Conner, De Fruyt, et al., 2013, p. 108). The idea that scientific results should be reliably demonstrable under controlled circumstances has a special status in science. In contrast…

Read More