Archives


Reproducibility and Meta-Analyses: Two Great Concepts That Apparently Don’t Mix

[Excerpts taken from the report “Examining the Reproducibility of Meta-Analyses in Psychology: A Preliminary Report” by Daniel Lakens et al., posted at MetaArXiv Preprints] “…given the broad array of problems that make it difficult to evaluate the evidential value…


To Your List of Biases in Meta-Analyses, Add This One: Accumulation Bias

[From the preprint “Accumulation bias in meta-analysis: the need to consider time in error control” by Judith ter Schure and Peter Grünwald, posted at arXiv.org] “Studies accumulate over time and meta-analyses are mainly retrospective. These two characteristics introduce dependencies between…


Why So Many Insignificant Results in a Meta-Analysis?

[From the blog “Where Do Non-Significant Results in Meta-Analysis Come From?” by Ulrich Schimmack, posted at Replicability-Index] “It is well known that focal hypothesis tests in psychology journals nearly always reject the null-hypothesis … However, meta-analyses often contain a fairly…


When Once Is Not Enough

[From the blog “Be careful what you wish for: cautionary tales on using single studies to inform policymaking” by Emmanuel Jimenez, posted at http://www.3ieimpact.org] “For a development evaluator, the holy grail is to have evidence from one’s study be taken…


Making Meta-Analyses More Replicable

[From the article “Automatic extraction of quantitative data from ClinicalTrials.gov to conduct meta-analyses” by Richeek Pradhan et al., published in the Journal of Clinical Epidemiology] “Systematic reviews and meta-analyses are labor-intensive and time-consuming. Automated extraction of quantitative data from primary…


What Can Meta-Analyses Tell Us About Reproducibility?

[From the abstract of the article “What Meta-Analyses Reveal About the Replicability of Psychological Research” by T.D. Stanley, Evan Carter, and Hristos Doucouliagos, published in Psychological Bulletin] “Can recent failures to replicate psychological research be explained by typical magnitudes of statistical power, bias…


Are Meta-Analyses Overrated?

[From the article “Meta-analyses were supposed to end scientific debates. Often, they only cause more controversy” by Jop de Vrieze, published at http://www.sciencemag.org] “Meta-analyses were thought to be debate enders, but now we know they rarely are,” Ferguson says. “They should…


IN THE NEWS: Wired (November 14, 2017)

[From the article “The Dismal Science Remains Dismal, Say Scientists” by Adam Rogers at wired.com] “When Hristos Doucouliagos was a young economist in the mid-1990s, he got interested in all the ways economics was wrong about itself—bias, underpowered research, statistical shenanigans. Nobody wanted…


FYI: ScienceOpen Has a Collection of Papers on How to Fix the Replicability Crisis

ScienceOpen has a collection entitled “Remedies to the Reproducibility Crisis”. The collection is introduced as follows: “Psychology, Medicine, Neuroscience and many other research fields, are facing a serious reproducibility crisis, that is, most of the findings published in peer-review journals, independently…


Concurrent Replication

[From Rolf Zwaan’s blog “Zeitgeist”] “A form of replication that has not yet received much attention is what I will call concurrent replication. The basic idea is this. A research group formulates a hypothesis that they want to test. At…
