Archives


One Question + Many Researchers = Many Different Answers

[Excerpts taken from the article “Crowdsourcing hypothesis tests: Making transparent how design choices shape research results” by Justin Landy and many others, posted at the University of Essex preprint repository] “…we introduce a crowdsourced approach to hypothesis testing….

Read More

A Must-Read on the Statistical Analysis of Replications

[Excerpts taken from the article, “The Statistics of Replication” by Larry Hedges, published in the journal Methodology] Background “Some treatments of replication have defined replication in terms of the conclusions obtained by studies (e.g., did both studies conclude that the…

Read More

Reproducibility and Meta-Analyses: Two Great Concepts That Apparently Don’t Mix

[Excerpts taken from the report “Examining the Reproducibility of Meta-Analyses in Psychology: A Preliminary Report” by Daniel Lakens et al., posted at MetaArXiv Preprints] “…given the broad array of problems that make it difficult to evaluate the evidential value…

Read More

To Your List of Biases in Meta-Analyses, Add This One: Accumulation Bias

[From the preprint “Accumulation bias in meta-analysis: the need to consider time in error control” by Judith ter Schure and Peter Grünwald, posted at arXiv.org] “Studies accumulate over time and meta-analyses are mainly retrospective. These two characteristics introduce dependencies between…

Read More

Why So Many Insignificant Results in a Meta-analysis?

[From the blog “Where Do Non-Significant Results in Meta-Analysis Come From?” by Ulrich Schimmack, posted at Replicability-Index] “It is well known that focal hypothesis tests in psychology journals nearly always reject the null-hypothesis … However, meta-analyses often contain a fairly…

Read More

When Once Is Not Enough

[From the blog “Be careful what you wish for: cautionary tales on using single studies to inform policymaking” by Emmanuel Jimenez, posted at http://www.3ieimpact.org] “For a development evaluator, the holy grail is to have evidence from one’s study be taken…

Read More

Making Meta-Analyses More Replicable

[From the article “Automatic extraction of quantitative data from ClinicalTrials.gov to conduct meta-analyses” by Richeek Pradhan et al., published in the Journal of Clinical Epidemiology] “Systematic reviews and meta-analyses are labor-intensive and time-consuming. Automated extraction of quantitative data from primary…

Read More

What Can Meta-Analyses Tell Us About Reproducibility?

[From the abstract of the article “What Meta-Analyses Reveal About the Replicability of Psychological Research” by T.D. Stanley, Evan Carter, and Hristos Doucouliagos, published in Psychological Bulletin] “Can recent failures to replicate psychological research be explained by typical magnitudes of statistical power, bias…

Read More

Are Meta-Analyses Overrated?

[From the article, “Meta-analyses were supposed to end scientific debates. Often, they only cause more controversy” by Jop de Vrieze, published at http://www.sciencemag.org] “Meta-analyses were thought to be debate enders, but now we know they rarely are,” Ferguson says. “They should…

Read More

IN THE NEWS: Wired (November 14, 2017)

[From the article “The Dismal Science Remains Dismal, Say Scientists” by Adam Rogers at wired.com] “When Hristos Doucouliagos was a young economist in the mid-1990s, he got interested in all the ways economics was wrong about itself—bias, underpowered research, statistical shenanigans. Nobody wanted…

Read More