Archives


DUAN & REED: How Are Meta-Analyses Different Across Disciplines?

INTRODUCTION: Recently, one of us gave a workshop on how to conduct meta-analyses. The workshop was attended by participants from a number of different disciplines, including economics, finance, psychology, management, and health sciences. During the course of the workshop, it…

Read More

TER SCHURE: Accumulation Bias – How to handle it ALL-IN

An estimated 85% of global health research investment is wasted (Chalmers and Glasziou, 2009); roughly one hundred billion US dollars in 2009, when the estimate was made. The movement to reduce this research waste recommends that previous…

Read More

Making Meta-Analyses “Open”: A How-To

[Excerpts taken from the article “Conducting a Meta-Analysis in the Age of Open Science: Tools, Tips, and Practical Recommendations” by David Moreau and Beau Gamble, posted at PsyArXiv Preprints] “In this tutorial, we describe why open science is important in the…

Read More

One Question + Many Researchers = Many Different Answers

[Excerpts taken from the article “Crowdsourcing hypothesis tests: Making transparent how design choices shape research results” by Justin Landy and many others, posted at the preprint repository at the University of Essex] “…we introduce a crowdsourced approach to hypothesis testing….

Read More

A Must-Read on the Statistical Analysis of Replications

[Excerpts taken from the article “The Statistics of Replication” by Larry Hedges, published in the journal Methodology] Background: “Some treatments of replication have defined replication in terms of the conclusions obtained by studies (e.g., did both studies conclude that the…

Read More

Reproducibility and Meta-Analyses: Two Great Concepts That Apparently Don’t Mix

[Excerpts taken from the report “Examining the Reproducibility of Meta-Analyses in Psychology: A Preliminary Report” by Daniel Lakens et al., posted at MetaArXiv Preprints] “…given the broad array of problems that make it difficult to evaluate the evidential value…

Read More

To Your List of Biases in Meta-Analyses, Add This One: Accumulation Bias

[From the preprint “Accumulation bias in meta-analysis: the need to consider time in error control” by Judith ter Schure and Peter Grünwald, posted at arXiv.org] “Studies accumulate over time and meta-analyses are mainly retrospective. These two characteristics introduce dependencies between…

Read More

Why So Many Insignificant Results in a Meta-Analysis?

[From the blog “Where Do Non-Significant Results in Meta-Analysis Come From?” by Ulrich Schimmack, posted at Replicability-Index] “It is well known that focal hypothesis tests in psychology journals nearly always reject the null-hypothesis … However, meta-analyses often contain a fairly…

Read More

When Once Is Not Enough

[From the blog “Be careful what you wish for: cautionary tales on using single studies to inform policymaking” by Emmanuel Jimenez, posted at http://www.3ieimpact.org] “For a development evaluator, the holy grail is to have evidence from one’s study be taken…

Read More

Making Meta-Analyses More Replicable

[From the article “Automatic extraction of quantitative data from ClinicalTrials.gov to conduct meta-analyses” by Richeek Pradhan et al., published in the Journal of Clinical Epidemiology] “Systematic reviews and meta-analyses are labor-intensive and time-consuming. Automated extraction of quantitative data from primary…

Read More