[From the preprint, “Statistical Analyses for Studying Replication: Meta-Analytic Perspectives” by Larry Hedges and Jacob Schauer, forthcoming in Psychological Methods]
“Formal empirical assessments of replication have recently become more prominent in several areas of science, including psychology. These assessments have used different statistical approaches to determine if a finding has been replicated. The purpose of this article is to provide several alternative conceptual frameworks that lead to different statistical analyses to test hypotheses about replication.”
“…The differences among the methods described involve whether the burden of proof is placed on replication or nonreplication, whether replication is exact or allows for a small amount of “negligible heterogeneity,” and whether the studies observed are assumed to be fixed (constituting the entire body of relevant evidence) or are a sample from a universe of possibly relevant studies.”
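To make the "exact replication" framing concrete: one standard way to test whether k effect estimates are consistent with a single common effect is Cochran's Q statistic, a routine heterogeneity test in meta-analysis. The sketch below is illustrative only and is not the authors' specific procedure; the study estimates, variances, and function name `cochran_q` are all made up for the example.

```python
# Minimal sketch of a heterogeneity (Cochran's Q) test of "exact replication":
# under the null that all k studies estimate the same effect, Q follows a
# chi-square distribution with k-1 degrees of freedom.
# All numbers below are hypothetical, chosen only for illustration.

def cochran_q(estimates, variances):
    """Q = sum of w_i * (t_i - t_bar)^2, with weights w_i = 1/v_i and
    t_bar the inverse-variance weighted mean of the estimates."""
    weights = [1.0 / v for v in variances]
    t_bar = sum(w * t for w, t in zip(weights, estimates)) / sum(weights)
    return sum(w * (t - t_bar) ** 2 for w, t in zip(weights, estimates))

# Four hypothetical studies: standardized mean differences and their variances
estimates = [0.40, 0.15, 0.55, 0.10]
variances = [0.04, 0.05, 0.06, 0.05]

q = cochran_q(estimates, variances)
# 7.815 is the .95 quantile of chi-square with k-1 = 3 degrees of freedom
reject_exact_replication = q > 7.815
```

Note the burden of proof here: failing to reject this null does not establish replication, it only fails to find evidence against it. That asymmetry is exactly the issue the excerpt raises.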
“…All of them are valid statistical approaches … Because they use different conceptual definitions of “replication” and place the burden of proof differently, these tests vary in their sensitivity.”
“The example illustrates that the same data might reject replication (if exact replication is required), fail to confirm approximate replication (if the burden of proof is placed on nonreplication), or fail to reject approximate nonreplication (if the burden of proof is on replication).”
“… studies of replication cannot be unambiguous unless they are clear about how they frame their statistical analyses and clearly define the hypotheses they actually test. Researchers should also recognize that different frameworks for evaluating replication could lead to different conclusions from the same data.”
“…The power computations offered in this article illustrate that it is likely to be difficult to obtain strong empirical tests for replication.”
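The low-power point can be illustrated with a small simulation. The sketch below (not the authors' power computations) draws ensembles of k studies with genuine between-study heterogeneity and counts how often a chi-square heterogeneity test detects it; the function name `simulate_power` and all parameter values are hypothetical choices for illustration.

```python
# Monte Carlo sketch: power of a chi-square heterogeneity test to detect
# nonreplication in a small ensemble of studies. Parameters are illustrative:
# k studies with common sampling variance v and between-study variance tau2.
import random

def simulate_power(k=4, v=0.05, tau2=0.05, crit=7.815, reps=5000, seed=1):
    """Fraction of simulated ensembles in which Q exceeds the .95 chi-square
    critical value with k-1 = 3 degrees of freedom (crit = 7.815)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        # true study effects differ (heterogeneity), centered at 0.3
        truths = [rng.gauss(0.3, tau2 ** 0.5) for _ in range(k)]
        # observed estimates add sampling error with variance v
        ests = [rng.gauss(t, v ** 0.5) for t in truths]
        t_bar = sum(ests) / k  # equal weights, since v is common to all studies
        q = sum((t - t_bar) ** 2 / v for t in ests)
        rejections += q > crit
    return rejections / reps
```

Even though the between-study variance here equals the within-study variance (substantial heterogeneity), the test rejects in well under half of the simulated ensembles, consistent with the excerpt's warning that strong empirical tests of replication are hard to obtain with few studies.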
To read the article, click here (NOTE: the article is behind a paywall).