Did it Replicate? Or Didn’t It?
[From the blog Data Colada] As noted previously in TRN, the Social Sciences Replication Project is replicating 21 experimental studies published in Nature and Science between 2010 and 2015. To determine whether an original study replicates, the associated team of researchers is using the following rule: “Set n for the replication so that it would have 90% power to detect an effect that’s 75% as large as the original effect size estimate. If ‘it fails’ (p > .05), try again, powering for an effect 50% as big as the original.”
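The two stages of the “90-75-50” rule can be sketched with a standard normal-approximation power calculation for a two-sample comparison. This is a minimal illustration, not the project’s actual code; the original effect size `d_orig` below is a hypothetical value chosen for the example.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.90, alpha=0.05):
    """Approximate n per group for a two-sample test of effect size d
    (Cohen's d), using the usual normal-approximation formula:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

d_orig = 0.5  # hypothetical original effect size estimate

# Stage 1: power for an effect 75% as large as the original.
n_stage1 = n_per_group(0.75 * d_orig)

# Stage 2 (only if stage 1 "fails", p > .05): power for 50% of the original.
n_stage2 = n_per_group(0.50 * d_orig)
```

Because the targeted effect shrinks from 75% to 50% of the original estimate, the second-stage sample is substantially larger than the first, which is part of why the rule can be costly when a first attempt narrowly misses significance.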
URI SIMONSOHN argues that this “90-75-50” rule is “noisy and wasteful.” He contrasts it with his own “small telescopes” approach to replication and finds that his approach produces more reliable findings with a more efficient use of sample sizes. The linked “Small Telescopes” article provides an informative discussion of some of the issues involved in what it means “to replicate.” To read more, click here.