*[From the blog “The “80% power” lie” posted by Andrew Gelman in December 2017 at Statistical Modeling, Causal Inference, and Social Science]*

###### “Suppose we really were running studies with 80% power. In that case, the expected z-score is 2.8, and 95% of the time we’d see z-scores between 0.8 and 4.8. Let’s open up the R:”

```r
> 2*pnorm(-0.8)
[1] 0.42
```

```r
> 2*pnorm(-4.8)
[1] 1.6e-06
```

###### “So we should expect to routinely see p-values ranging from 0.42 to . . . ummmm, 0.0000016. And those would be clean, pre-registered p-values, no funny business, no researcher degrees of freedom, no forking paths.”
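The arithmetic above can be checked outside R as well. Here is a minimal sketch in Python's standard library, where `NormalDist().cdf(x)` plays the role of R's `pnorm(x)`; the 1.96 critical value (two-sided test at the 5% level) is the standard assumption behind the 80%-power setup, not something stated explicitly in the quoted passage:

```python
from statistics import NormalDist

# Standard normal distribution, as in R's pnorm().
z = NormalDist()

# If the true effect puts the expected z-score at 2.8, a two-sided test
# with critical value 1.96 has roughly 80% power:
power = (1 - z.cdf(1.96 - 2.8)) + z.cdf(-1.96 - 2.8)
print(round(power, 2))  # ~0.8

# 95% of observed z-scores fall within about 2 of the expected 2.8,
# i.e. between 0.8 and 4.8. The corresponding two-sided p-values:
print(round(2 * z.cdf(-0.8), 2))  # ~0.42
print(2 * z.cdf(-4.8))            # ~1.6e-06
```

This reproduces the 0.42-to-0.0000016 range of p-values Gelman describes.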

###### “Let’s explore further . . . the 75th percentile of the normal distribution is 0.67, so if we’re really running studies with 80% power, then one-quarter of the time we’d see z-scores above 2.8 + 0.67 = 3.47.”

```r
> 2*pnorm(-3.47)
[1] 0.00052
```

###### “Dayum. We’d expect to see clean, un-hacked p-values less than 0.0005, at least a quarter of the time, if we were running studies with minimum 80% power, as we routinely claim we’re doing, if we ever want any of that sweet, sweet NIH funding. And, yes, that’s 0.0005, not 0.005. There’s a bunch of zeroes there.”
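The quarter-of-the-time claim can be verified the same way: the standard library's `inv_cdf` corresponds to R's `qnorm`, so the 75th percentile of the z-score distribution under 80% power is 2.8 plus the standard normal's 75th percentile:

```python
from statistics import NormalDist

z = NormalDist()

# 75th percentile of the standard normal (R: qnorm(0.75)):
q75 = z.inv_cdf(0.75)
print(round(q75, 2))  # ~0.67

# So a quarter of studies would produce z-scores above 2.8 + 0.67 = 3.47,
# whose two-sided p-value is about 0.0005:
print(2 * z.cdf(-(2.8 + q75)))  # ~0.0005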

###### “And, no, this ain’t happening. We don’t have 80% power. Heck, we’re lucky if we have 6% power.”

###### To read more, **click here**.
