P-Values Between 0.01 and 0.10 Are a Problem?
[From the blog, “The uncanny mountain: p-values between .01 and .10 are still a problem” by Julia Rohrer, posted at The 100% CI]
“Study 1: In line with our hypothesis, …, p = 0.03.”
“Study 2: As expected, … p = 0.02.”
“Study 3: Replicating Study 2, … p = 0.06.”
“Study 4: …qualified by the predicted interaction, … p = 0.01.”
“Study 5: Again, … p = 0.01.”
“Welcome to the uncanny … p-mountains, one of the most scenic accumulations of p-values between .01 and .10 in the world! Over the last few years, many psychologists have learned that such a distribution of p-values is troubling (see e.g. blog posts by Simine Vazire and Daniël Lakens), and statistical tools have been developed to analyze what these distributions can tell us about the underlying evidence (Uli Schimmack’s TIVA, and the p-curve by Simonsohn, Nelson, and Simmons). As it turns out, quite frequently, the answer is not good news.”
“However, the uncanny p-mountains can still be seen in journal articles published in 2018. And this is probably no surprise given that (1) our intuitions about something as unintuitive as frequentist statistics are often wrong and (2) many researchers have been socialized in an environment in which such p-values were considered perfectly normal, if not a sign of excellent experimental skills. So let’s keep talking about it!”
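To see why such a pile-up is suspicious, here is a minimal simulation sketch in Python (using numpy and scipy). The numbers are illustrative assumptions, not taken from the studies quoted above: two-group experiments with 64 participants per group and a true effect of d = 0.5, which gives roughly 80% power. The point is simply that, under a real effect with decent power, most significant p-values land well below .01, so five studies in a row landing between .01 and .10 would be a strange coincidence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2018)

# Illustrative assumptions (not from the original studies): two-group
# experiments with n = 64 per group and a true effect of d = 0.5,
# i.e. roughly 80% power at alpha = .05.
n_sims, n_per_group, effect_size = 50_000, 64, 0.5

control = rng.normal(0.0, 1.0, size=(n_sims, n_per_group))
treatment = rng.normal(effect_size, 1.0, size=(n_sims, n_per_group))
_, p_values = stats.ttest_ind(treatment, control, axis=1)

significant = p_values[p_values < 0.05]
print(f"Power at alpha = .05:                        {np.mean(p_values < 0.05):.2f}")
print(f"Share of significant p-values below .01:     {np.mean(significant < 0.01):.2f}")
print(f"Share of significant p-values in [.01, .05): {np.mean(significant >= 0.01):.2f}")
# Under a real effect, significant p-values pile up near zero; a run of
# five studies that all land between .01 and .10 is the "uncanny
# mountain" the post is describing.
```

This is the same intuition behind the p-curve tools mentioned in the quote: a right-skewed distribution of significant p-values is what genuine effects tend to produce, while a cluster just under the significance threshold is a warning sign.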