[From the abstract of the working paper, “US Courts of Appeal cases frequently misinterpret p-values and statistical significance: An empirical study”, by Adrian Barnett and Steve Goodman, posted at Open Science Framework]
“We examine how p-values and statistical significance have been interpreted in US Courts of Appeal cases from 2007 to 2017. The two most common errors were: 1) Assuming a “non-significant” p-value meant there was no important difference and the evidence could be discarded, and 2) Assuming a “significant” p-value meant the difference was important, with no discussion of context or practical significance. The estimated mean probability of a correct interpretation was 0.21 with a 95% credible interval of 0.11 to 0.31.”
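Both errors come down to treating a 0.05 threshold as a verdict on importance. A minimal sketch (not from the paper, using a hypothetical two-sample z-test with a known standard deviation) shows how the very same observed difference can be "non-significant" or "significant" depending only on sample size, which is why neither label says anything about practical importance on its own:

```python
import math

def two_sample_z_p(mean_diff, sd, n_per_group):
    """Two-sided p-value for a z-test of a difference in two group means,
    assuming a known common standard deviation `sd` (illustrative only)."""
    se = sd * math.sqrt(2.0 / n_per_group)          # standard error of the difference
    z = mean_diff / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided p = 2 * (1 - Phi(|z|))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# The identical observed difference (0.5 units, sd = 1) at two sample sizes:
p_small = two_sample_z_p(0.5, 1.0, n_per_group=10)   # "non-significant", yet same effect
p_large = two_sample_z_p(0.5, 1.0, n_per_group=200)  # "significant", yet same effect

print(f"n=10 per group:  p = {p_small:.3f}")
print(f"n=200 per group: p = {p_large:.2e}")
```

Discarding the first result as "no difference" is error 1; calling the second "important" without asking whether a 0.5-unit difference matters in context is error 2.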