*[From the article “A statistical fix for the replication crisis in science” by Valen E. Johnson at https://theconversation.com/au.]*

###### “In a trial of a new drug to cure cancer, 44 percent of 50 patients achieved remission after treatment. Without the drug, only 32 percent of previous patients did the same. The new treatment sounds promising, but is it better than the standard?”

###### “That question is difficult, so statisticians tend to answer a different question. They look at their results and compute something called a p-value. If the p-value is less than 0.05, the results are “statistically significant” – in other words, unlikely to be caused by just random chance.”

###### “The problem is, many statistically significant results **aren’t replicating**. A treatment that shows promise in one trial doesn’t show any benefit at all when given to the next group of patients. This problem has become so severe that **one psychology journal actually banned p-values altogether**.”

###### “My colleagues and I have studied this problem, and we think we know what’s causing it. The bar for claiming statistical significance is simply too low.”
