*[From the paper “The practical alternative to the p-value is the correctly used p-value” by Daniël Lakens, posted at __PsyArXiv Preprints__]*

###### “I do not think it is useful to tell researchers what they want to know. Instead, we should teach them the possible questions they can ask (Hand, 1994). One of these questions is how surprising observed data is under the assumption of some model, to which a *p*-value provides an answer.”

###### “The accusation that *p*-values are a cause of the problems with replicability across scientific disciplines lacks empirical support. Hanson (1958) examined the replicability of research findings published in anthropology, psychology, and sociology. One of the hypotheses examined was whether propositions advanced with explicit confirmation criteria, such as the rejection of a hypothesis at a 5% significance level, were more replicable than propositions made without such an explicit confirmation criterion. He found that ‘over 70 per cent of the original propositions advanced with explicit confirmation criteria were later confirmed in independent tests, while less than 46 per cent of the propositions advanced without explicit confirmation criteria were later confirmed’.”

###### “There is also no empirical evidence to support the idea that replacing hypothesis testing with estimation, or *p*-values with for example Bayes factors, will matter in practice. …If alternative approaches largely lead to the same decisions as a *p*-value when used with care, why exactly is the *p*-value the problem?”

###### “Most problems attributed to *p*-values are problems with the practice of null-hypothesis significance testing. Many misinterpretations of single *p*-values have to do with either concluding a meaningful effect is absent after a non-significant result, or misinterpreting a significant result as an important effect.”

###### “I personally believe substantial improvements can be made by teaching researchers how to calculate *p*-values for minimal-effects tests and equivalence tests. Minimal-effects tests and equivalence tests require the same understanding of statistics as null-hypothesis tests, but provide an easy way to ask different questions from your data, such as how to provide support for the absence of a meaningful effect.”
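The equivalence tests mentioned in this passage are commonly run as two one-sided tests (TOST): the effect is tested against a lower and an upper equivalence bound, and equivalence is supported only if both one-sided tests are significant. The sketch below is an illustration of that idea, not code from the paper; the function name, the bounds, and the large-sample z approximation (in place of a t-test) are all assumptions made for brevity.

```python
from math import sqrt
from statistics import NormalDist

def tost_equivalence(x, low, high):
    """Two one-sided tests (TOST) for equivalence of a sample mean
    to the range [low, high].  Large-sample z approximation; the
    name and bounds are illustrative, not from the paper."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    se = sqrt(var / n)
    z = NormalDist()
    # One-sided test against the lower bound (H0: mean <= low)
    p_low = 1 - z.cdf((mean - low) / se)
    # One-sided test against the upper bound (H0: mean >= high)
    p_high = z.cdf((mean - high) / se)
    # Equivalence is supported only if BOTH one-sided tests reject,
    # so the reported p-value is the larger of the two
    return max(p_low, p_high)
```

A small TOST p-value supports the *absence* of a meaningful effect, exactly the question the quote says standard null-hypothesis tests cannot answer; flipping the rejection direction of the same two tests gives a minimal-effects test.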

###### “Teaching students that testing a range prediction is just as easy as testing against an effect size of 0 has almost no cost but might solve some of the most common misunderstandings of *p*-values.”

###### To read the paper, **click here**.