Archives


HIRSCHAUER et al.: Twenty Steps Towards an Adequate Inferential Interpretation of p-Values in Econometrics

This blog post is based on the paper of the same name by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker in the Journal of Economics and Statistics. It is motivated by prevalent inferential errors and the intensifying debate on p-values – as…


To p-Value or Not to p-Value? An Answer From Signal Detection Theory

[From the article “Insights into Criteria for Statistical Significance from Signal Detection Analysis” by Jessica Witt, published in Meta-Psychology] “… the best criteria for statistical significance are ones that maximize discriminability between real and null effects, not just those that…


When Trying to Explain p-Values, Maybe Try This?

[From the blog “P-values 101: An attempt at an intuitive but mathematically correct explanation” by Xenia Schmalz, posted at Xenia Schmalz’s blog] “…what exactly are p-values, what is p-hacking, and what does all of that have to do with the replication crisis?…


How Many Ways Can You Misinterpret p-Values, Confidence Intervals, Statistical Tests, and Power? 25

[From the blog “Misinterpreting Tests, P-Values, Confidence Intervals & Power” by Dave Giles, posted at his blogsite, Econometrics Beat] “Today I was reading a great paper by Greenland et al. (2016) that deals with some common misconceptions and misinterpretations that arise not…


Using Bayesian Reanalysis to Decide Which Studies to Replicate

[From the preprint “When and Why to Replicate: As Easy as 1, 2, 3?” by Sarahanne Field, Rink Hoekstra, Laura Bringmann, and Don van Ravenzwaaij, posted at PsyArXiv Preprints.] “…a flood of new replications of existing research have reached the…


HIRSCHAUER et al.: Why replication is a nonsense exercise if we stick to dichotomous significance thinking and neglect the p-value’s sample-to-sample variability

[This blog is based on the paper “Pitfalls of significance testing and p-value variability: An econometrics perspective” by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker, Statistics Surveys 12(2018): 136-172.] Replication studies are often regarded as the means to…


Failure of Justice: p-Values and the Courts

[From the abstract of the working paper, “US Courts of Appeal cases frequently misinterpret p-values and statistical significance: An empirical study”, by Adrian Barnett and Steve Goodman, posted at Open Science Framework] “We examine how p-values and statistical significance have been interpreted…


80% Power? Really?

[From the blog “The “80% power” lie” posted by Andrew Gelman in December 2017 at Statistical Modeling, Causal Inference, and Social Science] “Suppose we really were running studies with 80% power. In that case, the expected z-score is 2.8, and…
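The arithmetic behind the quoted z-score can be checked directly. The sketch below is my own reconstruction of that calculation (not code from Gelman's post), assuming a two-sided test at alpha = 0.05: the critical z is about 1.96, and 80% power requires the true effect's expected z-score to sit about 0.84 standard deviations above that threshold, i.e. near 2.8.

```python
from statistics import NormalDist  # Python 3.8+ standard library

z = NormalDist()

# Two-sided alpha = 0.05 gives a critical z of about 1.96.
z_crit = z.inv_cdf(1 - 0.05 / 2)

# For 80% power, 80% of the sampling distribution of z must fall
# above z_crit, so the expected z-score is z_crit + inv_cdf(0.80).
expected_z = z_crit + z.inv_cdf(0.80)   # about 1.96 + 0.84 = 2.8

# Sanity check: recover the (one-tail approximation of) power.
power = 1 - z.cdf(z_crit - expected_z)

print(round(expected_z, 1), round(power, 2))
```

The check ignores the negligible probability mass in the opposite tail of the two-sided test; the point is simply that "80% power" quietly presumes effects large enough to produce z-scores near 2.8 on average.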


P-Values Between 0.01 and 0.10 Are a Problem?

[From the blog, “The uncanny mountain: p-values between .01 and .10 are still a problem” by Julia Rohrer, posted at The 100% CI] “Study 1: In line with our hypothesis, …, p = 0.03.” “Study 2: As expected, … p =…


A Summary of Proposals to Improve Statistical Inference

In a recent comment published in the Journal of the American Medical Association, John Ioannidis provided the following summary of proposals (see table below). The summary, and his brief commentary, may be of interest to readers of TRN. Source: Ioannidis…
