Archives


HIRSCHAUER et al.: Why replication is a nonsense exercise if we stick to dichotomous significance thinking and neglect the p-value’s sample-to-sample variability

[This blog is based on the paper “Pitfalls of significance testing and p-value variability: An econometrics perspective” by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker, Statistics Surveys 12 (2018): 136-172.] Replication studies are often regarded as the means to…
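
The sample-to-sample variability in the title is easy to make concrete. Below is a minimal simulation (ours, not from the paper) that repeatedly draws two groups from the same population, with an assumed true effect of 0.3 SD and 50 observations per group, and records the p-value each time:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    effect, n, reps = 0.3, 50, 10_000   # assumed true effect (SD units), group size, replications

    pvals = np.array([
        stats.ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue
        for _ in range(reps)
    ])

    print(f"median p = {np.median(pvals):.3f}")
    print(f"5th-95th percentile of p: {np.percentile(pvals, 5):.4f} to {np.percentile(pvals, 95):.2f}")
    print(f"share significant at 0.05: {np.mean(pvals < 0.05):.2f}")

Identical studies of the same true effect produce p-values spanning orders of magnitude, which is exactly the variability that dichotomous significant/non-significant thinking hides.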


Failure of Justice: p-Values and the Courts

[From the abstract of the working paper, “US Courts of Appeal cases frequently misinterpret p-values and statistical significance: An empirical study”, by Adrian Barnett and Steve Goodman, posted at Open Science Framework] “We examine how p-values and statistical significance have been interpreted…


80% Power? Really?

[From the blog “The ‘80% power’ lie”, posted by Andrew Gelman in December 2017 at Statistical Modeling, Causal Inference, and Social Science] “Suppose we really were running studies with 80% power. In that case, the expected z-score is 2.8, and…
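
Gelman’s 2.8 is simple arithmetic: with a two-sided test at alpha = 0.05 the critical z-value is 1.96, and 80% power requires the true effect to sit another 0.84 standard errors above that. A quick check (a sketch under the standard normal approximation):

    from scipy.stats import norm

    alpha, power = 0.05, 0.80
    z_crit = norm.ppf(1 - alpha / 2)       # 1.96: two-sided critical value
    z_expected = z_crit + norm.ppf(power)  # 1.96 + 0.84 = 2.80: expected z under 80% power

    print(f"expected z-score: {z_expected:.2f}")                   # 2.80
    print(f"implied power:    {norm.sf(z_crit - z_expected):.2f}")  # recovers 0.80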


P-Values Between 0.01 and 0.10 Are a Problem?

[From the blog, “The uncanny mountain: p-values between .01 and .10 are still a problem” by Julia Rohrer, posted at The 100% CI] “Study 1: In line with our hypothesis, …, p = 0.03.” “Study 2: As expected, … p =…
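
The arithmetic behind the suspicion can be checked directly: even when an effect is real and a study is reasonably powered, only a minority of p-values land between .01 and .10, so a paper whose studies all fall in that band is reporting an improbable pattern. A rough simulation (ours, assuming two-group t-tests with a 0.5 SD effect and 50 observations per group):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    effect, n, reps = 0.5, 50, 10_000   # assumed effect size (SD units) and group size

    p = np.array([
        stats.ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue
        for _ in range(reps)
    ])

    in_band = np.mean((p >= 0.01) & (p <= 0.10))
    print(f"power at alpha = 0.05:       {np.mean(p < 0.05):.2f}")
    print(f"P(.01 <= p <= .10):          {in_band:.2f}")
    print(f"P(four studies all in band): {in_band**4:.4f}")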


A Summary of Proposals to Improve Statistical Inference

In a recent comment published in the Journal of the American Medical Association, John Ioannidis provided the following summary of proposals (see table below). The summary, and his brief commentary, may be of interest to readers of TRN.  Source: Ioannidis…


2 Humps = P-Hacking + Publication Bias?

In a recent blog post at Simply Statistics, Jeff Leek announced a new R package called tidypvals: “The tidypvals package is an effort to find previous collections of published p-values, synthesize them, and tidy them into one analyzable data set.” In a preview…
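
The “two humps” are the shape of the pooled p-value histogram: a spike near zero, as real effects produce, plus a second bump just below 0.05 that is hard to explain without p-hacking and selective publication. A toy simulation of how that shape can arise (ours, in Python; tidypvals itself is an R package, and the mechanisms and parameters below are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(42)

    true_p = rng.beta(0.3, 8, 5_000)    # crude stand-in for p-values from real effects
    null_p = rng.uniform(0, 1, 5_000)   # p-values from null effects are uniform

    # p-hacking: "almost significant" null results get nudged below .05
    hackable = (null_p > 0.05) & (null_p < 0.10)
    null_p[hackable] = rng.uniform(0.01, 0.05, hackable.sum())

    # publication bias: nonsignificant results are published only 20% of the time
    p = np.concatenate([true_p, null_p])
    published = p[(p < 0.05) | (rng.uniform(0, 1, p.size) < 0.2)]

    counts, _ = np.histogram(published, bins=np.arange(0, 1.01, 0.01))
    for i in (0, 4, 5):   # bins [0, .01), [.04, .05), [.05, .06)
        share = counts[i] / counts.sum()
        print(f"share of p-values in [{i/100:.2f}, {(i+1)/100:.2f}): {share:.3f}")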


If the American Statistical Association Warns About p-Values, and Nobody Hears It, Does It Make a Sound?

[From the article, “The ASA’s p-value statement, one year on”, which appeared in the online journal Significance, a publication of the American Statistical Association] “A little over a year ago now, in March 2016, the American Statistical Association (ASA) took…


Bayes Factors Versus p-Values

In a recent article in PLOS ONE, Don van Ravenzwaaij and John Ioannidis argue that Bayes factors should be preferred to significance testing (p-values) when assessing the effectiveness of new drugs. At his blog The 20% Statistician, Daniel Lakens argues…
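
To see the difference concretely: a p-value is a tail probability under the null alone, while a Bayes factor compares how well the data are predicted by the null versus an alternative. A minimal sketch (ours, using a simple point-null versus point-alternative normal comparison, not the default-prior Bayes factors used in the paper):

    from scipy.stats import norm

    z = 1.96                 # observed z-statistic: a "just significant" trial result
    p_value = 2 * norm.sf(abs(z))

    # Bayes factor for H1 vs H0, both modeling z ~ Normal(mean, 1)
    delta = 2.8              # assumed alternative: the effect an 80%-powered trial targets
    bf10 = norm.pdf(z, loc=delta) / norm.pdf(z, loc=0)

    print(f"p-value:                 {p_value:.3f}")  # 0.050
    print(f"Bayes factor (H1 vs H0): {bf10:.1f}")     # about 4.8

A result that just clears p = 0.05 yields a Bayes factor below 5 against this alternative, which illustrates the kind of gap between “significant” and “strong evidence” the debate turns on.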


IN THE NEWS: The Economist (21 January 2016)

(FROM THE ARTICLE “Are Results in Top Journals To Be Trusted?”) A paper recently published in the American Economic Journal: Applied Economics, entitled “Star Wars: The Empirics Strike Back”, “analyses 50,000 tests published between 2005 and 2011 in three top American journals. It finds that the…


REBLOG: Evidence of Publication Bias and Misreported p-Values

FROM THE BLOG POLITICAL SCIENCE REPLICATION:  “A new article by researchers at the University of Amsterdam shows that publication bias towards statistically significant results may cause p-value misreporting. The team examined hundreds of published articles and found that authors had…
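
This kind of misreporting is mechanically detectable, because a p-value is a deterministic function of the reported test statistic and its degrees of freedom. A simplified sketch of such a consistency check (ours, with made-up example values; the R package statcheck automates checks along these lines at scale):

    from scipy import stats

    # (label, t statistic, df, reported two-sided p) - hypothetical examples
    reported = [
        ("t(28) = 2.20, p = .04", 2.20, 28, 0.04),
        ("t(61) = 1.90, p = .03", 1.90, 61, 0.03),   # inconsistent on purpose
    ]

    for label, t, df, p_rep in reported:
        p_calc = 2 * stats.t.sf(abs(t), df)
        verdict = "OK" if abs(p_calc - p_rep) < 0.005 else "MISMATCH"  # allow for rounding
        print(f"{label}  ->  recomputed p = {p_calc:.3f}  {verdict}")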
