[This blog is based on the paper “Pitfalls of significance testing and p-value variability: An econometrics perspective” by Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, and Claudia Becker, Statistics Surveys 12(2018): 136-172.] Replication studies are often regarded as the means to…

Read More

[From the abstract of the working paper, “US Courts of Appeal cases frequently misinterpret p-values and statistical significance: An empirical study”, by Adrian Barnett and Steve Goodman, posted at Open Science Framework] “We examine how p-values and statistical significance have been interpreted…

Read More

[From the blog post “The ‘80% power’ lie” posted by Andrew Gelman in December 2017 at Statistical Modeling, Causal Inference, and Social Science] “Suppose we really were running studies with 80% power. In that case, the expected z-score is 2.8, and…

Read More

[From the blog post “The uncanny mountain: p-values between .01 and .10 are still a problem” by Julia Rohrer, posted at The 100% CI] “Study 1: In line with our hypothesis, …, p = 0.03.” “Study 2: As expected, … p =…

Read More

In a recent comment published in the Journal of the American Medical Association, John Ioannidis provided the following summary of proposals (see table below). The summary, and his brief commentary, may be of interest to readers of TRN. Source: Ioannidis…

Read More

In a recent blog post at Simply Statistics, Jeff Leek announced a new R package called tidypvals: “The tidypvals package is an effort to find previous collections of published p-values, synthesize them, and tidy them into one analyzable data set.” In a preview…

Read More

[From the article “The ASA’s p-value statement, one year on”, which appeared in the online journal Significance, a publication of the American Statistical Association] “A little over a year ago now, in March 2016, the American Statistical Association (ASA) took…

Read More

In a recent article in PLOS One, Don van Ravenzwaaij and John Ioannidis argue that Bayes factors should be preferred to significance testing (p-values) when assessing the effectiveness of new drugs. At his blogsite The 20% Statistician, Daniel Lakens argues…

Read More

[From the article “Are Results in Top Journals To Be Trusted?”] A paper recently published in the American Economic Journal, entitled “Star Wars: The Empirics Strike Back”, “analyses 50,000 tests published between 2005 and 2011 in three top American journals. It finds that the…

Read More

[From the blog Political Science Replication] “A new article by researchers at the University of Amsterdam shows that publication bias towards statistically significant results may cause p-value misreporting. The team examined hundreds of published articles and found that authors had…

Read More