[From the blog “Cargo-cult statistics and scientific crisis” by Philip Stark and Andrea Saltelli, published by Significance magazine] “Poor practice is catching up with science, manifesting in part in the failure of results to be reproducible and replicable. Various causes have been posited, but…
In a recent comment published in the Journal of the American Medical Association, John Ioannidis provided the following summary of proposals (see table below). The summary, and his brief commentary, may be of interest to readers of TRN. Source: Ioannidis…
[From the article “Five ways to fix statistics” posted at nature.com] “As debate rumbles on about how and how much poor statistics is to blame for poor reproducibility, Nature asked influential statisticians to recommend one change to improve science.” Researchers…
[NOTE: This is a repost of a blog that Prasanna Parasurama published at the blogsite Towards Data Science]. “The confidence intervals of the two groups overlap, hence the difference is not statistically significant” The statement above is wrong. Overlapping confidence…
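A minimal sketch (not taken from Parasurama's post) of why that statement is wrong, using hypothetical summary statistics: the two 95% intervals overlap, yet the test of the difference is significant at the 5% level, because the standard error of the difference is sqrt(se_a² + se_b²), not se_a + se_b.

```python
# Hypothetical means and standard errors chosen for illustration only.
from scipy import stats

mean_a, se_a = 0.0, 1.0   # group A: sample mean and standard error
mean_b, se_b = 3.5, 1.0   # group B: sample mean and standard error
z = stats.norm.ppf(0.975) # ~1.96 for a 95% interval

ci_a = (mean_a - z * se_a, mean_a + z * se_a)   # (-1.96, 1.96)
ci_b = (mean_b - z * se_b, mean_b + z * se_b)   # ( 1.54, 5.46)
print("CIs overlap:", ci_a[1] > ci_b[0])        # True

# The test of the difference uses sqrt(se_a^2 + se_b^2) ~= 1.41,
# which is smaller than se_a + se_b = 2.0.
se_diff = (se_a**2 + se_b**2) ** 0.5
z_stat = (mean_b - mean_a) / se_diff
p_value = 2 * stats.norm.sf(abs(z_stat))
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")   # z = 2.47, p = 0.013 < 0.05
```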
[Note: This blog is based on our articles “Blinding Us to the Obvious? The Effect of Statistical Training on the Evaluation of Evidence” (Management Science, 2016) and “Statistical Significance and the Dichotomization of Evidence” (Journal of the American Statistical Association,…
[From the article “A statistical fix for the replication crisis in science” by Valen E. Johnson at https://theconversation.com/au.] “In a trial of a new drug to cure cancer, 44 percent of 50 patients achieved remission after treatment. Without the drug, only…
[The post below comes from a review by Richard Morey of the article “Meeting the challenge of the Psychonomic Society’s 2012 Guidelines on Statistical Issues: Some success and some room for improvement”, published in the journal Psychonomic Bulletin & Review by Peter…