Archives
BITSS to Offer Short Course on Research Transparency and Reproducibility Training in Washington DC, September 11-13, 2019

[From an announcement on the BITSS website] “Research Transparency and Reproducibility Training (RT2) provides participants with an overview of tools and best practices for transparent and reproducible social science research.” “RT2 is designed for researchers in the social and health sciences,…

Read More

A Random Sampling of the State of Transparency and Reproducibility in Social Science Journals

[From the preprint “An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014-2017)” by Tom Hardwicke, Joshua Wallach, Mallory Kidwell, and John Ioannidis, posted at MetaArXiv Preprints] “In this study, we evaluated a broad range of…

Read More

Disagreeing With Disagreeing About Abandoning Statistical Significance

[From the preprint “Abandoning statistical significance is both sensible and practical” by Valentin Amrhein, Andrew Gelman, Sander Greenland, and Blakely McShane, available at PeerJ Preprints] “Dr Ioannidis writes against our proposals to abandon statistical significance…” “…we disagree that a statistical…

Read More

Surprise? Data Sharing in Social Sciences Lags Other Disciplines

[From the article “Effect of Impact Factor and Discipline on Journal Data Sharing Policies” by David Resnik et al., published in Accountability in Research] “…we coded … 447 journals … The breakdown was: 18.1% biological sciences, 18.8% clinical sciences, 21.7%…

Read More

Surveying Reproducibility

[From the article “Assessing data availability and research reproducibility in hydrology and water resources” by Stagge, Rosenberg, Abdallah, Akbar, Attallah & James, published in Nature’s Scientific Data] “…reproducibility requires multiple, progressive components such as (i) all data, models, code, directions,…

Read More

IN THE NEWS: Wired (February 15, 2019)

[From the article “DARPA Wants to Solve Science’s Reproducibility Crisis With AI” by Adam Rogers, published in Wired] “A Darpa program called Systematizing Confidence in Open Research and Evidence—yes, SCORE—aims to assign a “credibility score” … to research findings in the…

Read More

What Can Stop Bad Science? Open Science and Modified Funding Lotteries

[From the working paper “Open science and modified funding lotteries can impede the natural selection of bad science” by Paul Smaldino, Matthew Turner, and Pablo Contreras Kallens, posted at OSF Preprints] “…we investigate the influence of three key factors on the…

Read More

Huge Replication Project in Social and Behavioral Sciences Looking for Collaborators

[From the press release “Can machines determine the credibility of research claims? The Center for Open Science joins a new DARPA program to find out” from the Center for Open Science] “The Center for Open Science (COS) has been selected…

Read More

Oh No! Not Again!

[From the article “Push button replication: Is impact evaluation evidence for international development verifiable?” by Benjamin Wood, Rui Müller, and Annette Brown, published in PLoS ONE] “…We drew a sample of articles from the ten journals that published the most…

Read More

Excellent, Cross-Disciplinary Overview of Scientific Reproducibility in the Stanford Encyclopedia of Philosophy

[From the article “Reproducibility of Scientific Results” by Fiona Fidler and John Wilcox, published in The Stanford Encyclopedia of Philosophy] “This review consists of four distinct parts. First, we look at the term “reproducibility” and related terms like “repeatability” and…

Read More