Archives


What Went Down at the World Bank’s “Transparency, Reproducibility, and Credibility” Research Symposium

[Excerpts taken from the blog “What development economists talk about when they talk about reproducibility …” by Luiza Andrade, Guadalupe Bedoya, Benjamin Daniels, Maria Jones, and Florence Kondylis, published on the World Bank’s Development Impact blog] “Can another researcher reuse…

Do We Want to Eliminate Selection Bias in Publication? Not Always

[Excerpts taken from the article “No Data in the Void: Values and Distributional Conflicts in Empirical Policy Research and Artificial Intelligence” by Maximilian Kasy, published at econfip.org] “Decision making based on data…is becoming ever more widespread. Any time such decisions…

Using Z-Curve to Estimate Mean Power for Studies Published in Psychology Journals

[From the blog “Estimating the Replicability of Psychological Science” by Ulrich Schimmack, posted at Replicability-Index] “Over the past years, I have been working on an … approach to estimate the replicability of psychological science. This approach starts with the simple…

Surveying Reproducibility

[From the article “Assessing data availability and research reproducibility in hydrology and water resources” by Stagge, Rosenberg, Abdallah, Akbar, Attallah & James, published in Nature’s Scientific Data] “…reproducibility requires multiple, progressive components such as (i) all data, models, code, directions,…

MILLER: The Statistical Fundamentals of (Non-)Replicability

“Replicability of findings is at the heart of any empirical science” (Asendorpf, Conner, De Fruyt, et al., 2013, p. 108). The idea that scientific results should be reliably demonstrable under controlled circumstances has a special status in science. In contrast…

Modelling Reproducibility

[From the preprint “A Model-Centric Analysis of Openness, Replication, and Reproducibility”, by Bert Baumgaertner, Berna Devezer, Erkan Buzbas, and Luis Nardin, posted at arXiv.org] “In order to clearly specify the conditions under which we may or may not obtain reproducible results,…

VASISHTH: The Statistical Significance Filter Leads To Overoptimistic Expectations of Replicability

[This blog draws on the article “The statistical significance filter leads to overoptimistic expectations of replicability”, authored by Shravan Vasishth, Daniela Mertzen, Lena A. Jäger, and Andrew Gelman, published in the Journal of Memory and Language, 103, 151-175, 2018. An open…

Significant Effects From Low-Powered Studies Will Be Overestimates

[From the article, “The statistical significance filter leads to overoptimistic expectations of replicability” by Shravan Vasishth, Daniela Mertzen, Lena Jäger, and Andrew Gelman, published in the Journal of Memory and Language] Highlights: “When low-powered studies show significant effects, these will…

A Unified Framework for Quantifying Scientific Credibility?

[From the abstract of the paper, “A Unified Framework to Quantify the Credibility of Scientific Findings”, by Etienne LeBel, Randy McCarthy, Brian Earp, Malte Elson, and Wolf Vanpaemel, published in the journal, Advances in Methods and Practices in Psychological Science] “…we…

ISO-AHOLA: On Reproducibility and Replication in Psychological and Economic Sciences

[This blog is a summary of a longer treatment of the subject that was published in Frontiers in Psychology in June 2017.] Physicists have asked, “why is there something rather than nothing?” They have theorized that…
