Archives


And How Are Things Going In Political Science?

[From the working paper “Why Too Many Political Science Findings Cannot be Trusted and What We Can Do About It” by Alexander Wuttke, posted at SocArXiv Papers] “…this article reviewed the meta-scientific evidence with a focus on the quantitative political science…

Read More

DID, IV, RCT, and RDD: Which Method Is Most Prone to Selective Publication and p-Hacking?

[From the working paper, “Methods Matter: P-Hacking and Causal Inference in Economics” by Abel Brodeur, Nikolai Cook, and Anthony Heyes] “…Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is…
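One common diagnostic in this literature is a caliper test, which asks whether reported test statistics bunch suspiciously just above the conventional significance cutoff. The sketch below is a minimal, hypothetical illustration of that idea, not necessarily the exact procedure used by Brodeur, Cook, and Heyes; the window width, function name, and the fake data are all assumptions chosen for demonstration.

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical caliper test (a common diagnostic in this literature, not
# necessarily the procedure in Brodeur, Cook, and Heyes): compare how many
# reported z-statistics fall just below vs. just above the 1.96 cutoff.

def caliper_test(z_stats, cutoff=1.96, width=0.10):
    """Binomial test for excess mass just above the cutoff within +/- width."""
    z = np.abs(np.asarray(z_stats))
    below = np.sum((z >= cutoff - width) & (z < cutoff))
    above = np.sum((z >= cutoff) & (z < cutoff + width))
    # Absent selection, z-statistics should be split roughly evenly in a
    # narrow window around the cutoff (p = 0.5).
    return binomtest(int(above), int(above + below), p=0.5, alternative="greater")

# Illustrative fake data: some mass pushed just past 1.96 to mimic selection.
rng = np.random.default_rng(1)
z_fake = np.concatenate([rng.normal(1.5, 1.0, 900), rng.uniform(1.96, 2.06, 100)])
print(caliper_test(z_fake))
```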

Read More

Oh No! Not Zebra Finches Too!

[From the article, “Replication Failures Highlight Biases in Ecology and Evolution Science” by Yao-Hua Law, published at http://www.the-scientist.com] “As robust efforts fail to reproduce findings of influential zebra finch studies from the 1980s, scientists discuss ways to reduce bias in such…

Read More

Pre-Registration? Meet Publication Bias

[From the blog post, “What Is Preregistration For?” by Neuroskeptic, published at Discover Magazine] “The paper reports on five studies which all address the same general question. Of these, Study #3 was preregistered and the authors write that it was performed after…

Read More

Progress in Publishing Negative Results?

[From the working paper, “Publication Bias and Editorial Statement on Negative Findings” by Cristina Blanco-Perez and Abel Brodeur] “In February 2015, the editors of eight health economics journals sent out an editorial statement which aims to reduce the incentives to…

Read More

HARKing is Bad, But Which Kind of HARKing is Worse?

[From the article “HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results?” by Kevin Murphy and Herman Aguinis, published in the Journal of Business and Psychology.] “The practice of hypothesizing after results are known (HARKing) has…

Read More

REED: Why Lowering Alpha to 0.005 is Unlikely to Help

[This blog is based on the paper, “A Primer on the ‘Reproducibility Crisis’ and Ways to Fix It” by the author] A standard research scenario is the following: A researcher is interested in knowing whether there is a relationship between…
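The scenario the post sets up can be illustrated with a quick simulation. The sketch below is not taken from the underlying paper; it is a minimal, hypothetical illustration, assuming a researcher who searches over K independent specifications under the null and reports the smallest p-value, of why even a stricter threshold such as 0.005 is crossed far more often than its nominal rate suggests.

```python
import numpy as np

# Hypothetical simulation (not from the paper): a researcher tests a null
# relationship, tries K different specifications, and reports the smallest
# p-value. How often does that best p-value fall below 0.05 vs. 0.005?

rng = np.random.default_rng(0)
n_studies = 100_000   # simulated "studies"
K = 20                # specifications searched per study (assumed)

# Under the null, each specification's p-value is ~ Uniform(0, 1);
# treating the K specifications as independent is a simplifying assumption.
min_p = rng.uniform(size=(n_studies, K)).min(axis=1)

for alpha in (0.05, 0.005):
    rate = (min_p < alpha).mean()
    print(f"alpha = {alpha:>5}: 'significant' in {rate:.1%} of null studies "
          f"(nominal rate {alpha:.1%})")
```

Under these assumptions, roughly 64% of null studies clear 0.05 and about 10% still clear 0.005, so tightening alpha narrows, but does not close, the gap opened by specification searching.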

Read More

Another Elsevier Journal Goes “Blind”

[From the article “Results masked review: peer review without publication bias” by Jennifer Franklin at Elsevier.com.] “We know that research data isn’t neat and tidy. It’s messy, complex and often throws something unexpected at us. At the Journal of Vocational…

Read More

Results-Free Peer Review: The Video

Previous posts at TRN have highlighted “results-free peer review” (RFPR) efforts at a variety of journals: see here, here, and here. The journal BMC Psychology recently put together a short (approximately 2 minutes) video discussing their new policy of “results-free…

Read More

WEICHENRIEDER: FinanzArchiv/Public Finance Analysis Wants Your Insignificant Results!

There is considerable concern among scholars that empirical papers face a drastically smaller chance of being published if results intended to confirm an established theory turn out to be statistically insignificant. Such a publication bias can provide a wrong…

Read More