[From the working paper, “Publication Bias and Editorial Statement on Negative Findings” by Cristina Blanco-Perez and Abel Brodeur]
Prior research documents a selection bias in favor of positive results among editors and referees. In other words, research articles rejecting the null hypothesis (i.e., finding a statistically significant effect) are more likely to be published than papers not rejecting the null hypothesis. This bias may lead policymakers and the academic community to place more weight on studies that find an effect than on studies that do not.
Fortunately, innovations in the social sciences are under way to improve research transparency. For instance, many scientific journals now ask authors to share their code and data to facilitate replication. Registration and pre-analysis plans are also becoming more common for randomized controlled trials and lab experiments.
In this study, we test the impact of a simple, low-cost transparency practice that aims to reduce the extent of publication bias. In February 2015, the editors of eight health economics journals published on their journals’ websites an Editorial Statement on Negative Findings. In this statement, the editors state that: “well-designed, well-executed empirical studies that address interesting and important problems in health economics, utilize appropriate data in a sound and creative manner, and deploy innovative conceptual and methodological approaches […] have potential scientific and publication merit regardless of whether such studies’ empirical findings do or do not reject null hypotheses that may be specified.”
The editors point out in the statement that it: “should reduce the incentives to engage in two forms of behavior that we feel ought to be discouraged in the spirit of scientific advancement:
– Authors withholding from submission such studies that are otherwise meritorious but whose main empirical findings are highly likely ‘negative’ (e.g., null hypotheses not rejected).
– Authors engaging in ‘data mining,’ ‘specification searching,’ and other such empirical strategies with the goal of producing results that are ostensibly ‘positive’ (e.g., null hypotheses reported as rejected).”
We collect z-statistics from two of the eight health economics journals that issued the editorial statement and compare the distribution of test statistics before and after its publication. We find that test statistics in papers submitted and published after the editorial statement are less likely to be statistically significant. The figure below illustrates our results.
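The core of such a before/after comparison can be sketched in a few lines. The example below is a minimal illustration, not the authors’ actual analysis: it uses simulated z-statistics (the means, sample sizes, and the 1.96 threshold are assumptions for illustration) and compares the share of significant tests in the two periods with a standard two-proportion z-test.

```python
import math
import random

def share_significant(z_stats, threshold=1.96):
    """Share of test statistics exceeding the significance threshold in absolute value."""
    return sum(abs(z) > threshold for z in z_stats) / len(z_stats)

def two_proportion_z_test(p1, n1, p2, n2):
    """z-statistic for the difference between two sample proportions (pooled variance)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Simulated z-statistics standing in for tests collected from published articles
# before and after the editorial statement (distributions chosen arbitrarily).
random.seed(0)
before = [random.gauss(2.0, 1.0) for _ in range(300)]
after = [random.gauss(1.6, 1.0) for _ in range(300)]

p_before = share_significant(before)
p_after = share_significant(after)
z = two_proportion_z_test(p_before, len(before), p_after, len(after))
print(f"significant before: {p_before:.2f}, after: {p_after:.2f}, z = {z:.2f}")
```

A fuller analysis would compare the entire distributions (e.g., with a Kolmogorov–Smirnov test) rather than only the share of tests clearing a single threshold, since publication bias also shows up as bunching of z-statistics just above conventional cutoffs.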