BUTERA: A Novel Approach for Novel Results

[NOTE: This post refers to the article “An Economic Approach to Alleviate the Crises of Confidence in Science: With an Application to the Public Goods Game” by Luigi Butera and John List.  The article is available as a working paper which can be downloaded here.]
In the process of generating scientific knowledge, scholars sometimes stumble upon new and surprising results. Novel studies typically face a binary fate: either their relevance and validity are dismissed, or their findings are embraced as important and insightful. Such judgements, however, commonly rely on statistical significance as the main criterion for acceptance. This poses two problems, especially when a study is the first of its kind.
The first problem is that novel results may be false positives simply because of the mechanics of statistical inference. Conversely, new and surprising results that suffer from low power or marginal statistical significance may be dismissed even though they point toward an economic association that is ultimately true.
The second problem has to do with how people should update their beliefs in light of unanticipated new scientific evidence. Given the mechanics of inference, it is difficult to provide a definitive answer when such evidence comes from a single exploration. To fix ideas, suppose that before running an experiment a Bayesian scholar assigned a prior probability of only 1% to a given result being true. After running the experiment and observing a significant result (significant at, say, the 5% level, from a test with 80% power), the scholar should update his beliefs to 13.9%, a very large increase relative to his initial beliefs. Posterior beliefs can be easily computed, for any given prior, by dividing the probability that the result is both true and declared significant by the probability that any result is declared significant. Even more dramatically, a second scholar who instead had a prior of 10% would update his posterior beliefs to 64%. The problem is clear: posterior beliefs generated from low priors are extremely volatile when they depend only on the evidence provided by a single study. Finding a referee with a prior of 10% rather than 1% can make or break a paper!
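For readers who want to verify these numbers, here is a minimal sketch of the calculation in Python, assuming a 5% significance level and 80% statistical power (conventional benchmarks, which reproduce the figures above):

# Posterior probability that a result is true after one significant finding.
def posterior(prior, alpha=0.05, power=0.80):
    # Bayes' rule: P(true | significant) = P(true and significant) / P(significant)
    true_and_significant = power * prior
    any_significant = power * prior + alpha * (1 - prior)
    return true_and_significant / any_significant

print(posterior(0.01))  # ~0.139: a 1% prior jumps to 13.9%
print(posterior(0.10))  # ~0.64: a 10% prior jumps to 64%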
The simple solution to this problem is, of course, replication: as evidence accumulates, posterior beliefs converge. Unfortunately, the incentives to replicate existing studies are rarely in place in the social sciences: once a paper is published, the original authors have little reason to replicate their own work, and other scholars typically have equally weak incentives to closely replicate it.
To address this issue, our paper proposes a simple, incentive-compatible mechanism to promote replication and generate mutually beneficial gains from trade between scholars. Our idea is simple: upon completing a study that reports novel results, the authors make it available online as a working paper but commit to never submitting it to a peer-reviewed journal. They instead calculate how many replications they need for beliefs to converge to a desired level, and offer co-authorship on a second, yet-to-be-written paper to other scholars willing to independently replicate the study. Once the team of coauthors is established, but before replications begin, the first working paper is updated to include the list of coauthors and the experimental protocol is registered in the AEA RCT Registry. This guarantees that all replications, failed and successful alike, are accounted for in the second paper. The second paper then references the first working paper, includes all replications, and is submitted to a peer-reviewed journal for publication.
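The paper works out the exact replication requirements; purely as a rough illustration, if each replication were an independent test with the same 5% significance level and 80% power, one could iterate Bayes' rule until posterior beliefs cross the desired threshold:

# Rough illustration only (not the paper's exact calculation): how many significant,
# independent results are needed before posterior beliefs cross a target level.
def tests_needed(prior, target, alpha=0.05, power=0.80):
    belief, n = prior, 0
    while belief < target:
        # Bayes' rule after one more significant result
        belief = power * belief / (power * belief + alpha * (1 - belief))
        n += 1
    return n, belief

n, belief = tests_needed(prior=0.01, target=0.95)
print(n, round(belief, 3))  # 3 0.976: the original study plus two successful replications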
We put our mechanism to work on our own experiment, where we asked: can cooperation be sustained over time when the quality of a public good cannot be precisely estimated? From charitable investments to social programs, uncertainty about the exact social returns on these investments is pervasive. Yet we know very little about how people coordinate on ambiguous and uncertain social decisions. Surprisingly, we find that the presence of (Knightian) uncertainty about the quality of a public good does not harm cooperation but rather increases it. We interpret our finding through the lens of conditional cooperation: when the value of a public good is observed with noise, conditional cooperators may be more tolerant of observed reductions in their payoffs, for instance because such reductions may be due in part to a lower-than-expected quality of the public good itself rather than solely to the presence of free-riders. However, we will wait until all replications are completed to draw more informed inferences about the effect of ambiguity on social decisions.
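Purely for illustration (the parameters below are hypothetical, not those of the actual experiment), the treatment can be thought of as a standard linear public goods game in which the return on the group account is drawn with noise, so participants cannot cleanly separate a low-quality draw from free-riding:

import random

ENDOWMENT = 20     # hypothetical endowment per player
MPCR_MEAN = 0.4    # expected marginal per-capita return on the group account
NOISE = 0.1        # spread of the realized return around its mean

def payoffs(contributions):
    # The realized return varies; in an ambiguity treatment participants would not
    # even know this distribution, so low payoffs are hard to attribute to free-riders.
    mpcr = random.uniform(MPCR_MEAN - NOISE, MPCR_MEAN + NOISE)
    pot = sum(contributions)
    return [ENDOWMENT - g + mpcr * pot for g in contributions]

print(payoffs([10, 10, 0, 5]))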
One final note: while we believe that replications are always desirable, we do not by any means suggest that all experiments, lab or field, need to follow our methodology. We believe our approach is best suited for studies whose results are unanticipated and, in some cases, at odds with the current state of knowledge on a topic. In these cases priors are more likely to be low, and perhaps more sensitive to factors such as the experience or rank of the investigator. As such, we believe our approach would be particularly beneficial for scholars at the early stages of their careers, and we hope many will consider joining forces.
Luigi Butera is a Post-Doctoral scholar in the Department of Economics at the University of Chicago. He can be contacted via email at lbutera@uchicago.edu.
