[From the “2018 Economics Replication Project” posted by Nick Huntington-Klein and Andy Gill of California State University, Fullerton]
“In this project, we are asking recruited researchers to perform a “blind” replication of one of two studies. Without telling researchers the methods used by the original study, we will instruct participants to use a particular data set and set of statistical assumptions in order to produce a single, specific causal estimate. Participants will clean the data, construct variables, and make the other minor decisions that go into a statistical analysis, aside from the data source, identifying assumption, and effect of interest, which will be held constant. By comparing the analyses that different researchers perform under these conditions, we will estimate the variability in estimates that occurs as a result of decisions that researchers make.”
“This approach is different from most replication studies in economics. We are not trying to test the validity of the original results. Instead, our aim is to measure the degree of variation in results that can be attributed to generally “invisible” features of analysis. You may have seen similar tests elsewhere, such as in the New York Times’ The Upshot section. Our project is most similar to the “Crowdsourced Data Analysis” project described by Raphael Silberzahn and Eric Uhlmann here, although our goal is slightly different.”
“If you are interested in joining us, we are looking for researchers who have at least one published or forthcoming paper in the empirical microeconomics literature and who are familiar with methods of causal identification. Participants will be offered authorship on the final publication. We are also currently working on securing funding; if we secure it, there may be financial compensation for your time.”
To learn more, click here.