Project TIER (Teaching Integrity in Empirical Research) is one of the many initiatives launched within the last several years—a number of which have been featured in previous TRN guest blogs—that seek to strengthen standards of research transparency in the social sciences. Its mission statement reads:
Project TIER’s mission is to promote a systemic change in the professional norms related to the transparency and reproducibility of empirical research in the social sciences. It is guided by the principle that providing comprehensive replication documentation for research involving statistical data should be as ubiquitous and routine as it is to provide a list of references cited. Authors should view this documentation as an essential component of how they communicate their research to other scholars, and readers should not consider a study to be credible unless such documentation is available.
We will know this mission has been accomplished when failing to provide replication documentation for an empirical study is considered as aberrant as writing a theoretical paper that does not contain proofs of the propositions, an experimental paper that does not describe the treatment conditions, or a law review article that does not cite legal statutes or judicial precedents.
Project TIER’s approach to promoting research transparency is distinctive in two ways: it focuses on the education of social scientists early in their training, and it emphasizes the things that authors of research papers can do to ensure that interested readers are able to replicate their empirical results without undue difficulty. Both of these features reflect the circumstances that led to the conception of Project TIER and in which the initiative has evolved.
The ideas that eventually grew into Project TIER began taking shape in an introductory course on statistical methods for undergraduates majoring in economics at Haverford College. Richard Ball, an economics faculty member, was the instructor for the course, and Norm Medeiros, a librarian, collaborated closely in advising students on the research projects required for the class. For those projects, students chose their own topics, found statistical data that could shed light on the questions they were investigating, examined and analyzed the data in simple ways, and then wrote complete papers in which they presented and interpreted their results.
When the research project was introduced as a requirement for the course in 2001, the initial results were not encouraging. The papers students turned in were, to put it mildly, less than completely transparent. Their descriptions of the original data they had used and the sources from which those data had been obtained, of how the original data were cleaned and processed to create the final data sets used for the analyses, and of how the figures and tables presented in the papers were generated from those final data sets, were incoherent. In most cases it was impossible to understand the empirical work underlying a paper, or to evaluate it in a constructive way.
To address this problem, we began requiring students not only to turn in printed copies of their research papers, but also to submit electronic documentation consisting of their data, code and some supporting information. We found, however, that developing a workable set of guidelines for the required documentation presented some challenges: they needed to be detailed and explicit enough that students would know unambiguously what was expected of them, and they needed to be general enough to be applicable across the varied types of data and analyses encountered in these projects; but they also needed to be short, simple and clear enough that it would be realistic to expect students to understand and implement them. It took a number of iterations to formulate guidelines that met these challenges, but over the course of several semesters we arrived at a set of written instructions that proved to be adequate. In the past ten years or so it has become routine for our students to follow those instructions when constructing replication documentation to accompany their research papers.
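To give a concrete sense of what such documentation looks like, the sketch below shows one possible layout for a student's replication package. The folder and file names are our illustration only, not the exact structure prescribed by the instructions; the Stata command files (.do) are likewise an assumption, chosen because any scripted statistical language would serve the same role:

```
replication-documentation/
├── README.txt            # overview: steps to reproduce every result in the paper
├── original-data/        # data files exactly as obtained from their sources
│   └── metadata.txt      # where, when, and how each original file was obtained
├── command-files/        # scripts recording every step of the work
│   ├── processing.do     # cleans original data, builds the final data set
│   └── analysis.do       # generates every table and figure from the final data
└── analysis-data/        # final data set(s) used for the analyses
```

A layout of this kind maps directly onto the three things students found hardest to describe in prose: where the original data came from, how they were processed, and how the reported results were produced.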
Requiring students to turn in comprehensive replication documentation with their research papers has solved the problem that led us to introduce the requirement: if any aspect of the data processing and analysis is not explained adequately in the text of a paper, it is possible to discover exactly what the students did simply by reading and running their code. But a number of other benefits have followed as well. When students know that they have to document their statistical work, and are given some guidance for doing so, they themselves understand better what they are doing. And when they understand what they are doing, the analyses they choose to conduct tend to make more sense, and the explanations they give in their papers tend to be much more coherent. Moreover, throughout the semester in which students work on a project, their instructors are able to advise them much more effectively than would be possible if students did not keep their data organized and systematically record their work in command files.
Most fundamentally, placing upon students the responsibility to ensure that their work is reproducible (and teaching them some tools for achieving this goal) reinforces the principle that one should not make claims that cannot be verified or whose validity is in doubt; allowing students to turn in a paper based on work that they cannot reproduce undermines this principle. This principle applies broadly across academic disciplines, but it is particularly important to convey to beginning students of statistics, many of whom hold the prior belief that manipulation and obfuscation are inherent in statistical analysis.
After developing a simple and effective way of teaching students to document statistical research and observing the benefits that follow from doing so, we decided it would be worthwhile to share our experiences with others. We began in 2011 by presenting a paper at the first annual Conference on Teaching and Research in Economic Education (CTREE), organized by the American Economic Association Committee on Education, and that paper later appeared in the Journal of Economic Education.
Positive responses to these and other early outreach efforts led us to launch Project TIER in 2013. The activities we have undertaken since then include
– A series of workshops for social science faculty interested in introducing principles and methods of transparent research in classes they teach on quantitative methods. The next Faculty Development Workshop will take place April 1-2, 2016, on the Haverford College campus. These workshops are offered free of charge; information and applications are now available.
– A program of year-long fellowships, in which faculty who have already made significant contributions collaborate with us in the development and dissemination of new curricula and approaches to teaching transparent research methods. We are currently working with five Fellows nominated for 2015-16, and have begun recruiting for the 2016-17 cohort of TIER Faculty Fellows, for which information and applications are also available.
– Workshops for students in doctoral programs in the social sciences, offering practical guidance on research documentation and workflow management in the course of writing an empirical dissertation. We will be conducting a workshop at Duke University, for economics graduate students, on February 12, 2016. Thanks to a generous grant, we can offer these workshops free of charge, and we would be happy to consider requests from other graduate departments interested in hosting a workshop.
Please note that we are working on a completely redesigned website, which will have a new URL: www.projecttier.org. At the time this blog is being posted, this URL is not yet active, but it will launch in the spring of 2016.
Follow us on Twitter: @Project_TIER
Ball, R. and N. Medeiros (2012). Teaching Integrity in Empirical Research: A Protocol for Documenting Data Analysis and Management. Journal of Economic Education, 43(2), 182–189.