Pre-Analysis Plans in Economics and Political Science: Are They Working?

[Excerpts taken from the article, “Pre-analysis Plans: A Stocktaking” by George Ofosu and Daniel Posner, posted on George Ofosu’s website at the London School of Economics]
“We draw a representative sample of PAPs and analyze their content to determine whether they are sufficiently clear, precise, and comprehensive as to meaningfully limit the scope for fishing and post-hoc hypothesis adjustment. We also assess whether PAPs do, in fact, tie researchers’ hands by comparing publicly available papers that report the findings of pre-registered studies to the PAPs that were registered when those studies were initiated.”
“…we drew a representative sample of PAPs from the universe of studies registered on the AEA and EGAP registries between their initiation and 2016.”
“All 195 PAPs in our sample were coded according to a common rubric that recorded details of the hypotheses that were pre-specified, the dependent and independent variables that would be used in the analysis, the sampling strategy, the inclusion and exclusion rules, and the statistical models to be run, among other features.”
“For the sub-sample of 93 PAPs for which publicly available papers were available, we added further questions that addressed how faithfully the pre-specified details of the analysis were adhered to in the resulting paper.”
“We supplemented our coding of PAPs with an anonymous survey of PAP users to elicit their experiences with writing and using PAPs in their research.”
“The overwhelming majority of the 195 PAPs we coded were from field (63%), survey (27%), or lab (4%) experiments; observational studies comprised just 4% of our sample.”
“In our sample of PAPs, 77% of primary dependent variables and 93% of independent/treatment variables were judged to have been clearly specified.”
“In 44% of PAPs, the number of pre-specified control variables was judged to be unclear, making it nearly impossible to compare what was pre-registered with what is ultimately presented in the resulting paper.”
“…only 68% of PAPs were judged to have spelled out the precise statistical model to be tested and just 37% specified how they would estimate their standard errors.”
“…just 25% of PAPs specified how they would deal with missing values and/or attrition; just 13% specified how they would deal with noncompliance; just 8% specified how they would deal with outliers; and just 20% specified how they would deal with covariate imbalances.”
“Ninety percent of the PAPs we coded were judged to have specified a clear hypothesis.”
“While 34% of PAPs specified between one and five hypotheses—a number sufficiently small as to limit the leeway for selective presentation of results downstream—18% specified between six and ten hypotheses; 18% specified between 11 and 20 hypotheses; 21% specified between 21 and 50 hypotheses; and 8% specified more than 50 hypotheses…PAPs that pre-specify so many hypotheses raise questions about the value of pre-registration.”
“Taken together, these practices leave significant leeway for authors to omit results that are null or that complicate the story they wish to tell. But do authors take advantage of this latitude in practice?”
“To find out, we examined the sub-sample of 93 PAPs we coded that had publicly available papers and compared the primary hypotheses pre-specified in the PAP with the hypotheses discussed in the paper and/or its appendices. We find that study authors faithfully presented the results of all their pre-registered primary hypotheses in their paper or its appendices in just 61% of cases.”
“We found that 18% of the papers in our sample presented tests of novel hypotheses that were not pre-registered…authors that presented results based on hypotheses that were not pre-registered failed to mention this in 82% of cases.”
“Foremost among the objections to PAPs is that they are too time-consuming to prepare. Eighty-eight percent of researchers in our PAP users’ survey reported devoting a week or more to writing the PAP for a typical project, with 32% reporting spending an average of 2-4 weeks and 26% reporting spending more than a month.”
“However, while the PAP users we surveyed nearly all agreed that writing a PAP was costly, 64% agreed with the statement that ‘it takes a considerable amount of time, but it is worth it.’”
“Our stocktaking suggests that PAPs, as they are currently written and used, are not doing everything their proponents had hoped for…The details of the analyses that PAPs pre-specify…are often inadequate to reduce researcher degrees of freedom in a meaningful way.”
“In addition, papers that result from pre-registered analyses do not always follow them. Some papers introduce entirely novel hypotheses; others present only a subset of the hypotheses that were pre-registered.”
“However, while many of the PAPs we analyzed fell short of the ideal, a majority were sufficiently clear, precise and comprehensive to substantially limit the scope for fishing and post-hoc hypothesis adjustment.”
“So, even if improvements in research credibility do not come from every PAP, the growing adoption of PAPs in Political Science and Economics has almost certainly increased the number of credible studies in these fields.”
