BROWN, LAMBERT, & WOJAN: At the Intersection of Null Findings and Replication

Replication is an important topic in economic research, or in any social science for that matter. The issue is most pressing when an analysis is undertaken to inform decisions by policymakers. Drawing inferences from null or insignificant findings is particularly problematic because it is often unclear when “not significant” can be interpreted as “no effect.” We recently wrestled with this issue in our paper, “The Effect of the Conservation Reserve Program on Rural Economies: Deriving a Statistical Verdict from a Null Finding,” published in the American Journal of Agricultural Economics. Below is a summary of our findings.
While the inherent bias toward publishing research with significant findings is widely recognized, there are times when not finding an effect may be more important. For example, suggestive evidence that a policy may not work is arguably more consequential than statistical confirmation that it does. The conundrum produced by null findings is not having any statistical basis for determining whether the true effect is close to zero or whether the test is underpowered, that is, unlikely to detect a substantive effect. Our paper developed a method for deriving probabilities for null findings by providing a valid ex post estimate of statistical power. This allows economists and policymakers to more confidently conclude when “not significant” can, in fact, be interpreted as “no substantive effect.”
We demonstrate our method by replicating an analysis from the Economic Research Service’s (ERS) 2004 Report to Congress on the economic implications of the Conservation Reserve Program (CRP). The program, which was signed into law in 1985, was designed to remove environmentally vulnerable land from agricultural production. However, farm-dependent counties experienced both employment and population declines through the economically prosperous 1990s, raising concerns that the program might have cost jobs due to a reduction in agricultural production. Indeed, the ERS report identified worse employment growth in farm-dependent counties with high-CRP enrollments relative to their low-CRP enrollment peers. However, the report was unable to attribute lost employment to CRP enrollments.
While the report failed to identify a statistically significant, negative long-term effect of the program on employment growth, the authors cautioned that the verdict of “no negative employment effect” was only valid if the econometric test was statistically powerful. Replicating the 2004 analysis using new statistical inference methods allowed us to determine whether the tentative 2004 conclusion was correct. Our replication addresses two critical deficiencies that prevent economists from estimating statistical power: (1) we posit a compelling effect size, the level of job losses that would raise concerns about the trade-off with environmental benefits; and (2) we estimate the variability of an unobserved alternative distribution using simulation methods. We conclude that the test used in the ERS report had high power for detecting employment effects of −1 percent or lower, equivalent to job losses that would reduce the program’s environmental benefits by a third. An unrestricted test in line with Congress’s charge to search for “any effect” had very low power.
In many circumstances, economists do not have the opportunity to conduct a power analysis before research starts. The approaches we suggest can be used to determine power for univariate analyses or multivariate regressions after the fact, provided the data-generating process can be replicated and the effect size of economic significance or policy relevance is stated. Given a range of posited effect sizes, our approach supplements an array of tools to inform decision making in the event of a null finding.
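The general logic of such an ex post power calculation can be sketched with a small Monte Carlo simulation: simulate data repeatedly under a posited effect size, run the test on each draw, and count how often the null is rejected. The one-sample setup and all numbers below are illustrative assumptions, not the paper’s actual data-generating process, which replicates the full regression analysis from the 2004 ERS report:

```python
import numpy as np
from statistics import NormalDist

def simulated_power(effect_size, sd, n, alpha=0.05, reps=5000, seed=0):
    """Monte Carlo estimate of the power of a two-sided t-test to
    detect `effect_size`, given noise standard deviation `sd` and
    sample size `n`. Illustrative sketch only: the paper's method
    simulates the unobserved alternative distribution for the full
    regression model rather than this simple one-sample case."""
    rng = np.random.default_rng(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # large-sample critical value
    rejections = 0
    for _ in range(reps):
        # Draw data under the alternative hypothesis (true effect != 0)
        y = effect_size + rng.normal(0.0, sd, size=n)
        t = y.mean() / (y.std(ddof=1) / np.sqrt(n))
        if abs(t) > crit:
            rejections += 1
    return rejections / reps

# A well-powered test for a posited -1 percent employment effect
# versus an underpowered test for a much smaller effect
# (effect sizes, sd, and n are hypothetical):
print(simulated_power(effect_size=-1.0, sd=3.0, n=300))
print(simulated_power(effect_size=-0.1, sd=3.0, n=100))
```

The fraction of rejections across replications is the estimated power; a value near one means a true effect of the posited size would almost surely have been detected, so a null finding is informative, while a low value means the test could easily have missed a substantive effect.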
In the spirit of replication, you can find our data and code in the supporting documentation of the article. If you are not able to access the article, the supplemental materials are also available here. We hope that others confronted with the “null hypothesis lacking error probability” conundrum will consider using the methods as a tool for making null findings potentially more informative, and for making our toolkit of applied econometric methods more useful for decision-making. 
Jason P. Brown is an assistant vice president and economist at the Federal Reserve Bank of Kansas City. Dayton M. Lambert is a professor and Willard Sparks Chair, Department of Agricultural Economics, Oklahoma State University. Timothy R. Wojan is a senior economist, USDA, Economic Research Service. The opinions expressed are those of the authors and are not attributable to the Federal Reserve Bank of Kansas City, the Federal Reserve System, Oklahoma State University, the Economic Research Service, or USDA. Correspondence can be directed to Jason Brown at
