Should Null Results Require Greater Justification?

In a recent editorial in Management Review Quarterly, the journal invited replications, and put forth the following “Seven Principles of Effective Replication Studies”: 
#1. “Understand that replication is not reproduction”
#2. “Aim to replicate published studies that are relevant” 
#3. “Try to replicate in a way that potentially enhances the generalizability of the original study”
#4. “Do not compromise on the quality of data and measures” 
#5. “Nonsignificant findings are publishable but need explanation”
#6. “Extensions are possible but not necessary”
#7. “Choose an appropriate format based on the replication approach”
We have to admit that #5 caught our eye. Here is the explanation the editors gave:
“…nonsignificant results or ‘failed’ replications can be extremely important to further theory development. However, they need more information and explanation than ‘successful’ replications. Replication studies should account for this and include detailed comparison tables of the original and replicated results and an elaborate discussion of the differences and similarities between the studies. Authors need to make an effort to explain deviant findings. The differences might be due to different contextual environments from where the sample is drawn; the use of different, more appropriate measures; different statistical methods or simply a result of frequentist null-hypothesis testing where, by definition, false positives are possible (Kerr, 1998). In any case, authors should comment on these possibilities and take a clear stand.”
Note that a replicating author who can’t explain why their replication did not reproduce the original study’s results is given the following lifeline: “The differences might be due to… simply a result of frequentist null-hypothesis testing where, by definition, false positives are possible.”
Is it reasonable that nonsignificant results in replications — and we acknowledge that the fact it is a replication is important — be held to a higher standard of justification than significant results?
We think so, but we wonder whether it will really be sufficient for MRQ for a replicating author to “explain” their nonsignificant results by claiming the original study was a rare (5%) result of sampling error.
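To make the arithmetic behind that “lifeline” concrete, here is a minimal simulation sketch (our illustration, not anything from the editorial): it runs many two-group studies in which the true effect is exactly zero and counts how often a two-sided z-test nonetheless comes out “significant” at α = 0.05. All names and parameters are of our choosing.

```python
import math
import random
import statistics

def simulate_null_studies(num_studies=10_000, n=100, alpha=0.05, seed=1):
    """Simulate studies comparing two groups drawn from the SAME normal
    distribution (true effect = 0) and return the fraction declared
    'significant' by a two-sided z-test on the difference of means."""
    rng = random.Random(seed)
    # two-sided critical value of the standard normal at level alpha
    crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    false_positives = 0
    for _ in range(num_studies):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        diff = sum(a) / n - sum(b) / n
        # standard error of the difference, using the known sigma = 1
        se = math.sqrt(1 / n + 1 / n)
        if abs(diff / se) > crit:
            false_positives += 1
    return false_positives / num_studies

rate = simulate_null_studies()
print(f"false-positive rate: {rate:.3f}")  # hovers near alpha = 0.05
```

By construction, roughly 5% of these null studies come out significant, so a significant original finding can indeed be one of those false positives, and a nonsignificant replication would then be the expected outcome rather than an anomaly needing elaborate explanation.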
And we wonder what others think.
The full editorial is available from Management Review Quarterly.

