Replications Can Lessen the Pressure To Get It Right the First Time — And That Can Be a Good Thing

[From the blog “(back to basics:) How is statistics relevant to scientific discovery?” by Andrew Gelman, posted at Statistical Modeling, Causal Inference, and Social Science]
“If we are discouraged from criticizing published work—or if our criticism elicits pushback and attacks from the powerful, or if it’s too hard to publish criticisms and obtain data for replication—that’s bad for discovery, in three ways.”
“First, criticizing errors allows new science to move forward in useful directions. We want science to be a sensible search, not a random walk.”
“Second, learning what went wrong in the past can help us avoid errors in the future. That is, criticism can be methodological and can help advance research methods.”
“Third, the potential for criticism should allow researchers to be more free in their speculation. If authors and editors felt that everything published in a top journal was gospel, there could well be too much caution in what to publish.”
“Just as, in economics, it is said that a social safety net gives people the freedom to start new ventures, in science the existence of a culture of robust criticism should give researchers a sense of freedom in speculation, in confidence that important mistakes will be caught.”
“Along with this is the attitude, which I strongly support, that there’s no shame in publishing speculative work that turns out to be wrong. We learn from our mistakes. Shame comes not when people make mistakes, but rather when they dodge criticism, won’t share their data, refuse to admit problems, and attack their critics.”
“We want to encourage scientists to play with new ideas. To this purpose, I recommend the following steps:”
– “Reduce the costs of failed experimentation by being more clear when research-based claims are speculative.”
– “React openly to follow-up studies. Once you recognize that published claims can be wrong (indeed, that’s part of the process), don’t hang on to them too long or you’ll reduce your opportunities to learn.”
– “Publish all your data and all your comparisons (you can do this using graphs so as to show many comparisons in a compact grid of plots). If you follow current standard practice and focus on statistically significant comparisons, you’re losing lots of opportunities to learn.”
– “Avoid the two-tier system. Give respect to a student project or arXiv paper just as you would to a paper published in Science or Nature.”
“We should all feel free to speculate in our published papers without fear of overly negative consequences in the (likely) event that our speculations are wrong; we should all be less surprised to find that published research claims did not work out (and that’s one positive thing about the replication crisis, that there’s been much more recognition of this point); and we should all be more willing to modify and even let go of ideas that didn’t happen to work out, even if these ideas were published by ourselves and our friends.”
