Is It Pointless to Try to Predict Reproducibility?
[Excerpts taken from the blog post “Responding to the replication crisis: reflections on Metascience2019” by Dorothy Bishop, published on her blog, BishopBlog]
“I’m just back from MetaScience 2019…It is a sign of a successful meeting, I think, if it gets people…raising more general questions about the direction the field is going in, and it is in that spirit I would like to share some of my own thoughts.”
“…Another major concern I had was the widespread reliance on proxy indicators of research quality. One talk that exemplified this was Yang Yang’s presentation on machine intelligence approaches to predicting replicability of studies…implicit in this study was the idea that the results from this exercise could be useful in future in helping us identify, just on the basis of textual analysis, which studies were likely to be replicable.”
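To make concrete what a textual-analysis approach to predicting replicability might look like, here is a minimal hypothetical sketch using scikit-learn's TfidfVectorizer and LogisticRegression. The training data, labels, and model choice are illustrative assumptions for exposition only, not the actual method presented at Metascience2019.

```python
# A minimal, hypothetical sketch of a text-based replicability predictor.
# This is NOT the model from the talk Bishop describes; data and features
# here are invented purely to illustrate the general idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: paper abstracts paired with the outcome
# of an independent replication attempt (1 = replicated, 0 = did not).
abstracts = [
    "We observe a large effect (N = 1200, p < .001) across three samples.",
    "A marginally significant interaction emerged (N = 24, p = .049).",
]
replicated = [1, 0]

# TF-IDF features over word unigrams and bigrams feed a logistic
# regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(abstracts, replicated)

# The fitted model then scores unseen papers on textual features alone,
# which is precisely the kind of proxy indicator Bishop is questioning.
print(model.predict_proba(["A small pilot study (N = 18) found p = .04."]))
```

Note that a scorer like this never sees the underlying data or methods, only the wording of the paper, which is why Bishop's Goodhart's-law worry in the next excerpt applies: authors could tune their prose to raise the score without changing the research at all.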
“Now, this seems misguided on several levels…Goodhart’s law would kick in: as soon as researchers became aware that there was a formula being used to predict how replicable their research was, they’d write their papers in a way that would maximise their score.”
“One can even imagine whole new companies springing up who would take your low-scoring research paper and, for a price, revise it to get a better score.”