An Interview with Ted Miguel on “How To Do Open Science”
[Excerpts taken from the article “Ted Miguel in conversation on ‘Transparent and Reproducible Social Science Research: How to Do Open Science’” by Isabelle Cohen and Hagit Caspi, posted on the website of the Economics Department, University of California, Berkeley]
“Edward Miguel wants researchers in the social sciences to join the Open Science Movement. In his new book, Transparent and Reproducible Social Science Research: How to Do Open Science, Miguel and co-authors Jeremy Freese (Stanford University) and Garret Christensen (U.S. Census Bureau) give readers the tools to do so.”
“What do you hope to accomplish with this book?”
“There are two goals here. The more obvious one is giving researchers (whether they’re students or practitioners) tools and practices that lead to better, more transparent research, such as pre-analysis plans and reproducible coding, so that they can improve their research from a technical standpoint.”
“On a more fundamental level, we wanted to provide readers with a new mindset about how to do research. I want them to understand that being a researcher isn’t just about learning statistical techniques; it’s about a set of values and principles and a way of looking at the world in a very even-handed way.”
“You mention the File Drawer Problem in the book. Tell me more about that and how it translates to everyday research.”
“…A famous paper by Sterling in 1959 showed that over 97% of papers published in leading Psychology journals in the 1950s had statistically significant results, which meant that a lot of null results were just disappearing. The issue with that is that if we’re not seeing a large body of evidence, then our understanding of the world is really incomplete…”
“That brings us to registries, a development of roughly the past seven years, and an interesting solution. What is a registry and why should researchers register their projects?”
“A study registry is an attempt to make the universe of studies that have been conducted more accessible to researchers, so that fewer of them disappear. This is the product of deliberate action by the US government and the National Institutes of Health (NIH), who, 20 years ago, established a registry and encouraged, or in some cases forced, grantees to register their trials on it. That was motivated by various scandals in the eighties and nineties in which pharmaceutical companies funded trials for drugs and then only selectively published the results that they liked. In the book, we talk about how applying this practice in the social sciences can help make null results more prevalent.”
“The idea of pre-analysis plans seems to complement this effort to avoid “cherry picking” which studies are published.”
“Yeah, that’s right. So before we’ve analyzed the data, we post what our main hypotheses are, what our statistical models will be, and what data we want to use. That has this great benefit of increasing accountability for scholars, who are then more likely to report the results of those pre-specified analyses. It constrains researchers in some way to make sure they are showing the analyses they said they would show.”
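The elements Miguel lists can be sketched as a skeleton of a pre-analysis plan. This is a hypothetical illustration, not a registry’s actual schema; all field names and contents are invented for the example.

```python
# Hypothetical skeleton of a pre-analysis plan, posted before any data
# are analyzed. Field names and contents are illustrative only.
pre_analysis_plan = {
    "hypotheses": [
        "H1: the program increases school attendance",
    ],
    "primary_outcomes": ["attendance_rate"],
    "statistical_model": (
        "OLS of attendance_rate on treatment, with district fixed effects"
    ),
    "data": "administrative attendance records for all schools in the sample",
    "registered_before_data_analysis": True,
}

# The plan is committed to a public registry first; the later paper then
# reports the results of exactly these pre-specified analyses.
print(pre_analysis_plan["primary_outcomes"])
```

Because the plan is fixed in advance, readers can check the published analyses against it, which is the accountability Miguel describes.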
“One of my favorite quotes from the book is that “science prizes originality”. And yet, you call for replication. Tell me why we should be replicating.”
“Scientific results that don’t replicate aren’t really scientific results. Replication means that a result would be the same if the experiment were to be conducted again. It increases our certainty and the validity of the result.”
“If there was one tool you would want researchers who read this book to take from it, what would it be?”
“It’s hard to choose, but I’d say that maintaining reproducible coding practices is a valuable tool. It pays huge dividends, whether for reconstructing your own analysis later or for sharing it with others.”
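A minimal sketch of what “reproducible coding practices” can look like in practice, under assumptions of my own (the file names, seed, and toy analysis are invented for illustration; the book does not prescribe this particular script): every step from raw data to result lives in one script, randomness is seeded, and the result is a pure function of the raw file, so rerunning the script reconstructs the numbers exactly.

```python
# Illustrative sketch of a reproducible analysis pipeline (hypothetical
# example, not taken from the book). One script goes from raw data to
# result, with a fixed seed so every rerun yields identical output.
import csv
import random
import statistics

SEED = 42  # fixing the seed makes any simulated or resampled step repeatable


def simulate_raw_data(path, n=100):
    """Stand-in for a raw data pull. In real work this would be a documented
    download or an archived extract, never a hand-edited spreadsheet."""
    rng = random.Random(SEED)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "outcome"])
        for i in range(n):
            writer.writerow([i, rng.gauss(0, 1)])


def analyze(path):
    """The analysis depends only on the raw file, so anyone with the script
    and the data can reconstruct the result exactly."""
    with open(path) as f:
        outcomes = [float(row["outcome"]) for row in csv.DictReader(f)]
    return {"n": len(outcomes), "mean": statistics.mean(outcomes)}


if __name__ == "__main__":
    simulate_raw_data("raw_data.csv")
    print(analyze("raw_data.csv"))
```

The design choice is the point: because no step is manual, the pipeline is shareable as-is, which serves both of Miguel’s goals, reconstructing your own work later and letting others verify it.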
“But there is a larger point here, which is that we hope to change the attitude towards what constitutes research quality. When people ask whether someone is a good student, or look at the quality of their work, the criteria shouldn’t be, “Oh, they’ve got two stars.” It should be, “They tackled this really important question around state capacity for tax collection in Uganda, which is a central issue in development. Without tax revenue, you can’t invest in social programs. So they tackled this really important question, with a great research design and a large sample and a clever experiment.” This puts the emphasis on the method, which in itself is reproducible, rather than the specific result.”
To read the full interview, click here.