There’s Gold in Them Thar Predictions!
[Excerpts taken from the article “Predict science to improve science” by Stefano DellaVigna, Devin Pope, and Eva Vivalt, published in Science]
“Many fields of research, such as economics, psychology, political science, and medicine, have seen growing interest in new research designs to improve the rigor and credibility of research…”
“…relatively little attention has been paid to another practice that could help to achieve this goal: relating research findings to the views of the scientific community, policy-makers, and the general public.”
“We stress three main motivations for a more systematic collection of predictions of research results.”
“…we do not have a systematic procedure to capture the scientific views prior to a study, nor the updating that takes place afterward.”
“…people routinely evaluate the novelty of scientific results with respect to what is known. However, they typically do so ex post, once the results of the new study are known. Unfortunately, once the results are known, hindsight bias (“I knew that already!”) makes it difficult for researchers to truthfully reveal what they thought the results would be. This stresses the importance of collecting systematic predictions of results ex ante.”
“A second benefit of collecting predictions is that they can…potentially help to mitigate publication bias”
“…null results…are rarely published even when authors have used rigorous methods to answer important questions…if priors are collected before carrying out a study, the results can be compared to the average expert prediction, rather than to the null hypothesis of no effect.”
“This would allow researchers to confirm that some results were unexpected, potentially making them more interesting and informative, because they indicate rejection of a prior held by the research community; this could contribute to alleviating publication bias against null results.”
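A minimal sketch of what this comparison might look like in practice (all numbers and names below are hypothetical, not from the article): instead of testing an estimated effect against zero, one can test it against the mean of the elicited expert predictions, so a "null" result can still register as surprising.

```python
# Hypothetical illustration: compare an estimated effect to the mean expert
# prediction rather than to the null hypothesis of no effect.
import numpy as np
from scipy import stats

expert_predictions = np.array([0.12, 0.08, 0.15, 0.10, 0.05])  # hypothetical elicited priors
estimated_effect = 0.02                                         # hypothetical study estimate
standard_error = 0.03                                           # hypothetical standard error

# Conventional test against the null of no effect
z_null = estimated_effect / standard_error

# Test against the average expert prediction: a result indistinguishable from
# zero can still reject what the research community expected.
z_prior = (estimated_effect - expert_predictions.mean()) / standard_error

print(f"z vs. zero: {z_null:.2f}, z vs. mean expert prediction: {z_prior:.2f}")
print(f"p vs. expert prior: {2 * (1 - stats.norm.cdf(abs(z_prior))):.3f}")
```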
“A third benefit of collecting predictions systematically is that it…may help with experimental design.”
“For example, envision a behavioral research team consulted to help a city recruit a more diverse police department. The team has a dozen ideas for reaching out to minority applicants, but the sample size allows for only three treatments to be tested…the team can elicit predictions for each potential project and weed out those interventions judged to have a low chance of success or focus on those interventions with a higher value…”
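One way this selection step could be operationalized (a sketch with hypothetical forecasters, intervention labels, and numbers): average the elicited predictions for each candidate intervention and keep the three with the highest expected value.

```python
# Hypothetical illustration: rank a dozen candidate interventions by the mean
# of elicited forecasts and keep the three judged most promising.
import numpy as np

rng = np.random.default_rng(0)
interventions = [f"outreach_idea_{i}" for i in range(1, 13)]  # hypothetical labels
# Each row is one forecaster's predicted effect for every candidate intervention
predictions = rng.normal(loc=0.05, scale=0.03, size=(20, len(interventions)))

mean_forecast = predictions.mean(axis=0)
top_three = sorted(zip(interventions, mean_forecast), key=lambda x: -x[1])[:3]
for name, forecast in top_three:
    print(f"{name}: mean predicted effect {forecast:+.3f}")
```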
“These three broad uses of predictions highlight two important implications. First, it will be important to collect forecast data systematically to draw general lessons.”
“Second, like preanalysis plans, it is critical to set up the collection of predictions before the results are known, to avoid the impact of hindsight bias.”
“With these features in mind, a centralized platform that collects forecasts of future research results can play an important role. Toward this end, in coordination with the Berkeley Initiative for Transparency in the Social Sciences (BITSS), we have developed an online platform for collecting forecasts of social science research results (https://socialscienceprediction.org/).”
“The platform will make it possible to track multiple forecasts for an individual across a variety of interventions, and thus to study determinants of forecast accuracy, such as characteristics of forecasters or interventions, and to identify superforecasters…”
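Once realized results are available, forecast accuracy could be scored in a straightforward way; for example (hypothetical data and column names, not the platform's actual schema), by each forecaster's mean absolute error across interventions, with the lowest-error individuals flagged as candidate superforecasters.

```python
# Hypothetical illustration: score forecasters by mean absolute error across
# interventions once realized effects are known.
import pandas as pd

forecasts = pd.DataFrame({                # hypothetical elicited forecasts
    "forecaster": ["A", "A", "B", "B", "C", "C"],
    "intervention": ["jobs", "health", "jobs", "health", "jobs", "health"],
    "prediction": [0.10, 0.04, 0.02, 0.06, 0.08, 0.05],
})
results = {"jobs": 0.07, "health": 0.05}  # hypothetical realized effects

forecasts["abs_error"] = (forecasts["prediction"]
                          - forecasts["intervention"].map(results)).abs()
accuracy = forecasts.groupby("forecaster")["abs_error"].mean().sort_values()
print(accuracy)  # lowest mean absolute error first
```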
“There are many open questions about the details of the platform…We expect that continued work and experimentation will provide more clarity regarding such design questions.”
The full article is available from Science. (NOTE: This article is behind a paywall.)