Make a Living Supporting Research Transparency

[From the BITSS website] “Both Innovations for Poverty Action (IPA) and BITSS are hiring in the area of research transparency. IPA is hiring a coordinator, and BITSS is looking for a new program manager.” To learn more, click here.

The National Academy of Sciences Weighs In On Reproducibility

In late February, the National Academy of Sciences published a report summarizing a workshop held the previous year.  The report can be freely downloaded here.  The workshop convened researchers from a wide variety of disciplines and addressed numerous facets of research reproducibility.  Some highlights are:
— There is still no consensus about terminology: “reproducibility”, “replicability”, and “robustness” are some (but not all!) of the terms that attempt to parse out the nuances associated with verifying research reliability.
— There is general consensus that a p-value of 0.05 is too high to ensure a reasonable likelihood that the results can be “reproduced.”  However, no consensus exists about what should replace it.
— Workshop participants noted that the p-value is itself a sample statistic with variance.  This has led to constructs such as the “reproducibility probability,” which reports the probability that “a repeated experiment will produce a statistically significant result” (see the simulation sketch following this list).
— There is progress, but still no consensus, on the appropriate statistical measures to determine when a follow-up study confirms the findings of a previous study. Greater reliance on Bayesian statistics was mentioned.
— TABLE 3.2 in the report provides an illuminating taxonomy of the different issues associated with reproducibility.
— Many ideas for incentivizing reproducibility were offered.  One innovative idea is journal policies that give authors the option to have their article certified as “reproducible” and allow the reviewers who do the certification to receive some degree of co-authorship status.
— Economics is far behind other disciplines in seriously addressing this issue.
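
To make the “reproducibility probability” idea concrete, here is a minimal simulation sketch in Python.  The effect size, standard error, and two-sided z-test below are illustrative assumptions, not figures from the report:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative assumptions (not from the report): each run of the
# experiment estimates a standardized effect of 0.3 with standard
# error 0.15, tested two-sided at the 0.05 level.
true_effect = 0.3
se = 0.15
n_sims = 100_000

# Repeat the "same" experiment many times and record each run's p-value.
estimates = rng.normal(true_effect, se, n_sims)
p_values = 2 * stats.norm.sf(np.abs(estimates / se))

# The p-value is itself a random quantity: it varies widely across runs
# even though the true effect never changes.
print("p-value quartiles:", np.round(np.percentile(p_values, [25, 50, 75]), 3))

# Reproducibility probability: the chance that a repeat of the
# experiment comes out statistically significant.
print(f"P(p < 0.05 on a repeat) = {(p_values < 0.05).mean():.2f}")
```

With these made-up numbers, an experiment whose typical p-value sits right at the 0.05 threshold has only about a coin-flip’s chance of producing a significant result when repeated, which is exactly the point the workshop participants were making.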

The Solution to Where to Publish Insignificant Results: Author-Pay, Open Access Journals?

While everybody recognizes the problem, there seems to be little consensus about a solution.  The problem is: Where to publish insignificant research results?  Journals are understandably loath to publish studies that do not report statistically significant findings.  But when all, or most, journals follow this policy, publication bias arises, and the published literature ceases to provide a representative sample of the population of research findings (a simulation sketch of this selection effect follows below).  In a recent opinion piece for Chemical & Engineering News, Stephen Curry, professor of structural biology at Imperial College London, suggests a solution: author-pay, open access journals.  Yes, there are issues with ensuring quality.  Yes, the author-pay model raises concerns about “vanity press” science.  But there are workarounds for these problems.  And when you’re living in a third-best world (or worse), maybe second-best isn’t so bad.  To read more, click here.
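
To see why a significance filter makes journals unrepresentative, here is a minimal simulation sketch in Python.  The mix of null and real effects, the effect size, and the standard error are all illustrative assumptions, not figures from Curry’s piece:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions: half of all tested hypotheses are truly
# null, half carry a modest real effect; each study estimates its
# effect with noise.
n_studies = 100_000
is_real = rng.random(n_studies) < 0.5
true_effects = np.where(is_real, 0.2, 0.0)
se = 0.1
estimates = rng.normal(true_effects, se)
significant = np.abs(estimates / se) > 1.96

# "Journals" accept only the statistically significant results.
published = estimates[significant]

print(f"Share of studies published:                  {significant.mean():.1%}")
print(f"Mean effect across all studies:              {estimates.mean():.3f}")
print(f"Mean effect in the published literature:     {published.mean():.3f}")
print(f"Published results that are false positives:  {(~is_real[significant]).mean():.1%}")
```

In this toy literature the published record both inflates the typical effect and carries a non-trivial share of pure noise: precisely the unrepresentative sampling that motivates finding an outlet for insignificant results.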

Deadline Extended for BITSS Transparency and Reproducibility Workshop

The Berkeley Initiative for Transparency in the Social Sciences (BITSS) is extending the application deadline for its Transparency and Reproducibility workshop to April 8th.  The workshop runs from June 8-10th and is taught by an impressive set of scholars: Edward Miguel (Introduction); Tom Stanley (Meta-Analysis Methods and Application); Justin Kitzes (Git + GitHub); Maya Petersen, Fiona Burlig, and Sean Tanner (Pre-Registration and Pre-Analysis Plans); Sean Grant (Disclosure Guidelines); Jesse Rothstein (Replication); Cyrus Dioun (Reproducible Workflow); Leif Nelson (P-Hacking); Daniele Fanelli (Scientific Misconduct). [From the BITSS website] “The workshop is designed for researchers across the social science spectrum, from economics to political science, psychology, and other related disciplines. Ideal candidates include: (i) graduate or post-graduate students, (ii) junior faculty, (iii) staff from research organizations interested in using these methods, and (iv) journal editors or research funders curious about the implications for their work. Diversity in terms of background and academic discipline is encouraged.” The workshop charges no tuition fees.  To learn more, click here.

So Is There a Replicability Crisis? Or What?

[From the blog Core Economics] Experimental economist ANDREAS ORTMANN reflects on the recent replicability studies in psychology and economics and tackles the question of what it all means.  The article is entitled, “So, is there a crisis?  Or is there a crisis of the crisis, or what? On replicability, reproducibility, and other current challenges in the social sciences”.  To read more, click here.

Recap of the Current “Replicability Crisis” in Psychology: No Pain, No Gain?

[From the article “What psychology’s crisis means for the future of science” in Vox]  This article provides a nice summary of the recent controversy about replicability in psychology.  It concludes that this period of introspection is ultimately good for the discipline.  To read more, click here.  An unaddressed question: Why is psychology more fussed about replicability than economics?

Maybe There Is Only a Replication Crisis for Published Research

[From an article in the Washington Post entitled “Does social science have a replication crisis?”]  This article consists of an interview with Kevin Mullinix, Thomas Leeper, and Alex Cox.  It highlights their recent research, which reports a high rate of replicability in psychology experiments, in contrast to many other studies on the subject.  The researchers hypothesize that this may be because much of the research they replicated was unpublished.  Why would that make for higher replication rates?  Because the unpublished work has not yet been subject to the filtering of publication bias that shapes the set of studies appearing in prestigious journals (a sketch of this mechanism follows below).  To read more, click here.
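
Here is a minimal sketch of that filtering mechanism in Python, under assumed numbers rather than anything from the interview.  Every simulated study probes the same real but modest effect, and only the significant ones get “published”:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative assumptions: every study estimates the same modest real
# effect (0.2) with standard error 0.1, tested two-sided at 0.05.
true_effect = 0.2
se = 0.1
n_studies = 100_000

originals = rng.normal(true_effect, se, n_studies)
sig = np.abs(originals / se) > 1.96      # only these get published
replications = rng.normal(true_effect, se, n_studies)
rep_sig = np.abs(replications / se) > 1.96

# Published originals are significant by construction, yet independent
# replications succeed only at the rate set by statistical power.
print(f"Replication rate for published studies: {rep_sig[sig].mean():.1%}")

# Selection also inflates the published estimates (the winner's curse),
# while the unselected pool shows no drop from original to replication.
print(f"Mean estimate, published originals: {originals[sig].mean():.3f}")
print(f"Mean estimate, all originals:       {originals.mean():.3f}")
print(f"Mean estimate, replications:        {replications.mean():.3f}")
```

Nothing is wrong with the underlying effect in this toy world; the apparent “replication failure” of published work is produced entirely by the significance filter, which is consistent with the researchers’ hypothesis about their unpublished sample.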

On the Reproducibility of Psychological Science: A Response from Nosek and Gilbert

[From an article in Retraction Watch] “Scientists have been abuzz over a report in last week’s Science questioning the results of a recent landmark effort to replicate 100 published studies in top psychology journals. The critique of this effort – which suggested the authors couldn’t replicate most of the research because they didn’t adhere closely enough to the original studies – was debated in many outlets, including Nature, The New York Times, and Wired. Below, two of the authors of the original reproducibility project —Brian Nosek and Elizabeth Gilbert – use the example of one replicated study to show why it is important to describe accurately the nature of a study in order to assess whether the differences from the original should be considered consequential.”  To read more, click here.

The American Statistical Association Wants to Change the Way We Use p-values

[From an article at Retraction Watch] “After reading too many papers that either are not reproducible or contain statistical errors (or both), the American Statistical Association (ASA) has been roused to action. Today the group released six principles for the use and interpretation of p values.” To read more, click here.

Replication Bombshells in Psychology: They Just Keep Coming

[From the article “Everything is Crumbling” in Slate]  “A paper now in press, and due to publish next month in the journal Perspectives on Psychological Science, describes a massive effort to reproduce the main effect that underlies [the psychological theory of ego depletion]. Comprising more than 2,000 subjects tested at two-dozen different labs on several continents, the study found exactly nothing. A zero-effect for ego depletion: No sign that the human will works as it’s been described, or that these hundreds of studies amount to very much at all.” To read more, click here.  Meanwhile, there is relatively little news on the replication front in economics. Is no news good news?  Hmmm.