This article from the Washington Post is noteworthy only because it highlights how a small coding error can cause a major change in a study’s results. The original study claimed that men were more likely than women to divorce a spouse who fell ill. The study was based on a longitudinal survey. A subsequent replication study found that the result was driven by a coding mistake, in which people who left the study were incorrectly coded as having become divorced. To read more, click here.
In his post at The Impact Blog, economist Michael Clemens argues that vagueness about what constitutes a replication is damaging the reputations of reputable researchers and hindering the progress of replication work. Clemens proposes a classification system to eliminate confusion between a replication and a robustness check. To read more, click here.
The website Retraction Watch is approaching its 5th birthday. Among other things, it publishes a “leaderboard” that tracks the researchers with the most retractions. The leaderboard lists the Top 30 such researchers, with links to the individual cases. Perhaps reassuringly, only one of the current Top 30 is an economist. To read more, click here.
FROM THE ARTICLE: “Negative results are an important building block in the development of scientific thought, primarily because most likely the vast majority of data is negative, i.e., there is not a favorable outcome. Only very limited data is positive, and that is what tends to get published, albeit alongside a sub-set of negative results to emphasize the positive nature of the positive results. Yet, not all negative results get published.” To read more, click here.
FROM THE ARTICLE: “Replication is often viewed as the demarcation between science and nonscience. However, contrary to the commonly held view, we show that in the current (selective) publication system replications may increase bias in effect size estimates.” To read more, click here.
Before a paper can be published at the American Journal of Political Science (AJPS), the journal checks that all the empirical results from the paper can be reproduced with the data and code that the author has provided. The paper does not get published until the journal confirms that this can be done. A full statement of the AJPS replication policy can be found by clicking here.
FROM THE ORIGINAL BLOG: “A recent study sent data requests to 200 authors of economics articles where it was stated ‘data available upon request’. Most of the authors refused.” Is this scientific misconduct? If so, what should be done about it? To read more, click here.
The LaCour data faking scandal has officially gone viral. Googling “LaCour scandal” recently produced 230,000 hits. While serious soul-searching needs to take place at academic journals — or more accurately, needs to continue to take place at academic journals, because journals are increasingly sensitive to this issue — this HITLER RANT PARODY on the LaCour scandal provides some welcome comic relief.
FROM THE ARTICLE: “As a Fulbright PhD student in development economics from Brussels, my experience this past year on the Berkeley campus has been eye opening. In particular, I discovered a new movement toward improving the standards of openness and integrity in economics, political science, psychology, and related disciplines led by the Berkeley Initiative for Transparency in the Social Sciences (BITSS).” Click here for more.
FROM THE ARTICLE: “A major publisher of scholarly medical and science articles has retracted 43 papers because of ‘fabricated’ peer reviews amid signs of a broader fake peer review racket affecting many more publications.” As The Washington Post reports, a partial list of the retracted articles posted by BioMed Central – a well-known publisher of peer-reviewed journals – suggests most of them were written by scholars at universities in China. To read more, click here.