In a recent working paper (“Science with no fiction: measuring the veracity of scientific reports by citation analysis”), Peter Grabitz, Yuri Lazebnik, Josh Nicholson, and Sean Rife suggest that one solution to the “crisis” in scientific credibility is publishing each article’s “R-Factor”. To calculate a study’s R-Factor, one would comb through every paper that cites it, then count the attempts to confirm the original findings. The R-Factor is simply the ratio of confirming studies to total attempts. R-Factors close to 1 indicate a study is likely to be true; R-Factors close to 0, not so much. The authors illustrate the approach with three studies from biomedical research. And how would this be done for thousands and thousands of studies, with results continuously updated? The authors suggest this could be done with machine learning technology.
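The arithmetic itself is trivial. As a minimal sketch (the function name and the example counts below are illustrative, not taken from the paper), the R-Factor for a study with 7 confirming results out of 10 replication attempts would be computed like this:

```python
def r_factor(confirming: int, total_attempts: int) -> float:
    """Ratio of citing studies that confirm a finding to all
    replication attempts found among the citing papers."""
    if total_attempts == 0:
        # No one has tried to replicate the study yet, so the
        # ratio is undefined rather than zero.
        raise ValueError("no replication attempts recorded")
    return confirming / total_attempts

# Hypothetical example: 7 of 10 citing studies confirm the finding.
print(r_factor(7, 10))  # 0.7
```

The hard part, of course, is not this division but classifying each citing paper as a confirmation, a refutation, or neither, which is where the proposed machine learning comes in.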