Statistical Tools to Detect Data Fabrication: Caveat Emptor

[Excerpts from the preprint “Detection of data fabrication using statistical tools” by Chris Hartgerink, Jan Voelkel, Jelte Wicherts, and Marcel van Assen, posted on PsyArXiv]
“In this article, we investigate the diagnostic performance of various statistical methods to detect data fabrication. These statistical methods (detailed next) have not previously been validated systematically in research using both genuine and fabricated data.”
“We present two studies where we try to distinguish (arguably) genuine data from known fabricated data based on these statistical methods. These studies investigate methods to detect data fabrication in summary statistics (Study 1) or in individual level (raw) data (Study 2) in psychology.”
“In Study 1, we invited researchers to fabricate summary statistics for a set of four anchoring studies, for which we also had genuine data from the Many Labs 1 initiative…In Study 2, we invited researchers to fabricate individual level data for a classic Stroop experiment, for which we also had genuine data from the Many Labs 3 initiative.”
“Statistical methods to detect potential data fabrication can be based either on reported summary statistics that can often be retrieved from articles or on the raw (underlying) data if these are available. Below we detail p-value analysis, variance analysis, and effect size analysis as potential ways to detect data fabrication using summary statistics… Among the methods that can be applied to uncover potential fabrication using raw data, we consider digit analyses…and multivariate associations between variables.”
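To make the digit-analysis idea above concrete, here is a minimal sketch (not necessarily the preprint’s exact procedure) of a terminal-digit check: in many genuine, finely measured datasets the last digits of recorded values are approximately uniform, so a chi-square goodness-of-fit test can flag digit distributions that deviate sharply. The function name and example values below are hypothetical.

```python
# Sketch of a terminal-digit analysis: test whether the final digits of a
# set of reported values are consistent with a uniform distribution.
import numpy as np
from scipy.stats import chisquare

def terminal_digit_test(values):
    """Chi-square test of uniformity for the last digit of each value."""
    # Assumes values are reported at a fixed precision (e.g., whole ms),
    # so the final character of the string form is the terminal digit.
    last_digits = [int(str(v)[-1]) for v in values]
    observed = np.bincount(last_digits, minlength=10)
    # With no f_exp given, chisquare tests against equal (uniform) counts.
    return chisquare(observed)

# Hypothetical reaction times (ms); a real screen needs far more values
# than the ten shown here, which keep the example short.
rts = [534, 612, 587, 603, 559, 641, 598, 575, 620, 566]
stat, p = terminal_digit_test(rts)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```

A very small p-value here only marks the data as worth a closer look; as the authors stress below, no single statistical result demonstrates fabrication.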
“…our studies have highlighted that variance- and effect size analysis and multivariate associations are methods that look promising to detect problematic data.”
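To illustrate why variance analysis can work as a screen: one approach in this spirit (a sketch under simplifying assumptions, not the authors’ exact implementation) asks whether the standard deviations reported for independent groups are closer together than sampling error allows, by simulating how much group SDs should spread under a common normal model.

```python
# Sketch of a variance analysis: are the reported group SDs suspiciously
# similar? Monte Carlo p-value under a shared-sigma normal null model.
import numpy as np

rng = np.random.default_rng(0)

def sd_similarity_p(observed_sds, n, reps=10_000):
    """One-sided p-value for 'group SDs are too similar to each other'."""
    observed_sds = np.asarray(observed_sds, dtype=float)
    k = observed_sds.size
    # Simplification: pool the observed SDs into one sigma for the null.
    sigma = np.sqrt(np.mean(observed_sds**2))
    observed_spread = np.std(observed_sds)
    # Simulate k groups of size n per replication; record the spread of
    # their sample SDs across groups.
    sims = rng.normal(0.0, sigma, size=(reps, k, n))
    sim_spread = np.std(sims.std(axis=2, ddof=1), axis=1)
    # Small p: real sampling rarely yields SDs this close together.
    return np.mean(sim_spread <= observed_spread)

# Hypothetical example: four groups of n = 25 whose reported SDs are
# nearly identical, a pattern fabricated data tend to show.
print(sd_similarity_p([10.1, 10.0, 10.2, 10.1], n=25))
```

As with the digit check, a flag from this test is a reason to obtain a genuine control sample and investigate, not a verdict.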
“All presented results…pertain to relative comparisons between genuine and fabricated data. Hence, all statements about the performance of classification depend on the availability of unbiased genuine data to compare to… we agree with the call to always include a control sample when applying these statistical tools to studies that look suspicious…”
“We do advise using some of the more successful statistical methods as screening tools in review processes and as additional tools in formal misconduct investigations, where prevalence is supposedly higher than in the general population of research results…this should only happen in combination with evidence from sources other than statistical methods”
“…if any of these statistical tools are used, we recommend using them solely to screen for indications of potential data anomalies, which are subsequently inspected further by a blinded researcher (to prevent confirmation bias) using a rigorous protocol that involves due care and due process.”
The full preprint is available on PsyArXiv.
