Science is a community of human beings of the species Homo sapiens: bipeds with the capacity for self-reflection. This implies that science as a community is subject to the same behavioral patterns as any other human community, including a plethora of biases at both the individual and the collective level.
Examples of well-known individual-level biases are hubris, confirmatory preference, and a desire for novelty (or its reverse: fear of the new). This implies, for instance, that “When an experiment is not blinded, the chances are that the experimenters will see what they ‘should’ see” (The Economist, 2013). Together, these biases lead to Type I and Type II errors in judging research, both our own and that of others. As a result, without correcting mechanisms, published research will be heavily biased in favor of evidence that is in line with the theory.
Science’s first line of defense is the micro-level reviewing process. Regrettably, the reviewing process, double-blinded or not, is anything but flawless; it is itself riddled with biases. This is not surprising, as reviewing is carried out by exemplars of the very same Homo sapiens species that cannot escape the biases referred to above (plus quite a few others).
Evidence abounds that current reviewing practices fail to provide the effective filtering mechanism they are claimed to provide. Take the revealing study of Callaham and McCulloch (2011). On the basis of a 14-year sample of 14,808 reviews by 1,499 reviewers rated by 84 editors, they conclude that review quality scores deteriorated steadily over time, with the rate of deterioration being positively correlated with reviewers’ experience. This is mirrored in the well-established finding that reviewers, on average, fail to detect fatal errors in manuscripts, which reinforces the publication of false positives (Callaham & Tercier, 2007; Schroter et al., 2008).
Hence, given these unavoidable biases associated with the working of the human brain, the scientific community should adhere, as a collective, to a set of macro-level correcting principles as a second line of defense. Probably the most famous among these is Popper’s falsifiability principle. Key to Karl Popper’s (1959) incredibly influential philosophy of science is his argument that scientific progress evolves on the back of falsification.
We, as researchers, should try, time and again, to prove ourselves wrong. If we find evidence that our theory is indeed incorrect, we can then work on developing new theory that does fit the data. Hence, we should teach the younger generation of researchers that instead of being overly discouraged, they should be happy when they cannot confirm their hypotheses.
This quest for falsification is critical because, in the words of Ioannidis (2012: 646), “Efficient and unbiased replication mechanisms are essential for maintaining high levels of scientific credibility.” The falsification principle requires a tradition of replication studies in combination with the publication of non-significant and counter-results, or so-called nulls and negatives, backed by systematic meta-analyses.
Current publication practices are overwhelmingly anti-Popperian. No one is really interested in replicating anything, and meta-analyses are few and far between. Indeed, only a tiny fraction of published studies involve a replication effort or meta-analysis. Moreover, journal authors, editors, reviewers and readers are not interested in seeing nulls and negatives in print.
This replication defect and publication bias crisis implies that Popper’s critical falsification principle has effectively been thrown into the scientific community’s dustbin. We, as a collective, violate basic scientific principles by (a) mainly publishing positive findings (i.e., those that support our hypotheses) and (b) rarely engaging in replication studies (being obsessed with novelty). Behind the façade of all these so-called new discoveries, false positives abound, as do questionable research practices.
In my recently published Manifesto “What Happened to Popperian Falsification?”, I argue what I believe is wrong, why that is so, and what we might do about it. The Manifesto is primarily directed at the worldwide Business and Management scholarly community. Clearly, however, Business and Management is not the only discipline in crisis.
If you share the concerns expressed in my Manifesto, I encourage you to signal your support. For that purpose, I have opened a petition webpage at change.org, where the Manifesto can be signed and ideas can begin to be exchanged.
To kick-start this dialogue, I offer a tentative suggestion for a new and dynamic way of conducting, reporting, reviewing and publishing research, for now referred to as Scientific Wikipedia. My hope is that by initiating this dialogue, a few of the measures suggested in the Manifesto will be implemented, and that others – perhaps far more effective ones – will be added over time.
Callaham, M. and C. McCulloch (2011). Longitudinal Trends in the Performance of Scientific Peer Reviewers, Annals of Emergency Medicine, 57: 141-148.
Callaham, M. L. and J. Tercier (2007). The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality, PLoS Medicine, 4: 0032-0040.
Ioannidis, J. P. A. (2012). Why Science Is Not Necessarily Self-Correcting, Perspectives on Psychological Science, 7: 645-654.
Popper, K. (1959). The Logic of Scientific Discovery. Oxford: Routledge.
Schroter, S., N. Black, S. Evans, F. Godlee, L. Osorio, and R. Smith (2008). What Errors Do Peer Reviewers Detect, and Does Training Improve their Ability to Detect Them?, Journal of the Royal Society of Medicine, 101: 507-514.