A Scientific Fraud of Epic Proportions

[Excerpts taken from the article “A miracle cancer prevention and treatment? Not necessarily as the analysis of 26 articles by legendary Hans Eysenck shows” by Tomasz Witkowski and Maciej Zatonski, published in Science Based Medicine]
“In May 2019 a report from an internal enquiry conducted by the King’s College London (KCL) was released. This report diplomatically labels 26 articles published by Professor Hans Jürgen Eysenck as “unsafe”…”
“Eysenck was one of the most renowned and influential psychologists of all time. When he died in 1997, he was the most cited living psychologist and the third-most cited of all time, just behind Sigmund Freud and Jean Piaget. In the global rating of most cited scholars across all social sciences Eysenck ranked third, following only Sigmund Freud and Karl Marx.”
“The report published by the KCL committee scrutinised only articles from peer-reviewed journals that Eysenck co-authored with Ronald Grossarth-Maticek, and analysed data relevant to personality and physical health outcomes in conditions like cancer, cardiovascular diseases, their causes and methods of treatment. All of the reviewed research results were assessed as ‘unsafe’.”
“In their publications both authors claimed no causal connection between tobacco smoking and development of cancer or coronary heart disease, and attributed those outcomes to personality factors…”
“In one of their research projects, conducted among more than 3,000 people, Eysenck and Grossarth-Maticek claimed that people with a “cancer-prone personality” were 121 times more likely to die of the disease than patients without such a disposition (38.5% vs 0.3%).”
“The authors also attributed the risk of developing coronary heart disease to personality factors. Their publications stated that subjects who were “heart-disease prone” died 27 times more often than people without such a temperament.”
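A quick arithmetic aside on how such “times more likely” figures relate to the quoted percentages: the ratio of the two mortality rates gives the relative risk. Below is a minimal sketch using the rounded percentages quoted above; the exact group counts are not given in the excerpt, so the result only approximates the published figure of 121.

```python
# Relative risk as the ratio of two mortality rates, using the rounded
# percentages quoted above; the published figure of 121 presumably rests
# on the unrounded counts, which the excerpt does not give.
p_cancer_prone = 0.385   # 38.5% mortality among "cancer-prone" participants
p_comparison   = 0.003   # 0.3% mortality in the comparison group

relative_risk = p_cancer_prone / p_comparison
print(f"Relative risk ~ {relative_risk:.0f}x")   # ~128x with these rounded inputs
```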
“The greatest hits in the album of suspicious publications are articles where the authors “demonstrated” that they can effectively “prevent cancer and coronary heart disease in disease-prone probands”. In one of their projects, 600 “disease-prone probands” received a leaflet explaining how to make more autonomous decisions and how to take control over their destinies.”
“This simple intervention resulted in one of the most spectacular findings in the history of medicine, psychology, and probably in the entire scientific literature. After over 13 years of observation, the group of 600 patients randomly assigned to this “bibliotherapy” (as it was called by the authors) had an overall mortality of 32% when compared to 82% among the 600 people who were not lucky enough to receive the leaflets.”
“However, the most destructive and infamous of his achievements was the publication of the book entitled The Causes and Effects of Smoking in 1980, where he disputed the already established causal relationship between tobacco smoking and lung cancer. His later cooperation with Ronald Grossarth-Maticek resulted in the publication of numerous articles that were recently assessed as “unsafe”. The irregularities uncovered during preparation of the KCL’s report appalled and shocked the global scientific community.”
“After his humiliating degradation and the loss of his job, Diederik Stapel published a peculiar fraudster’s memoir, in which he offered a rather accurate characterisation of the community of academic psychologists. He emphasises the virtually complete lack of any structure of control or self-correction: ‘Nobody ever checked my work. They trusted me.’”
“…the price for such misperceived and ill-understood academic freedom will be paid by members of the general public when they make everyday decisions related to smoking or when deciding to improve on their personalities, fed by misconstrued beliefs related to the development of cancer, unnecessary suffering, and premature deaths. Those damages will never be assessed.”
To read the article, click here.

Replication of Cancer Studies a Failure to Launch

[Excerpts taken from the article “Trouble replicating cancer studies a ‘wake-up call’ for science” by Jack Grove, published at timeshighereducation.com]
“Later this year, the Virginia-based Center for Open Science will publish the final two papers of an initiative launched in 2013 with the aim of replicating experiments from 50 high-impact cancer biology papers.”
“Many replication efforts were dropped, however, as the organisers realised that they needed more information from the original authors because vital methodological instructions had not been included in published papers. Frequently such details – as well as required materials – proved difficult to obtain, often owing to a lack of cooperation from the laboratories.”
“Subsequent delays and cost increases meant that just 18 papers covering 52 separate experiments, out of an initial 192 experiments targeted, were eventually replicated.”
“The forthcoming papers in the replication study – one detailing the results of all 18 replicated papers, one on why so few were completed – were likely to be a “wake-up call” to science “given that so much information is missing” from published papers, according to Brian Nosek, the centre’s director and co-founder, who is also professor of psychology at the University of Virginia.”
To read the article, click here.

Notes from the 2019 Annual BITSS Conference

[Excerpts taken from the article “The 2019 BITSS Annual Meeting: A barometer for the evolving open science movement” by Aleksandar Bogdanoski and Katie Hoeberling, posted at bitss.org]
“Each year we look forward to our Annual Meeting as a space for showcasing new meta-research and discussing progress in the movement for research transparency.”
“…we’ve historically focused our efforts on research conduct, rather than on publishing. It’s become abundantly clear in recent years, however, that access is a critical component in the production and evaluation of social science.”
“Acknowledging this, we’ve forged fruitful partnerships with stakeholders on the “other end” of the scholarly communication cycle, including the Journal of Development Economics and the Open Science Framework Preprints platform.”
“Wide access to training resources is similarly critical for normalizing open research practices. Open Science (with a capital O and S) has only recently entered the social science curriculum.”
“Speakers on the meeting’s final panel discussed the challenges they’ve faced in trying to institutionalize open science curriculum and supporting their students in applying open principles outside of the classroom, as well as approaches and resources they’ve found helpful.”
“Instructors and students looking to teach or learn transparent research practices can start at our Resource Library or this growing list of course syllabi on the OSF.” 
“Having organized eight of these annual events, a few other patterns have begun to emerge. One of the most exciting developments we’ve seen is that our meetings have shifted focus from diagnosing problems in research, to testing interventions and assessing adoption and wider change.”
“There is no longer a need, at least in this community, to debate whether or not publication bias exists or that perverse incentives lead researchers to use questionable research practices, for example. How we measure and correct for them, however, remain open questions. Such questions were discussed during the first block of research presentations, which proposed sensitivity analysis in meta-analysis and revised significance thresholds compatible with researcher behavior to correct for publication bias, plus a framework to translate open practices for observational research.”
“Relatedly, the use of pre-registration and pre-analysis plans (PAPs) is becoming more normative than cutting edge.”
“Finally, it’s become clear that the reach and efficacy of many open science tools can benefit from, and often requires, the support of diverse stakeholders, as well as rigorous evaluation components integrated in interventions from the beginning. The final block of presentations explored the application of open science principles in novel contexts, including Institutional Review Boards and qualitative research with sensitive data, and offered a general framework for designing and evaluating open science interventions.”
“If you missed the meeting, or want to revisit any of the sessions, you can find slides on this OSF page, watch videos on our YouTube channel, and find open access versions of the papers in the event agenda. The summaries of each session can be found below…” [go to link below].
To read the article, click here.

Missed the Free Webinar on Pre-Registration by COS? Check It Out Here

On January 21st, the Center for Open Science (COS) held a free webinar on pre-registration entitled “The What, Why, and How of Preregistration”. For those who were unable to watch, it is now posted on YouTube.
The actual presentation is about 31 minutes, followed by the presenters (Alex DeHaven and Sara Bowman) answering questions for another 25 minutes.
The presentation is a great introduction to pre-registration and is particularly strong in going through the steps of posting a pre-registration on OSF.
Some representative slides are posted below to give a flavour of the content of the presentation. The timestamps at the bottom of the slides indicate where in the presentation each topic is covered.
– Introduction
– Pre-registration is most useful for “confirmatory” modes of research
– Difference between confirmatory and exploratory research
– Components of a pre-registration
– OSF has seven different pre-registration templates one can choose from
– OSF also lists a number of pre-registration resources, including examples
– At the 12-minute mark, the presentation goes through a step-by-step explanation of how to write and register a pre-registration
– The presentation focuses on the different steps associated with an OSF pre-registration, the first of the seven different options
– The presentation also discusses how to search for other pre-registrations at OSF and elsewhere
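For readers who want a feel for what such a document contains before watching, here is a minimal illustrative sketch of a pre-registration record. The field names and values are our own illustrative assumptions, not the webinar’s slides or any particular OSF template.

```python
# Illustrative sketch of a pre-registration record; the field names are
# assumptions chosen for illustration, not the exact OSF template.
preregistration = {
    "title": "Effect of X on Y",
    "hypotheses": [
        "H1: Participants in condition A score higher on Y than those in condition B.",
    ],
    "study_design": "Two-group between-subjects experiment, random assignment.",
    "sampling_plan": {"target_n": 200, "stopping_rule": "Stop at 200 complete responses."},
    "variables": {"independent": ["condition"], "dependent": ["Y_score"]},
    "analysis_plan": "Two-sample t-test on Y_score; alpha = .05, two-tailed.",
    "registered_before_data_collection": True,  # the defining feature of confirmatory work
}

for field, value in preregistration.items():
    print(f"{field}: {value}")
```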
To watch the video on YouTube, click here.

A New Journal Ranking System Based on Transparency

[Excerpts taken from the article, “Journal transparency index will be ‘alternative’ to impact scores” by Jack Grove, published at timeshighereducation.com]
“A new ranking system for academic journals measuring their commitment to research transparency will be launched next month – providing what many believe will be a useful alternative to journal impact scores.”
“Under a new initiative from the Center for Open Science, based in Charlottesville, Virginia, more than 300 scholarly titles in psychology, education and biomedical science will be assessed on 10 measures related to transparency, with their overall result for each category published in a publicly available league table.”
“The centre aims to provide scores for about 1,000 journals within six to eight months of their site’s launch in early February.”
To read the article, click here.

When Replication Met Climate Change

[Excerpts taken from a not so recent article, “Those 3% of scientific papers that deny climate change? A review found them all flawed”, by Katherine Foley, published in Quartz (qz.com), in September 2017]
“It’s often said that of all the published scientific research on climate change, 97% of the papers conclude that global warming is real, problematic for the planet, and has been exacerbated by human activity.”
“But what about those 3% of papers that reach contrary conclusions? Some skeptics have suggested that the authors of studies indicating that climate change is not real, not harmful, or not man-made are bravely standing up for the truth, like maverick thinkers of the past.”
“Not so, according to a review published in the journal Theoretical and Applied Climatology. The researchers tried to replicate the results of those 3% of papers—a common way to test scientific studies—and found biased, faulty results.”
“Katharine Hayhoe, an atmospheric scientist at Texas Tech University, worked with a team of researchers to look at the 38 papers published in peer-reviewed journals in the last decade that denied anthropogenic global warming.”
“‘Every single one of those analyses had an error—in their assumptions, methodology, or analysis—that, when corrected, brought their results into line with the scientific consensus,’ Hayhoe wrote in a Facebook post.”
“The review serves as an answer to the charge that the minority view on climate change has been consistently suppressed, wrote Hayhoe. ‘It’s a lot easier for someone to claim they’ve been suppressed than to admit that maybe they can’t find the scientific evidence to support their political ideology… They weren’t suppressed. They’re out there, where anyone can find them.’”
“Indeed, the review raises the question of how these papers came to be published in the first place, when they used flawed methodology, which the rigorous peer-review process is designed to weed out.”
To read the article, click here.

IN THE NEWS: Bloomberg (January 14, 2020)

[Excerpts taken from the article “Behavioral Economics’ Latest Bias: Seeing Bias Wherever It Looks” by Brandon Kochkodin, published at Bloomberg.com]
“Behavioral economics, it seems, might just have a bias problem of its own.”
“…for a small, but vocal group of skeptics, the field has quickly become a victim of its own astounding success. Call it the ‘bias bias.’”
“Drawing on the work of longtime critic Gerd Gigerenzer, an expert in psychology at the Max Planck Institute for Human Development, they point to the tendency of those who have embraced its ideas to see biases everywhere — even when there are none.”
“A Wikipedia entry for cognitive biases currently lists nearly 200 entries…Are they all legit? Gigerenzer, who has made himself into something of a bête noire among behavioral economists over the past couple decades, has his doubts.”
“In his 2018 paper, he concluded that most studies on cognitive biases are flawed. They either rely on small sample sizes, misinterpret individual errors as systematic biases, or underestimate how people absorb information based on how a fact or question is framed.”
“Gigerenzer’s beef with behavioral economics, and its most influential proponents, like Kahneman, Amos Tversky and Richard H. Thaler, isn’t new. If you google “behavioral economics criticism,” it doesn’t take long before Gigerenzer’s name comes up, again and again.”
“Kahneman, who was awarded the Nobel Prize in 2002, and Tversky long ago took issue with what they say is Gigerenzer’s willful misinterpretation of their positions and ideas, which misleads readers. Others, like Carnegie Mellon’s Alex Imas, say the problem is that Gigerenzer often uses oversimplified arguments to dismiss theories…”
“Nevertheless, as behavioral economics becomes increasingly ubiquitous in everyday life, even proponents have started to acknowledge the potential pitfalls.”
“In a recent episode of Ted Seides’ “Capital Allocators” podcast, Albert Bridge Capital’s Drew Dickson talked about how his team integrates behavioral economics into its investing approach. After listing some of the biases they watch out for, Dickson named the “bias bias” as his new favorite.”
“People are now talking about behavioral finance so much, and a lot of them are relatively new to it, they almost want to start looking as if there’s definitely going to be a bias here,” said Dickson, who declined to comment for this story. “You’re biased to find a bias.”
To read more, click here.

QRP: The Board Game

[Excerpts taken from the “QRP Game Rules”, by Roger Giner-Sorolla, posted at OSF]
“A game of scientific discovery, careers, and reform for 2-6 players or teams.”
Overview
“You, the players, are researchers using sampling statistics to find out more about a yes/no Research Topic.”
“In each of three Rounds, you’ll take turns performing and reporting Experiments, discovering part of the unknown composition of that round’s Evidence Deck.”
“You can also publish your Experiments to gain Prestige points. As you carry out your Experiment, you may commit Fraud, bluffing about your data to improve your chances of being Published. But other players can Investigate you if they don’t believe your findings!”
“There are severe penalties for Fraud, but also penalties for wrongfully Investigating an honest player.”
“Most Experiments also allow for selective reporting of findings (questionable research practices, or QRPs). These can also be Investigated, but with lower stakes.”
“As the game progresses, players introduce Reforms that change the rules of experiments, publishing and Prestige.”
“Reforms move the game towards a science that serves the truth, and away from one that rewards exaggerating positive results.”
“At the end of the Round, new Reforms are introduced, and players advance along the Career Track according to the Prestige they have scored.”
“The game ends after the third Round. Who wins? The player with the most advanced career, or if tied, with the most Prestige in the final Topic.”
To learn more about the game, click here.

Making Meta-Analyses “Open”: A How-To

[Excerpts taken from the article “Conducting a Meta-Analysis in the Age of Open Science: Tools, Tips, and Practical Recommendations” by David Moreau and Beau Gamble, posted at PsyArXiv Preprints]
“In this tutorial, we describe why open science is important in the context of meta-analysis in psychology, and suggest how to adopt the three main components of open science: preregistration, open materials, and open data.”
“We first describe how to make the preregistration as thorough as possible—and how to handle deviations from the plan.”
“We then focus on creating easy-to-read materials (e.g., search syntax, R scripts) to facilitate reproducibility and bolster the impact of a meta-analysis.”
“Finally, we suggest how to organize data (e.g., literature search results, data extracted from studies) that are easy to share, interpret, and update as new studies emerge.”
“For each component, we provide templates that can help standardization while allowing the necessary flexibility to accommodate idiosyncrasies across meta-analyses (see Fig. 1 for an overview).”
“All templates—together with tips on how to best utilize them—are also illustrated with step-by-step video tutorials, in an effort to facilitate systematic implementation in psychology. For each step of the meta-analysis, we provide example templates, accompanied by brief video tutorials…”
Figure 1. Flow chart describing all the provided templates for creating a transparent meta-analysis.
Preregistration
“In the context of meta-analyses…preregistration might seem particularly challenging. How can one know what studies are relevant before running the search, for instance? Will five databases be comprehensive enough, or overkill? What moderator analyses will have enough statistical power? And what methods should be used to detect bias? To facilitate the process of preregistration, we provide a list of items to consider, together with a brief explanation for each, in Template 1…”
“The template is designed to enable flexible yet thorough preregistration, specifically in the context of meta-analyses in psychology. The items follow the PRISMA protocol, but also contain tips and examples specific to our field. We also provide a ready-to-populate flow diagram that adheres to the PRISMA protocol (Template 2).”
“The flow diagram is a visual depiction of the search process in a meta-analysis, from the initial query to the final selection of studies, and including all steps in between. This component is key for transparency, as it provides a quick overview of the data collection process, and serves as a reference document to navigate possible intricacies of the design.”
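To make the bookkeeping behind such a flow diagram concrete, here is a minimal sketch with hypothetical stage names and counts; the actual Template 2 follows the PRISMA protocol and may break the stages down differently.

```python
# Hypothetical counts illustrating the bookkeeping behind a PRISMA-style flow
# diagram: records entering each stage, records excluded, and the final selection.
stages = [
    ("Records identified across all database queries", 1240),
    ("After duplicate removal", 980),
    ("After title/abstract screening", 140),
    ("After full-text eligibility checks", 32),
    ("Included in quantitative synthesis", 28),
]

previous = None
for label, count in stages:
    excluded = f" (excluded: {previous - count})" if previous is not None else ""
    print(f"{label}: {count}{excluded}")
    previous = count
```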
Open Materials
“A central component of open material relates to sharing scripts for data wrangling and analyses…we provide examples in R, given its appeal for reproducibility and its widespread use in psychology. We designed a generic script (Template 3) that requires minimal tailoring for each individual meta-analysis.”
“To promote reproducibility, it is also crucial to document the exact search syntax used for all databases. We provide a search syntax template that can accommodate different databases, and that has been prepopulated with an example from psychology (Template 4).”
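As a rough illustration of what a search-syntax record can look like, the sketch below logs a hypothetical query for two databases; the queries, limits, and hit counts are invented for illustration and are not the prepopulated example in Template 4.

```python
# Hypothetical example of recording the exact search syntax used per database,
# so the literature search can be rerun verbatim by others.
search_log = [
    {
        "database": "PsycINFO",
        "date_run": "2020-01-15",
        "syntax": '("working memory training" OR "cognitive training") AND (transfer)',
        "limits": "Peer-reviewed journals; English; 2000-2019",
        "hits": 612,
    },
    {
        "database": "PubMed",
        "date_run": "2020-01-15",
        "syntax": '("working memory training"[tiab] OR "cognitive training"[tiab]) AND transfer[tiab]',
        "limits": "2000-2019",
        "hits": 489,
    },
]

for entry in search_log:
    print(f'{entry["database"]} ({entry["date_run"]}): {entry["hits"]} hits')
```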
“A thorough meta-analysis often includes articles or reports that were not directly available online…This is an important step, as it helps correct for some of the overall publication bias.”
“In the process of accessing all possible data, a researcher often has to contact authors directly…To facilitate these requests, we provide two email templates that have been worded carefully to ensure the rationale of the request is clear and that authors understand what the project will entail.”
“These templates include an open call for data, to email authors in the field during the initial stage of the search (Template 5), and a specific data request, to email authors of studies that appear to be relevant, but lack information or require clarification (Template 6).”
Open Data
“Template 7 has been designed to allow sharing all content relevant to the search results, thus increasing transparency and facilitating reproducibility. Information such as decisions to include or exclude a study, ambiguities, and the overall breakdown of articles across databases can be documented in this file.”
“In addition, we provide a template to store all data extracted from the search, after it has been curated (Template 8). The template includes metadata about each reference, descriptive statistics, as well as notes and additional information for each study.”
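The following is a minimal sketch of what such an extraction sheet might hold, with hypothetical studies and column names, plus one illustrative computation (a standardized mean difference pooled from the extracted descriptives); the actual Template 8 may organize these fields differently.

```python
# Hypothetical data-extraction rows for a meta-analysis, plus a standardized
# mean difference (Cohen's d) computed from the extracted descriptives.
import math

extracted = [
    {"study_id": "Smith2015", "n_t": 40, "mean_t": 5.2, "sd_t": 1.1,
     "n_c": 38, "mean_c": 4.6, "sd_c": 1.0, "notes": "Outcome at post-test"},
    {"study_id": "Lee2017",   "n_t": 55, "mean_t": 3.9, "sd_t": 1.4,
     "n_c": 57, "mean_c": 3.8, "sd_c": 1.3, "notes": "Intention-to-treat sample"},
]

def cohens_d(row):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((row["n_t"] - 1) * row["sd_t"] ** 2 +
                  (row["n_c"] - 1) * row["sd_c"] ** 2) / (row["n_t"] + row["n_c"] - 2)
    return (row["mean_t"] - row["mean_c"]) / math.sqrt(pooled_var)

for row in extracted:
    print(f'{row["study_id"]}: d = {cohens_d(row):.2f}')
```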
“Finally, regardless of how thoroughly planned a meta-analysis is, researchers will almost inevitably deviate from the original plan.”
“Keeping a record of all deviations from the original protocol is key to facilitate assessment by editors, reviewers, and readers. This record should include the specifics of what has been changed, and the extent of the change.”
“For example, if one changes a preregistered criterion on outlier detection, perhaps because it appears too stringent, it is important to state the difference between the two criteria, as well as the extent to which this impacts the results and the interpretation of the findings.”
“Deviations also need to be justified, that is, a rationale as to why the change was necessary or desired should be provided…Template 9 allows documenting and justifying deviations from planned protocols, and shows some fictional examples from psychology.”
Conclusion
“In this tutorial, we presented a rationale for open science practices in the context of meta-analysis, specifically focused on preregistration, open materials and open data. In an effort to facilitate broad impact and readability, we provided nine standardized templates with tips and recommendations.”
To read the article and access the templates, click here.

An Assessment of Open Science Research Practices in Psychology: Lots of Talk But…

[Excerpts taken from the article, “Estimating the prevalence of transparency and reproducibility-related research practices in psychology (2014-2017)”, by Hardwicke et al., posted at MetaArXiv Preprints]
“…we manually examined a random sample of 250 articles to estimate the prevalence of several transparency and reproducibility-related indicators in psychology articles published between 2014-2017.”
“The indicators were open access to published articles; availability of study materials, study protocols, raw data, and analysis scripts; preregistration; disclosure of funding sources and conflicts of interest; conduct of replication studies; and cumulative synthesis of evidence in meta-analyses and systematic reviews.”
“…we used a random number generator to sample 250 articles from all 224,556 documents in the Scopus database (as of 22nd September, 2018) designated with the document type “article” or “review”, published between 2014 and 2017, and with an All Science Journal Classification (ASJC) code related to psychology…”
Article availability (‘open access’)
“Among the 237 English-language articles, we were able to obtain a publicly available version for 154…whereas 74…were only accessible to us through a paywall and 9…were not available to us at all…”
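As an illustration of the sampling uncertainty around such counts, here is a minimal sketch that turns the 154-out-of-237 figure into a proportion with a 95% Wilson score interval; the interval is our illustration and is not a number reported in the paper.

```python
# Prevalence of publicly available articles among the 237 English-language
# articles, with a 95% Wilson score interval (illustrative only).
import math

count, n = 154, 237   # publicly available / English-language articles
z = 1.96              # ~95% normal quantile

p_hat = count / n
denom = 1 + z**2 / n
centre = (p_hat + z**2 / (2 * n)) / denom
half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))

print(f"Open access: {p_hat:.1%} "
      f"(95% CI {centre - half_width:.1%} to {centre + half_width:.1%})")
```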
Materials and protocol availability
“Of the 183 articles involving primary data…26 contained a statement regarding availability of original research materials such as survey instruments, software, or stimuli…”
“Of the 188 articles involving primary or secondary data, zero reported availability of a study protocol…”
“For the 26 articles where materials were reportedly available, the materials were not actually available for 7 articles due to broken links.”
Data availability
“Of the 188 articles that involved primary or secondary data, 4 contained data availability statements…”
“For one dataset, a fee was required (which we did not pay) to obtain access. Of the 3 accessible datasets…One dataset was incomplete whereas the remaining 2 appeared complete and clearly documented.”
Analysis script availability
“Of the 188 articles that involved primary or secondary data, an analysis script was shared for one article…”
Pre-registration
“Of the 188 articles involving primary or secondary data, 5 included a statement regarding preregistration…All accessible pre-registrations contained information about hypotheses and methods but did not contain analysis plans.”
Replication and evidence synthesis
“Of the 188 articles involving primary or secondary data, 10…claimed to include a replication study. Of the 183 articles involving primary data, 1 article…was cited by another article that claimed to be a replication.”
“Of the 183 articles involving primary data, 21 were formally included in systematic reviews…12 were formally included in meta-analyses…”
Discussion
“Our evaluation of transparency and reproducibility-related research practices in a random sample of 250 psychology articles published between 2014-2017 highlights that whilst many articles are publicly available, crucial components of research – protocols, materials, raw data, and analysis scripts – are rarely made publicly available…”
“Pre-registration remains a nascent proposition with minimal adoption.”
“Replication or evidence synthesis via meta-analysis or systematic review is relatively infrequent.”
“The current study has several caveats and limitations. Firstly, our findings are based on a random sample of 250 articles, and the obtained estimates may not necessarily generalize to specific contexts, such as other disciplines…”
To read the article, click here.