Not Only That, Effect Sizes from Registered Reports Are Also Much Lower

[In a recent post at TRN, we highlighted that positive results were drastically lower in registered reports. In this post, we report findings about effect sizes. Excerpts are taken from “The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases” by Thomas Schäfer and Marcus Schwarz, recently published in Frontiers in Psychology]
 “…how large is the problem of inflated effects? … the Open Science Collaboration (2015) found that replication effects were half the magnitude of original effects. … But the Open Science Collaboration’s focus on replication studies and use of … studies from high-ranked journals means there might not be sufficient information to reliably estimate the difference between published (i.e., potentially biased) effects and “true” … effects representative of the population…”
“In the present study, we employed a broader basis of empirical studies and compared the results of original research that has either been published traditionally (and might therefore be affected by the causes of bias just mentioned) or been made available in the course of a pre-registration procedure (therefore probably not affected by these biases).”
“…to get a representative overview of published effects in psychology, we analyzed a random selection of published empirical studies. … to estimate how strongly published effects might be biased, we distinguished between studies with and without pre-registration.”
“Since pre-registered studies have gained in popularity only in recent years, we did not expect there to be that many published articles adhering to a pre-registration protocol. We therefore set out to collect all of them instead of only drawing a sample.”
“Because our aim was to get an impression of the distribution of effects from psychological science in general, we transformed all effect sizes to a common metric if possible. As the correlation coefficient r was the most frequently reported effect size … we transformed effects to r whenever possible.”
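The paper does not reproduce its exact conversion formulas in this excerpt, but the standard transformation from Cohen's d (a common mean-difference effect size) to r illustrates the idea. A minimal sketch, assuming equal group sizes unless both n's are supplied:

```python
import math

def d_to_r(d, n1=None, n2=None):
    """Convert Cohen's d to a correlation r.

    With equal group sizes the standard conversion is
    r = d / sqrt(d^2 + 4); for unequal groups the constant 4
    is replaced by (n1 + n2)^2 / (n1 * n2).
    """
    if n1 is None or n2 is None:
        a = 4.0  # equal-n case
    else:
        a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + a)

print(round(d_to_r(0.5), 3))  # a "medium" d of 0.5 maps to r ≈ 0.243
```

Pooling everything on the r scale is what makes the two distributions in Figure 1 directly comparable.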
“…resulting in 684 values for r in total for studies without pre-registration and 89 values for r in total for studies with pre-registration.”
“Figure 1 (upper part) shows the empirical distribution of effects from psychological publications without pre-registration … The distribution is fairly symmetrical and only slightly right-skewed, having its mean at 0.40 and its grand median at 0.36.”
“Figure 1 (lower part) shows the empirical distribution of effects from psychological publications with pre-registration … It has its mean at 0.21 and its grand median at 0.16.”
“Our finding that effects in psychological research are probably much smaller than it appears from past publications has … a disadvantageous implication. … smaller effect sizes mean that the under-powering of studies in psychology is even more dramatic than recently discussed … because smaller population effects would require even larger samples to produce statistical significance.”
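The power implication can be made concrete with the standard Fisher z approximation for the sample size needed to detect a correlation (this calculation is our illustration, not one from the paper), using the two grand medians reported above:

```python
import math
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate n needed to detect correlation r at two-sided
    alpha with the given power, via Fisher's z transformation:
    n = ((z_alpha + z_beta) / atanh(r))^2 + 3."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)          # quantile for target power
    fz = math.atanh(r)                         # Fisher z of the target r
    return math.ceil(((z_a + z_b) / fz) ** 2 + 3)

print(n_for_correlation(0.36))  # 59  — non-preregistered grand median
print(n_for_correlation(0.16))  # 305 — preregistered grand median
```

Moving from the non-preregistered median (r = 0.36) to the preregistered one (r = 0.16) roughly quintuples the sample required for 80% power, which is the "even more dramatic" under-powering the authors describe.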
To read the article, click here.
