Archives


Not Only That, Effect Sizes from Registered Reports Are Also Much Lower

[In a recent post at TRN, we highlighted that positive results were drastically lower in registered reports. In this post, we report findings about effect sizes. Excerpts are taken from “The Meaningfulness of Effect Sizes in Psychological Research: Differences Between…


Why So Many Insignificant Results in a Meta-analysis?

[From the blog “Where Do Non-Significant Results in Meta-Analysis Come From?” by Ulrich Schimmack, posted at Replicability-Index] “It is well known that focal hypothesis tests in psychology journals nearly always reject the null-hypothesis … However, meta-analyses often contain a fairly…


Another Economics Journal Pilots Pre-Results Review

[From the article “Pre-results review reaches the (economic) lab: Experimental Economics follows the Journal of Development Economics in piloting pre-results review”, an interview with Irenaeus Wolff, published at http://www.bitss.org. The following are excerpts from that interview.] “In its April 2019…


IN THE NEWS: Vox (May 17, 2019)

[From the article “This economics journal only publishes results that are no big deal: Here’s how that might save science” by Kelsey Piper, published in Vox] “Most new publications, upon their launch, seek to promote their content as novel, surprising,…


In Two Decades, Will We Look Back And Wonder At All the Flawed Research?

[From the article, “Rein in the four horsemen of irreproducibility”, by Dorothy Bishop, published in Nature] “More than four decades into my scientific career, I find myself an outlier among academics of similar age and seniority: I strongly identify with…


Does Psychology Have a Publication Bias Problem? Yes and No

[From the article, “The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases” by Thomas Schäfer and Marcus Schwarz, published April 11, 2019 in Frontiers in Psychology] “From past publications without preregistration, 900…


It’s Not A Problem, It’s an Opportunity

[From the blog “The replication crisis is good for science” by Eric Loken, published at The Conversation] “Science is in the midst of a crisis: A surprising fraction of published studies fail to replicate when the procedures are repeated.” “Is…


GOODMAN: When You’re Selecting Significant Findings, You’re Selecting Inflated Estimates

Replication researchers cite inflated effect sizes as a major cause of replication failure. It turns out this is an inevitable consequence of significance testing. The reason is simple. The p-value you get from a study depends on the observed effect…
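The selection mechanism described here can be illustrated with a quick simulation (all numbers below — the true effect size, sample sizes, and study count — are illustrative assumptions, not figures from the post). Many low-powered studies of the same true effect are simulated; averaging the observed effect only over the "significant" ones shows the inflation.

```python
import random
import math

# Illustrative sketch of the "winner's curse": when studies must
# clear a significance threshold to be counted, the surviving
# effect estimates are inflated relative to the true effect.
random.seed(42)

TRUE_EFFECT = 0.2   # assumed true standardized mean difference (Cohen's d)
N = 30              # assumed per-group sample size (low power on purpose)
STUDIES = 5000      # number of simulated studies

def simulate_study():
    """Run one two-group study; return its observed d and t statistic."""
    treat = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(N)]
    mean_t = sum(treat) / N
    mean_c = sum(ctrl) / N
    var_t = sum((x - mean_t) ** 2 for x in treat) / (N - 1)
    var_c = sum((x - mean_c) ** 2 for x in ctrl) / (N - 1)
    sd_pooled = math.sqrt((var_t + var_c) / 2)
    d = (mean_t - mean_c) / sd_pooled          # observed effect size
    t = d * math.sqrt(N / 2)                   # two-sample t statistic
    return d, t

results = [simulate_study() for _ in range(STUDIES)]
all_d = [d for d, _ in results]
sig_d = [d for d, t in results if t > 1.96]    # keep only "significant" studies

mean_all = sum(all_d) / len(all_d)
mean_sig = sum(sig_d) / len(sig_d)
print(f"true effect:              {TRUE_EFFECT:.2f}")
print(f"mean effect, all studies: {mean_all:.2f}")
print(f"mean effect, significant: {mean_sig:.2f}")
```

Across all studies the average observed effect recovers the true value, but among the studies that cleared the threshold it is far larger: with low power, only studies that happened to draw an unusually large sample effect reach significance, so conditioning on significance guarantees overestimation.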


Registered Reports Are Not Optimal?

[From the working paper, “Which findings should be published?” by Alexander Frankel and Maximilian Kasy] “There have been calls for reforms in the direction of non-selective publication. One proposal is to promote statistical practices that de-emphasize statistical significance … Another…


The AEA Interviews Ted Miguel About the Replication Crisis

[From the article “Making economics transparent and reproducible” by Tyler Smith, published on the American Economic Association’s website] “The AEA spoke with Miguel about the replication problem in economics and how the next generation of researchers is embracing new tools…
