REBLOG: Evidence of Publication Bias and Misreported p-Values

FROM THE BLOG POLITICAL SCIENCE REPLICATION:  “A new article by researchers at the University of Amsterdam shows that publication bias towards statistically significant results may cause p-value misreporting. The team examined hundreds of published articles and found that authors had reported p-values < .05 when they were in fact larger. They conclude that publication bias may incentivize researchers to misreport results.” To read the blog, click here.  To read the article, click here.
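The blog post and article describe the checks in detail; as a rough, hypothetical illustration of the kind of consistency check involved, the snippet below (Python, using scipy) recomputes a two-tailed p-value from a reported t-statistic and its degrees of freedom and flags cases where a reported "p < .05" claim does not match. The reported values here are invented for illustration.

    # Recompute a two-tailed p-value from a reported t statistic and degrees of
    # freedom, and flag results whose "p < .05" claim is inconsistent with it.
    # The reported values below are invented for illustration.
    from scipy import stats

    reported = [
        {"t": 2.10, "df": 28, "claimed_sig": True},   # recomputed p ~ .045: consistent
        {"t": 1.70, "df": 45, "claimed_sig": True},   # recomputed p ~ .096: inconsistent
    ]

    for r in reported:
        p = 2 * stats.t.sf(abs(r["t"]), r["df"])      # two-tailed p-value
        consistent = (p < 0.05) == r["claimed_sig"]
        print(f"t({r['df']}) = {r['t']}, recomputed p = {p:.3f}, consistent = {consistent}")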

Is Most Economics Research Never Cited?

In a recent blog post in the LA Times entitled “Are most academic papers really worthless? Don’t trust this worthless statistic”, MICHAEL HILTZIK counters the widely held belief that most research is never cited.  To read, click here.   So just how frequently, or infrequently, is academic research cited?  It turns out, the question is not so easy to answer. To read more about that, click here and here.  

FiveThirtyEight.com asks scientists to explain the meaning of a p-value. Hilarity ensues.

FROM THE ARTICLE:   The following is from an interview with Steven Goodman, co-director of METRICS. “Even after spending his ‘entire career’ thinking about p-values, he said he could tell me the definition, ‘but I cannot tell you what it means, and almost nobody can.’”  To read more, click here.

GARRET CHRISTENSEN: An Introduction to BITSS

The Berkeley Initiative for Transparency in the Social Sciences (BITSS) was formed in late 2012 after a meeting in Berkeley that led to the publication of an article in Science on ways to increase transparency and improve reproducibility in research across the social sciences. BITSS is part of Berkeley’s Center for Effective Global Action (CEGA), and is led by development economist Edward Miguel and advised by a group of leaders in transparent research from economics, psychology, political science, and public health.
Since our founding, we’ve worked to build a network of like-minded researchers and focused on the following aspects of research transparency, which we hope cover the entire lifecycle of a research project:
– Registering Studies: Whether it is clinicaltrials.gov, the AEA’s registry, EGAP’s registry, or 3ie’s registry, creating a database of the universe of studies helps combat publication bias.
– Writing Pre-Analysis Plans: Tying your hands a bit by pre-specifying the analysis you plan to run can reduce your ability to consciously or unconsciously mine the data for spurious results.
– Replication and Meta-Analysis: We encourage researchers to conduct and publish replications and meta-analyses so we can build on existing work more systematically.
– Reproducible Workflow: Organizing your research so that others (or just your future self) can understand your code and re-run it to get the same results (a toy sketch follows this list).
– Sharing Data and Code: Put data, code, and adequate documentation in a trusted public repository so that others can more easily build off your work.
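As a toy sketch of the reproducible-workflow idea above (my own illustration, not a BITSS template): a single script that fixes the random seed, runs the analysis end to end, and writes the results along with the software versions needed to re-run it.

    # Toy reproducible-workflow sketch (illustration only, not a BITSS template):
    # one script that fixes the seed, runs the full analysis, and records the
    # results together with the software versions used.
    import sys
    import numpy as np

    rng = np.random.default_rng(seed=20151201)   # fixed seed: identical numbers every run

    # Simulated data standing in for a real dataset stored alongside the code.
    x = rng.normal(size=500)
    y = 2.0 * x + rng.normal(size=500)

    slope, intercept = np.polyfit(x, y, deg=1)   # simple OLS fit

    with open("results.txt", "w") as f:
        f.write(f"slope={slope:.4f}, intercept={intercept:.4f}\n")
        f.write(f"python={sys.version.split()[0]}, numpy={np.__version__}\n")

Anyone re-running the script gets the same numbers and a record of the software versions that produced them.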
To help spread the methods of more transparent and reproducible research, we’ve engaged in the following activities:
– Manual of Best Practices: a how-to guide and reference manual for researchers interested in conducting reproducible research.
– Semester-Long Research Transparency Course: taught by Edward Miguel as Econ 270D, the course is available on YouTube, and we are working to turn it into an interactive MOOC.
– Summer Institute and Workshops: an annual training for graduate students and young researchers held each June in Berkeley, featuring lectures from eminent scholars in transparency plus hands-on training in dynamic documents, version control, and data sharing.
– Annual Meeting and Conference Sessions: we host a conference in Berkeley with an open call for papers (coming up December 10 and 11!) and have also organized sessions at past AEA/ASSA meetings and other conferences. This year we’re co-hosting a workshop on replication and transparency in San Francisco January 6-7, right after the AEA meeting. Registration is open now!
– Grants: We had a call for our Social Science Meta-Analysis and Research Transparency (SSMART) grants. Announcements of winners will be made soon, and we plan to have an additional call next year.
– Prizes: We will soon be announcing the first winners of the Leamer-Rosenthal Prizes for Open Social Science, which recognize both young researchers who have been incorporating transparency in their work and established faculty who have been teaching it.
If you’re interested in getting involved, we’d love to hear from you. (You can e-mail me at garret@berkeley.edu, or our Program Director Jen Sturdy at jennifer.sturdy@berkeley.edu.) We’re working on formalizing a Catalyst program through which you could serve as an ambassador for transparency at your own university or institution and receive BITSS funding for workshops and trainings. Follow us on our blog or on Twitter (@UCBITSS) for the latest updates.

REBLOG: The Reformation: Can Social Scientists Save Themselves?

We recently came across this article in the May/June 2014 issue of Pacific Standard magazine.  Okay.  It’s not “new”, but it provides an excellent historical overview of some of the issues associated with reproducibility of social science research.  WARNING: It is excellent, but it is long.  To read, click here.  A nice follow-up article, also from Pacific Standard, can be found here.

WANTED: Replications of the Most Influential Empirical Papers in Financial Economics

FROM THE JOURNAL: “The Critical Finance Review is planning to publish issues dedicated to replicating the most influential empirical papers in financial economics. It is explicitly not the goal of these replication issues either to prove or to disprove the papers. The replications are meant to be as objective as possible. The CFR wants no incentive on itself or the authors to slant the results either favorably or unfavorably. The contract between an invited replicating team (often headed by a senior researcher) and the CFR is that the journal will publish the replicating paper even if (or especially if) all the findings of the original paper hold perfectly.” To read more, click here.

Can Bayes Save p-values?

FROM THE ARTICLE: “Currently used thresholds in classical tests of statistical significance are responsible for much of the non-reproducibility of scientific studies… Bayesian testing methods that calculate the posterior probability in favor of the null hypothesis alleviate the unreliability of p-values, and when prior assumptions under the alternative hypothesis are made using uniformly most powerful Bayesian tests, the resulting posterior probability is both objective and equivalent to a classical test, but with higher standards of evidence. We view these Bayesian testing methods as a simple and potent way to reduce the non-reproducibility in modern science.” To read more, click here.
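The article gives the full construction; the snippet below is only my rough sketch of the one-sided normal-mean case, assuming equal prior odds on the two hypotheses and an arbitrary evidence threshold of 25. It computes the posterior probability of the null implied by a uniformly most powerful Bayesian test and compares it with the classical one-sided p-value for the same z-statistic.

    # Rough sketch of a uniformly most powerful Bayesian test (UMPBT) for a
    # one-sided normal-mean test, assuming equal prior odds; gamma = 25 is an
    # arbitrary evidence threshold chosen for illustration.
    import math

    def umpbt_posterior_null(z, gamma=25.0):
        """Posterior probability of H0 given z under the UMPBT(gamma) alternative."""
        bf10 = math.exp(z * math.sqrt(2 * math.log(gamma)) - math.log(gamma))
        return 1.0 / (1.0 + bf10)

    def one_sided_p(z):
        """Classical one-sided p-value for the same z-statistic."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    for z in (1.645, 1.96, 2.576):   # roughly p = .05, .025, .005 one-sided
        print(f"z = {z}: p = {one_sided_p(z):.4f}, P(H0 | data) = {umpbt_posterior_null(z):.3f}")

Under these assumptions, a z-statistic just at the conventional one-sided .05 cutoff corresponds to a posterior probability of the null of roughly 0.3, which is one way of reading the “higher standards of evidence” point.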

So You Want to Learn How to Do a Replication?

(FROM THE BITSS WEBSITE) The Berkeley Initiative for Transparency in the Social Sciences will be holding a workshop on replication and transparency following the AEA Annual Meeting in San Francisco, CA.  “Topics will include teaching integrity in empirical research, replication of macro-models, use of pre-analysis plans and their relationship to replication, and replication case studies.”  To learn more about BITSS and their replication workshops, click here.

ETIENNE LEBEL: Introducing “CurateScience.Org”

It is my pleasure to introduce Curate Science (http://CurateScience.org) to The Replication Network. Curate Science is a web application that aims to facilitate and incentivize the curation and verification of empirical results in the social sciences (with an initial focus on psychology). Science is the most successful approach to generating cumulative knowledge about how our world works. This success stems from a key activity, independent verification, which maximizes the likelihood of detecting errors and hence the reliability and validity of empirical results. The current academic incentive structure, however, does not reward verification, so verification rarely occurs, and when it does, it is difficult and inefficient. Curate Science aims to help change this by facilitating the verification of empirical results (pre- and post-publication) in terms of (1) replicability of findings in independent samples and (2) reproducibility of results from the underlying raw data.
The platform facilitates replicability by enabling users to link replications directly to their original studies, with corresponding real-time updating of meta-analytic effect size estimates and forest plots of replications (see Figure below).[1] The platform aims to incentivize verification of replicability by allowing users to easily invite others to replicate their work and by providing a professional credit system that explicitly acknowledges replicators’ hard work, commensurate with the “expensiveness” of the executed replication.
[Figure: forest plot of replications with real-time meta-analytic effect size estimate]
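Curate Science’s own implementation is not described here; as a generic illustration of what real-time updating of a meta-analytic effect size estimate involves, the snippet below recomputes a fixed-effect (inverse-variance-weighted) pooled estimate each time a replication is added. The effect sizes and standard errors are invented.

    # Generic fixed-effect (inverse-variance) pooling of an original study and
    # its replications, recomputed whenever a replication is added.
    # Effect sizes and standard errors are invented for illustration.
    def pooled_effect(studies):
        """studies: list of (effect_size, standard_error) tuples."""
        weights = [1.0 / se ** 2 for _, se in studies]
        est = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
        se = (1.0 / sum(weights)) ** 0.5
        return est, se

    studies = [(0.48, 0.15)]                         # original study
    for replication in [(0.21, 0.10), (0.05, 0.08)]:
        studies.append(replication)
        est, se = pooled_effect(studies)
        print(f"k = {len(studies)}: pooled effect = {est:.3f} (SE = {se:.3f})")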
The platform facilitates reproducibility by enabling researchers to check and endorse the analytic reproducibility of each other’s empirical results via data analyses executed within their web browser for studies with open data. The platform will visually acknowledge the endorser via a professional credit system to incentivize researchers to verify the reproducibility of each other’s results, when direct replications are not feasible or too expensive to execute.
The platform allows curation of study information, which is required for independent verification in terms of replicability and reproducibility. However, the platform also features additional curation activities including “revised community abstracts” (crowd-sourced abstracts summarizing how follow-up research has qualified original findings, e.g., boundary conditions) and curation of organic and external post-publication peer-review commentaries.
Our vision
Curate Science’s vision for the future of academic science is one where verification is routinely and easily done in the cloud, and in which appropriate professional credit is given to researchers who engage in such verification activities (i.e., verifying replicability and reproducibility of empirical results, and post-publication peer review). We foresee a future where one can easily look up important articles in one’s field to see the current status of findings via revised community abstracts (a la Wikipedia). This will maximize the impact and value of research in terms of re-use by other researchers (e.g., help unearth new insights from different theoretical perspectives), and hence accelerate theoretical progress and innovation for the benefit of society.
Current activities
Our current activities include curating articles and replications in psychology, in part by identifying professors whose PhD students will curate and link replications for seminal studies covered in their seminar classes. We’re also busy with advocacy and canvassing: I’m currently on a 3-month USA-Europe tour presenting Curate Science and gathering concrete feedback from over 10 university psychology departments. Finally, and crucially, we’re hard at work on software development and refinement of the website’s user interface to improve its usability and user experience (e.g., fixing bugs and implementing improvements). To check out the early beta version of our website, please go here: http://www.curatescience.org/beta#/login

[1] In the future, users will also be able to create their own meta-analyses in the cloud for generalizability studies (a.k.a. “conceptual replications”), which other users will easily be able to add to and update via crowd-sourcing.

REPLICATE THIS: Do Dark-Skinned Footballers Get Given Red Cards More Often Than Light-Skinned Ones?

FROM THE ARTICLE: “It sounds like an easy question for any half-competent scientist to answer. Do dark-skinned footballers get given red cards more often than light-skinned ones? But, as RAPHAEL SILBERZAHN …and ERIC UHLMANN … illustrate in this week’s Nature, it is not. The answer depends on whom you ask, and the methods they use.”  To read more, click here.