It is my pleasure to introduce Curate Science (http://CurateScience.org) to The Replication Network. Curate Science is a web application that aims to facilitate and incentivize the curation and verification of empirical results in the social sciences, with an initial focus on psychology. Science is the most successful approach to generating cumulative knowledge about how our world works. This success stems from a key activity, independent verification, which maximizes the likelihood of detecting errors and hence the reliability and validity of empirical results. The current academic incentive structure, however, does not reward verification, so verification rarely occurs, and when it does, it is difficult and inefficient. Curate Science aims to help change this by facilitating the verification of empirical results (pre- and post-publication) in terms of (1) the replicability of findings in independent samples and (2) the reproducibility of results from the underlying raw data.
The platform facilitates replicability by enabling users to link replications directly to their original studies, with corresponding real-time updating of meta-analytic effect size estimates and forest plots of replications (see Figure below).[1] The platform aims to incentivize verification of replicability by allowing users to easily invite others to replicate their work and by providing a professional credit system that explicitly acknowledges replicators’ hard work, commensurate with the “expensiveness” of the executed replication.
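To make the real-time meta-analytic updating concrete, here is a minimal sketch (not Curate Science’s actual code; the function name and numbers are hypothetical) assuming a standard fixed-effect model, in which per-study effect sizes are pooled via inverse-variance weighting:

```python
# A minimal sketch, assuming a fixed-effect model; inverse-variance
# weighting is the standard way to pool effect sizes across studies.
import math

def fixed_effect_meta(effects, variances):
    """Pool per-study effect sizes (e.g., Cohen's d) using
    inverse-variance weights; returns (estimate, SE, 95% CI)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical numbers: an original study plus two linked replications.
effects = [0.45, 0.12, 0.20]       # effect size estimates (Cohen's d)
variances = [0.040, 0.015, 0.020]  # their sampling variances
pooled, se, (lo, hi) = fixed_effect_meta(effects, variances)
print(f"Pooled d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Each newly linked replication simply contributes one more (effect size, variance) pair, so the pooled estimate (the summary row of a forest plot) can be recomputed on the fly.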
The platform facilitates reproducibility by enabling researchers to check and endorse the analytic reproducibility of each other’s empirical results via data analyses executed within the web browser, for studies with open data. The platform will visually acknowledge endorsers via a professional credit system, incentivizing researchers to verify the reproducibility of each other’s results when direct replications are infeasible or too expensive to execute.
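As an illustration of what such a reproducibility check might involve, here is a hedged sketch (the function, data, and tolerance are hypothetical, not the platform’s API) that recomputes a reported Welch’s t statistic from open data and flags whether it matches the published value:

```python
# A minimal sketch; reproduce_t_test and its tolerance are hypothetical,
# not the platform's API. It recomputes a Welch's t statistic from the
# raw data and checks it against the value reported in the article.
import math
from statistics import mean, stdev

def reproduce_t_test(group_a, group_b, reported_t, tol=0.01):
    """Return (matches, recomputed_t) for an independent-samples
    Welch's t test recomputed from the raw data."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
    t = (mean(group_a) - mean(group_b)) / math.sqrt(va / na + vb / nb)
    return abs(t - reported_t) <= tol, t

# Hypothetical open data and the statistic reported in the original article.
group_a = [5.1, 4.8, 6.0, 5.5, 4.9]
group_b = [4.2, 4.6, 3.9, 4.4, 4.1]
matches, t = reproduce_t_test(group_a, group_b, reported_t=4.06, tol=0.05)
print(f"Recomputed t = {t:.2f}; matches reported value: {matches}")
```

A successful recomputation of this kind is what an endorsement of analytic reproducibility would certify.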
The platform supports curation of the study information required for independent verification of replicability and reproducibility. It also features additional curation activities, including “revised community abstracts” (crowd-sourced abstracts summarizing how follow-up research has qualified original findings, e.g., boundary conditions) and curation of organic and external post-publication peer-review commentaries.
Our vision
Curate Science’s vision for the future of academic science is one where verification is routinely and easily done in the cloud, and where appropriate professional credit is given to researchers who engage in such verification activities (i.e., verifying the replicability and reproducibility of empirical results, and post-publication peer review). We foresee a future where one can easily look up important articles in one’s field and see the current status of findings via revised community abstracts (à la Wikipedia). This will maximize the impact and value of research by facilitating re-use by other researchers (e.g., unearthing new insights from different theoretical perspectives), and hence accelerate theoretical progress and innovation for the benefit of society.
Current activities
Our current activities include the curation of articles and replications in psychology, which involves identifying professors whose PhD students will curate and link replications of seminal studies covered in their seminar courses. We’re also busy with advocacy and outreach: I’m currently on a 3-month USA-Europe tour presenting Curate Science and gathering concrete feedback from over 10 university psychology departments. Finally, and crucially, we’re hard at work on software development and refinement of the website’s user interface to improve its usability and user experience (e.g., fixing bugs and implementing interface improvements). To check out the early beta version of our website, please go here: http://www.curatescience.org/beta#/login
[1] In the future, users will also be able to create their own meta-analyses in the cloud for generalizability studies (a.k.a. “conceptual replications”), which other users will be able to add to and update via crowd-sourcing.