This is the repository for a [hackathon project](https://hackmd.io/@scibeh/HkV20LiLv#2-Optimising-research-dissemination-and-curation) originating from the [SciBeh Virtual Workshop 2020](https://www.scibeh.org/events/workshop2020/).

**Background**

Reviewing is a critical process in research, but it is not properly built into academic workloads, it is inconsistent in quality, and it is not integrated into (or mandated as part of) standard research training. Reviewing is thus a limited resource relative to the volume of new research, especially as more work is published ahead of formal review on pre-print servers. While pre-prints speed up the dissemination of new knowledge to research and public audiences alike, this new research needs more critical evaluation, and clearer communication of the key aspects that support that evaluation.

**Project Goals**

We are developing tools to aid the critical evaluation of pre-prints. These should be accessible to diverse readers of pre-prints.

**Current Outputs**

- Assessment rubric for pre-print review. The rubric is designed in the first instance to train students to review pre-prints (e.g., as part of a modular assessment). It can be integrated into any teaching module where students are required to read the research literature. *Status: approaching beta version for piloting with a small group of students in early 2021.*
- Tagging pre-prints in a basic review. In operationalising the rubric, we are considering tools that could tag the pre-prints reviewed with the rubric (whether by students or researchers). These tags can form meta-data about key pre-print characteristics that facilitates further research synthesis, or provide a snapshot of the research context and application (e.g., a 'quick label'; see the next output and the sketch after this list). *Status: in development.*
- Quick label to communicate research quality. This is designed as a way to communicate succinctly the essence of a research paper and how others have evaluated it. *Status: in development.*
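As a rough illustration of the tagging idea, the sketch below shows one way rubric answers could become a machine-readable tag record for a pre-print. All field names, values, and the URL are hypothetical assumptions for illustration only; the project has not yet settled on a schema.

```python
from dataclasses import dataclass, asdict, field
import json

# Hypothetical tag record for a reviewed pre-print.
# Field names and allowed values are illustrative assumptions,
# not a schema the project has adopted.
@dataclass
class PreprintTags:
    preprint_url: str                 # link to the pre-print being reviewed
    discipline: str                   # e.g. "psychology"
    study_design: str                 # e.g. "experiment", "survey", "meta-analysis"
    preregistered: bool               # was the study pre-registered?
    open_data: bool                   # are data openly available?
    limitations_acknowledged: bool    # does the paper discuss its limitations?
    reviewer_role: str                # e.g. "student", "researcher"
    rubric_scores: dict = field(default_factory=dict)  # rubric item -> score

# Example: a single review exported as JSON meta-data that could feed
# research synthesis or a 'quick label'.
tags = PreprintTags(
    preprint_url="https://psyarxiv.com/example",  # placeholder URL
    discipline="psychology",
    study_design="experiment",
    preregistered=True,
    open_data=False,
    limitations_acknowledged=True,
    reviewer_role="student",
    rubric_scores={"methods_transparency": 3, "claims_match_evidence": 2},
)

print(json.dumps(asdict(tags), indent=2))
```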
**Resources**

This is a (non-exhaustive) list of resources/existing literature on peer review processes and guidelines. We will keep adding to it.

* [CREPS guidelines for replication projects](https://osf.io/wfc6u/wiki/home/)
* [OutbreakScience rubric](https://outbreaksci.prereview.org/2007.09477)
* PRISMA [checklist](http://www.prisma-statement.org/documents/PRISMA%202009%20checklist.doc) and [systematic review protocol](http://www.prisma-statement.org/Protocols/)
* ARRIVE [guidelines for reporting](https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000410)
* Replicability and Meta-Analytic Suitability Inventory ([RAMSI](https://journals.sagepub.com/doi/full/10.1177/1745691614551749))
* Research to understand how review judgements are made: [torr project](https://www.torrproject.org/)
* PREreview [resource centre](https://content.prereview.org/resources/)
* PLoS [review resources](https://plos.org/resources/for-reviewers/)
* Rubin's framework for evaluation: Edward L Rubin, ‘On Beyond Truth: A Theory for Evaluating Legal Scholarship’ (1992) 80 California Law Review 889.
* Transparency and openness (TOP) guidelines from the [Center for Open Science](https://topfactor.org/)
* [Review of news media](https://healthfeedback.org/feedbacks/)
* [Rapid Reviews: COVID-19](https://rapidreviewscovid19.mitpress.mit.edu/guidelines) guidelines for reviewers
* [Equator network database of reporting guidelines](https://www.equator-network.org/)
* [APA JARS Guidelines](https://apastyle.apa.org/jars)
* [How to Review a Paper (blog by Rene Bekkers)](https://renebekkers.wordpress.com/2020/06/24/how-to-review-a-paper/)
* [Undark Ratings System for Popular Science Books](https://undark.org/2019/11/25/rating-system-weigh-claims-science-books/)
* [Proposals for accelerating pre-prints](https://asapbio.org/category/preprint-sprint-proposals)
* [Soderberg et al., Research Quality of Registered Reports Compared to the Traditional Publishing Model](https://osf.io/preprints/metaarxiv/7x9vy)

Places to locate pre-prints (non-exhaustive, evolving list):

* arXiv platforms
* [Authorea](https://www.authorea.com/preprints)
* [CrowdPeer](https://www.thecrowdpeer.com/)
* ASAPbio's [directory of pre-print servers](https://asapbio.org/preprint-servers)
* [F1000](https://f1000research.com/)
* [ScienceMatters](https://sciencematters.io/)
* [Zenodo](https://zenodo.org/)

**Plans**

The pre-print review rubric is currently focused as a teaching tool and optimised for psychology in the pilot. We plan to:

- Adapt the rubric to more disciplines.
- Increase its relevance to a wider range of review expertise.
- Make it adaptable to different modes of reviewing (e.g., journal clubs).

Suggestions for other expansion plans are very welcome!