## How should we be assessing the robustness of the published literature?

As scientists, we read and rely on the published literature constantly, even as we become aware of a growing number of problems with the reproducibility of this work. Our reliance on an unreliable published literature is hugely costly for the field at large. So how should we scrutinise the published literature before using it as the basis for research proposals? What can we do in systematic reviews and meta-analyses to account for the (un)reliability of published studies?

**In this session, our goal will be to develop a checklist for assessing the robustness of a body of published work for literature reviews.**

Questions we might consider in building the checklist:

- *What role can p-curves and z-curves play in assessing reproducibility?*
- *Can we use correlates of replicability in large-scale replication studies as indicators of reliability?*
- *Should psychology have a Cochrane 'risk of bias' assessment tool of its own?*

Access the Google document here: https://docs.google.com/document/d/1fenYqzB2Ek_hkHwA8obQPf9SfFwImb_4r_FJ9WypHOw/edit?usp=sharing
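To make the first bullet concrete: the core of a p-curve analysis is tabulating the distribution of statistically significant p-values across a set of studies. The sketch below is a minimal illustration of that tabulation step only — the full p-curve method (Simonsohn, Nelson, and Simmons) adds formal skewness tests, which are omitted here, and the p-values used are hypothetical.

```python
def p_curve_counts(p_values, alpha=0.05, bins=5):
    """Bin significant p-values (0 < p < alpha) into equal-width intervals.

    A right-skewed distribution (most mass near zero) is usually read as a
    sign of evidential value; a left-skewed one (piling up just under .05)
    can signal selective reporting or p-hacking.
    """
    significant = [p for p in p_values if 0 < p < alpha]
    width = alpha / bins
    counts = [0] * bins
    for p in significant:
        # Clamp to the last bin to guard against floating-point edge cases.
        idx = min(int(p / width), bins - 1)
        counts[idx] += 1
    return counts

# Hypothetical p-values sampled from a body of published studies
ps = [0.001, 0.003, 0.004, 0.012, 0.020, 0.031, 0.044, 0.049, 0.20, 0.60]
print(p_curve_counts(ps))  # counts per 0.01-wide bin over (0, .05)
```

Inspecting these bin counts (are most significant results well below .05, or clustered just beneath it?) is one rough indicator a checklist item could ask reviewers to examine.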