Computational analyses are playing an increasingly central role in research. Journals, funders, and researchers are calling for published research to include associated data and code. However, many researchers have not received training in best practices and tools for sharing code and data. This workshop aims to address that training gap while also providing those who support researchers with curated best-practices guidance and tools.

This workshop is unique among reproducibility workshops because of its practical, step-by-step design. It is composed of hands-on exercises that prepare research code and data for computationally reproducible publication. Although the workshop begins with brief introductory material on computational reproducibility, the bulk of it is guided work with data and code. The basic best practices for publishing code and data are covered with curated resources, and participants work through preparing their research for reuse: organization, documentation, automation, and submitting their code and data for sharing. Tools that support reproducibility will be introduced, but all lessons are platform agnostic.

**Audience**: Researchers who use code in their research and wish to share it, those who plan to do research using code, and those who support researchers in sharing code.

**Preparation**:

- Bring a laptop to fully participate.
- Participants may bring their own data and code to work through during the workshop.
- If you don't have code and data of your own to bring, you will follow along with example code and data.

**Best practices topics covered**:

**Organization**

- Create one repository or directory that holds all related research files.
- Organize your research to separate data, code, and results.
- Save results explicitly.

**Documentation**

- Document each element or variable in your dataset with a data dictionary / codebook.
- Create a project README file.
- Specify licenses for your data and your code.
- Consider literate programming.
- Specify your computational environment and package versions (see the first sketch after these lists).

**Automation**

- Configure a container to make your analysis portable and reusable.
- Change absolute paths to relative paths.
- Create a master script for your analyses (see the second sketch after these lists).

**Dissemination**

- Write a detailed study protocol before you gather your data.
- Report all results, no matter their direction or statistical significance.
- Publish and share your data and code.
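As a concrete illustration of recording your computational environment, here is a minimal sketch in Python (assuming a Python-based analysis; the helper name, package list, and output filename are hypothetical, not part of the workshop materials). It writes the interpreter version and the installed versions of the packages an analysis depends on to a plain-text file that can be shared alongside the code.

```python
# Minimal sketch for recording the computational environment.
# Assumes a Python-based analysis; the helper name, package list, and
# output filename are illustrative placeholders.
import sys
import importlib.metadata


def write_environment_report(packages, path="environment.txt"):
    """Record the Python version and the installed version of each package."""
    with open(path, "w") as report:
        report.write(f"python {sys.version.split()[0]}\n")
        for name in packages:
            report.write(f"{name}=={importlib.metadata.version(name)}\n")


if __name__ == "__main__":
    # List the packages your analysis imports.
    write_environment_report(["numpy", "pandas"])
```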
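And as an illustration of the automation points, here is a minimal master-script sketch, again in Python and again with hypothetical names: it resolves all paths relative to the project root rather than to any one machine's directory layout, and runs each analysis stage in order so the full pipeline can be reproduced with a single command.

```python
# Minimal master-script sketch: run every stage of the analysis in order,
# using paths relative to the project root. The directory names and the
# stage functions are placeholders for your own analysis steps.
from pathlib import Path

# Resolve the project root from this script's location, not from an
# absolute path, so the project runs the same way on any machine.
PROJECT_ROOT = Path(__file__).resolve().parent
DATA_DIR = PROJECT_ROOT / "data"
RESULTS_DIR = PROJECT_ROOT / "results"


def clean_data(raw_path, clean_path):
    """Placeholder: read raw data, tidy it, and save the cleaned copy."""
    clean_path.write_text(raw_path.read_text())


def run_analysis(clean_path, results_path):
    """Placeholder: analyze the cleaned data and save results explicitly."""
    results_path.write_text(f"analyzed {clean_path.name}\n")


def main():
    RESULTS_DIR.mkdir(exist_ok=True)
    cleaned = DATA_DIR / "cleaned.csv"
    clean_data(DATA_DIR / "raw.csv", cleaned)
    run_analysis(cleaned, RESULTS_DIR / "summary.txt")


if __name__ == "__main__":
    main()
```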