**ODCS: Data Quality in Online Data Collection**
Even when study data is collected in person, outlining the steps to ensure data quality for human subjects research can feel a little overwhelming. Now, with the large shift to online data collection, there is even MORE to think about: How do we ensure that only the people who *should* be responding to our studies *are* responding? What are the best ways to make sure our participants are actually paying attention? What's the deal with bots gaining access to studies? How do I put a reproducible data plan in place?
We'll go over these questions (and more!) in our interactive workshop. In particular, we will discuss:
- Strategies to prevent bad actors and bots from taking your online study
- Strategies to assess the quality of data *after* it has been collected online (see the sketch after this list)
- The upsides and downsides of implementing these strategies
- How your participant pool (e.g., social media vs. MTurk vs. Prolific vs. a market research panel vs. your own participant pool) can influence data quality and the data-quality measures you want to put in place
- How to build a data-quality game plan for your online study, with an emphasis on research reproducibility
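As a small taste of the post-collection screening we'll cover, here is a minimal sketch in Python (pandas). The column names (`duration_sec`, `attention_check`, `ip_address`), thresholds, and checks are hypothetical placeholders; your own survey export and study design will dictate the real ones.

```python
import pandas as pd

# Hypothetical thresholds -- tune these to your own study design.
MIN_DURATION_SEC = 120                # flag suspiciously fast completions
EXPECTED_ATTENTION_ANSWER = "agree"   # the instructed response to an attention-check item

def flag_low_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Add boolean flag columns for common post-collection quality checks."""
    out = df.copy()

    # 1. Speeders: finished far faster than a careful respondent plausibly could.
    out["flag_too_fast"] = out["duration_sec"] < MIN_DURATION_SEC

    # 2. Failed attention check: did not give the instructed response.
    out["flag_failed_attention"] = out["attention_check"] != EXPECTED_ATTENTION_ANSWER

    # 3. Duplicate IP addresses: possible repeat takers or bot activity.
    out["flag_duplicate_ip"] = out["ip_address"].duplicated(keep=False)

    # Overall flag: any individual check tripped.
    out["flag_any"] = out[
        ["flag_too_fast", "flag_failed_attention", "flag_duplicate_ip"]
    ].any(axis=1)
    return out

if __name__ == "__main__":
    # Tiny fabricated dataset standing in for a real survey export.
    responses = pd.DataFrame({
        "duration_sec": [95, 480, 300],
        "attention_check": ["agree", "agree", "disagree"],
        "ip_address": ["10.0.0.1", "10.0.0.1", "10.0.0.2"],
    })
    print(flag_low_quality(responses))
```

The point isn't these particular checks or cutoffs; it's that writing your screening rules down as code makes your exclusion decisions transparent and reproducible, which is exactly the kind of game plan the workshop will help you build.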
**To be successful, you should have:**
- A general familiarity with survey and/or experimental study data collection in the social sciences