We will first test whether the overall VO effect replicates with the new distractor tasks, using the following code:
regress correct i.vo, vce(robust)
Next, we will analyse each of our distractor tasks to test whether it moderates the VO effect. The following code will be run for each DV. First, the interaction term is generated:
gen vox*dvname* = vo * *dvname*
where *dvname* is the name of each DV. The moderation analysis is then run with:
interflex correct i.vo *dvname* vox*dvname*, type(linear) vce(robust)
This checks for linear interactions (Hainmueller et al., 2019). Dropping type(linear) from the code instead produces a Wald test for a non-linear interaction, which will be run for every DV.
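The per-DV steps above can be sketched as a single loop. This is an illustrative sketch only: the DV names (dv1, dv2, dv3) are placeholders, not the actual variable names in our dataset.

```stata
* Hypothetical sketch: loop over distractor-task DVs
* (dv1-dv3 are placeholder names for the actual DVs)
foreach dv in dv1 dv2 dv3 {
    * generate the interaction term for this DV
    gen vox`dv' = vo * `dv'
    * test for a linear interaction
    interflex correct i.vo `dv' vox`dv', type(linear) vce(robust)
    * Wald test for a non-linear interaction
    interflex correct i.vo `dv' vox`dv', vce(robust)
}
```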
Next, we will apply the same exclusion criteria used in the VO RRR (Alogna et al., 2014). Finally, we will check whether the VO effect replicates when participants are excluded only for failing the 'seriousness' check.
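The final check can be sketched as below. The indicator variable serious_check is a placeholder name, assumed to be coded 1 for participants who passed the seriousness check and 0 otherwise.

```stata
* Hypothetical sketch: re-run the main model excluding only
* participants who failed the seriousness check
* (serious_check is an assumed variable name, 1 = passed)
regress correct i.vo if serious_check == 1, vce(robust)
```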
**References**
Alogna, V. K., Attaya, M. K., Aucoin, P., Bahník, Š., Birch, S., Birt, A. R., ... & Buswell, K. (2014). Registered replication report: Schooler and Engstler-Schooler (1990). *Perspectives on Psychological Science*, 9(5), 556-578.
Hainmueller, J., Mummolo, J., & Xu, Y. (2019). How much should we trust estimates from multiplicative interaction models? Simple tools to improve empirical practice. *Political Analysis*, 27(2), 163-192.