**Aims**

This study aims to provide a highly powered test of the association between media multitasking and performance on a change detection task in which two targets are shown together with 0, 2, 4, or 6 distractors. Using this task, Ophir, Nass, and Wagner (2009) found an interaction of distractor set size and group (high vs. low media multitaskers) and concluded that media multitasking is associated with increased susceptibility to distraction. Using the same task, Uncapher, Thieu, and Wagner (2015) subsequently found only a main effect of group and concluded that high media multitaskers may suffer from "continual distraction not under experimental control" (p. 7). By conducting a highly powered exact replication, we aim to determine whether media multitasking is truly associated with performance on a change detection task and, if so, whether this association stems from increased susceptibility to the distractors in the task itself or from increased susceptibility to internal distraction. To examine the latter possibility in closer detail, the experiment will include an experience sampling method aimed at measuring task-unrelated thought at different moments during the change detection task (e.g., Schooler et al., 2011).

**Hypotheses**

In light of previous findings, we arrive at three possible outcomes for the change detection task:

1. An interaction effect of Group and Distractor Set Size, with high media multitaskers showing a stronger decline in performance with increasing set size. This finding would replicate the results of Ophir et al.
2. A main effect of Group but no interaction, with high media multitaskers showing worse performance regardless of distractor set size. This finding would replicate the results of Uncapher et al.
3. No effects involving Group, with performance not differing between high and low media multitaskers. This finding would suggest that the effects of Ophir et al. and Uncapher et al. were spurious effects caused by the use of relatively small samples of participants.

For the experience sampling measurement, the following questions will be addressed in analyzing the data:

1. Is there a difference in the frequency of task-unrelated thought between high and low media multitaskers?
2. Is this difference in task-unrelated thought related to performance on the change detection task?
3. Is performance on the change detection task influenced by task-unrelated thought?

**Required Sample Size for 95% Replication Power**

The required sample sizes for obtaining 80% and 95% replication power were calculated using the G*Power 3.1 software (Faul, Erdfelder, Lang, & Buchner, 2007). We calculated two sample sizes for the ANOVA comparisons. For the Group × Distractor Set Size interaction, Ophir et al. reported *F*(3, 120) = 4.61. Based on this information, we calculated partial η² using the equation specified by Cohen (1988) and Lakens (2013):

$$\eta_p^2 = \frac{F \times df_{\text{effect}}}{F \times df_{\text{effect}} + df_{\text{error}}}$$

This value was then transformed into a standardized mean difference score using (Cohen, 1988):

$$d = \sqrt{\frac{\eta_p^2}{1 - \eta_p^2} \times 2k}$$

The resulting effect size of .339 was entered into G*Power with these settings: F tests, ANOVA repeated measures, within-between interaction, with number of groups equal to two, number of measurements equal to four, correlation between repeated measures equal to .42, and the nonsphericity correction equal to one. These settings yielded a required sample of 8 participants per group for 80% power and 11 participants per group for 95% power.
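As an arithmetic check of this conversion, the Python sketch below recovers partial η² from the reported *F* statistics and converts it to Cohen's *f*, the effect-size metric that G*Power's F-test modules take as input; the function names are ours, and note that the reported values of .339 (and .268 for the main effect discussed next) correspond to *f* = √(η²/(1 − η²)).

```python
import math

def partial_eta_squared(F, df_effect, df_error):
    """Partial eta squared recovered from a reported F statistic (Lakens, 2013)."""
    return (F * df_effect) / (F * df_effect + df_error)

def cohens_f(eta2):
    """Cohen's f, the effect-size metric entered into G*Power for F tests."""
    return math.sqrt(eta2 / (1 - eta2))

# Group x Distractor Set Size interaction, Ophir et al. (2009): F(3, 120) = 4.61
eta2 = partial_eta_squared(4.61, df_effect=3, df_error=120)
print(f"eta_p^2 = {eta2:.3f}, f = {cohens_f(eta2):.3f}")  # eta_p^2 = 0.103, f = 0.339

# Main effect of group, Uncapher et al. (2015): F(1, 68) = 4.88
eta2 = partial_eta_squared(4.88, df_effect=1, df_error=68)
print(f"eta_p^2 = {eta2:.3f}, f = {cohens_f(eta2):.3f}")  # eta_p^2 = 0.067, f = 0.268
```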
For the main effect of group, reported to be statistically significant with *F*(1, 68) = 4.88 by Uncapher et al., we calculated an effect size *f* of .268. This number was entered into G*Power with these settings: F tests, ANOVA repeated measures, between factors, with number of groups equal to two, number of measurements equal to four, correlation between repeated measures equal to .42, and the nonsphericity correction equal to one. These settings yielded a required sample of 32 participants per group for 80% power and 53 participants per group for 95% power. The correlations among repeated measures were calculated from Wiradhany and Nieuwenstein's (in prep.) dataset, which consisted of two earlier exact replication attempts that used relatively small samples (N = 23 and N = 29). The correlations in these two studies ranged from .42 to .63. We took a conservative approach by entering the lowest of these correlations.

**Procedure**

Data collection will be done using the OpenSesame software (Mathôt, Schreij, & Theeuwes, 2012). We aim to collect data from about 100 participants in a room equipped with 10 computer set-ups. The remaining data will be collected by students in a research practicum course. These students will collect data using their own computers, and we will compare the data collected in the computer room with that collected by the students to verify the consistency of outcomes.

**Data Analyses**

We will conduct both null-hypothesis significance tests and Bayes factor analyses (Rouder, Morey, Speckman, & Province, 2012).

**Results**

*Change detection task.* The repeated-measures ANOVA for the *K* change detection task, with MMI group as a between-subjects factor and number of distractors as a within-subjects factor, showed a main effect of distractor set size, *F*(3, 369) = 6.72, *p* < .001, partial *η*² = .052, but no main effect of group, *F*(1, 123) = 0.038, *p* = .85, *BF*01 = 3.2, and no Group × Distractor Set Size interaction, *F*(3, 369) = 0.61, *p* = .61, *BF*01 = 24.4. Furthermore, we found no correlation between MMI and average *K* performance across all 261 participants, *r* = .019, *p* = .76, *BF*01 = 16.5, one-tailed.

*Experience sampling.* High media multitaskers (HMMs) reported being off-task more often than low media multitaskers (LMMs; 10.82% vs. 4.72%, respectively) and more often reported being unaware of where their attention was focused (11.77% vs. 7.29%, respectively). The frequency of off-task reports was negatively correlated with average *K* performance, *r* = -.21, *p* < .001, and positively correlated with MMI score, *r* = .17, *p* = .005. However, in a moderation analysis using a linear regression model with mean *K* as the outcome variable and off-task reports and MMI as predictors, we observed no off-task reports × MMI interaction, *t* = 1.89, *p* = .58.
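The moderation analysis described above amounts to an ordinary least-squares model with an interaction term. The sketch below, using statsmodels' formula interface, is a minimal illustration under assumed inputs; the file and column names (change_detection_summary.csv, k_mean, offtask, mmi) are hypothetical placeholders, not the project's actual data files.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: k_mean (average K), offtask (proportion of off-task
# probe responses), mmi (media multitasking index score).
df = pd.read_csv("change_detection_summary.csv")  # placeholder file name

# Moderation test: does the association between off-task reports and K
# depend on MMI? 'offtask * mmi' expands to offtask + mmi + offtask:mmi.
model = smf.ols("k_mean ~ offtask * mmi", data=df).fit()
print(model.summary())  # the offtask:mmi row gives the interaction t and p
```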
**Original citations:**

Cohen, J. (1988). *Statistical power analysis for the behavioral sciences* (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Cohen, J. (1992). A power primer. *Psychological Bulletin, 112*(1), 155–159. http://doi.org/10.1037/0033-2909.112.1.155

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. *Behavior Research Methods, 39*(2), 175–191. http://doi.org/10.3758/bf03193146

Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. *Frontiers in Psychology, 4*, 1–12. http://doi.org/10.3389/fpsyg.2013.00863

Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. *Behavior Research Methods, 44*(2), 314–324. http://doi.org/10.3758/s13428-011-0168-7

Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. *Proceedings of the National Academy of Sciences of the United States of America, 106*(37), 15583–15587. http://doi.org/10.1073/pnas.0903620106

Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. *Journal of Mathematical Psychology, 56*(5), 356–374. http://doi.org/10.1016/j.jmp.2012.08.001

Schooler, J. W., Smallwood, J., Christoff, K., Handy, T. C., Reichle, E. D., & Sayette, M. A. (2011). Meta-awareness, perceptual decoupling and the wandering mind. *Trends in Cognitive Sciences, 15*, 319–326. http://doi.org/10.1016/j.tics.2011.05.006

Uncapher, M. R., Thieu, M. K., & Wagner, A. D. (2015). Media multitasking and memory: Differences in working memory and long-term memory. *Psychonomic Bulletin & Review*. http://doi.org/10.3758/s13423-015-0907-3