Introduction
------------

Previous research has separately analyzed the stability of: (a) the classic attentional network components (i.e., phasic alertness, orienting, and executive control), measured in ten repeated sessions with the ANT and the ANTI task (Ishigami & Klein, 2010); (b) the executive vigilance component, measured in two repeated sessions with an adapted CPT, i.e., a signal-detection paradigm (Shaked et al., 2020); and (c) the arousal vigilance component, measured in ten repeated sessions with the PVT, i.e., a single reaction time paradigm (Basner et al., 2018). The ANTI-Vea has proven to be a suitable tool for simultaneously measuring the classic attentional network functions and the vigilance components, both under typical lab conditions and in online sessions (https://www.ugr.es/~neurocog/ANTI/). Importantly, the online ANTI-Vea has shown split-half reliability scores similar to those observed with the standard version of the task (Luna, Roca, Martín-Arévalo, & Lupiáñez, 2020).

Objectives
----------

The main goal of the present study is to analyze the stability of the attentional and vigilance scores of the online ANTI-Vea across ten repeated sessions.

Hypotheses
----------

Based on previous research, there could be a small practice effect on the orienting and cognitive control scores (Ishigami & Klein, 2010), as well as on some overall measures of the arousal vigilance subtask (Basner et al., 2018). However, these practice effects are more likely to be observed between the first and the second experimental session than between the second and the tenth session. Moreover, given the extended practice involved in the ANTI-Vea, we expect the practice effect to be mild even between the first and the second session. Most importantly, previous research has not examined changes in the vigilance decrement (i.e., the decline in performance across time on task) across sessions. We anticipate that, although practice might enhance some of the overall scores of the online ANTI-Vea, the vigilance decrement would be largely independent of practice and would therefore be observed with a similar magnitude across sessions.

Sample size
-----------

We plan to gather data from 20 participants. This sample size is similar to (or even larger than) those of previous studies that separately investigated the stability of attentional and vigilance scores.

Procedure and Design
--------------------

Participants will complete ten repeated sessions of the online ANTI-Vea (https://www.ugr.es/~neurocog/ANTI/). Each session will comprise 6 experimental blocks of 80 randomized trials: 48 ANTI trials (suitable for measuring the independence and interactions of alertness -no tone/tone-, orienting -invalid/no cue/valid-, and executive control -congruent/incongruent-, as in the ANTI task), 16 EV trials (suitable for measuring the executive component of vigilance, as in a signal-detection paradigm), and 16 AV trials (similar to the PVT, suitable for measuring the arousal component of vigilance). Full practice blocks will be completed only in the first session. From the second session onward, only the instructions and 40 practice trials without visual feedback will be given before the experimental blocks. Further details of the experimental design of the online ANTI-Vea can be found in Luna et al. (2020).
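As a quick check on the trial composition described above, the following minimal Python sketch computes the per-session trial counts implied by the design (6 blocks of 80 trials, with 48 ANTI, 16 EV, and 16 AV trials per block); it involves no assumptions beyond those numbers.

```python
# Per-session trial counts implied by the design described above:
# 6 experimental blocks x 80 randomized trials (48 ANTI, 16 EV, 16 AV per block).
BLOCKS_PER_SESSION = 6
TRIALS_PER_BLOCK = {"ANTI": 48, "EV": 16, "AV": 16}

per_session = {kind: n * BLOCKS_PER_SESSION for kind, n in TRIALS_PER_BLOCK.items()}
print(per_session)                # {'ANTI': 288, 'EV': 96, 'AV': 96}
print(sum(per_session.values()))  # 480 experimental trials per session
```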
Data analysis plan
------------------

1) The standard analysis of the ANTI-Vea will be conducted (Luna, Barttfeld, Martín-Arévalo, & Lupiáñez, 2021), including session (10 levels) as a within-participant factor in all analyses (an illustrative sketch is provided below, after the reference list).

2) To determine the presence of a practice effect across sessions, Bayesian analyses will be conducted excluding the first N sessions, in order to determine in which sessions the possible practice effect is present or absent (see the corresponding sketch below).

3) Split-half reliability scores will be analyzed in two ways: (a) including data from all sessions, and (b) by session, as in Ishigami and Klein (2010). Split-half reliability scores will be computed in the same way as in Luna et al. (2020) (see the corresponding sketch below).

References
----------

Basner, M., Hermosillo, E., Nasrini, J., McGuire, S., Saxena, S., Moore, T. M., Gur, R. C., & Dinges, D. F. (2018). Repeated administration effects on Psychomotor Vigilance Test performance. Sleep, 41(1). https://doi.org/10.1093/sleep/zsx187

Ishigami, Y., & Klein, R. M. (2010). Repeated measurement of the components of attention using two versions of the Attention Network Test (ANT): Stability, isolability, robustness, and reliability. Journal of Neuroscience Methods, 190(1), 117–128. https://doi.org/10.1016/j.jneumeth.2010.04.019

Luna, F. G., Barttfeld, P., Martín-Arévalo, E., & Lupiáñez, J. (2021). The ANTI-Vea task: Analyzing the executive and arousal vigilance decrements while measuring the three attentional networks. Psicológica, 42(1), 1–26. https://doi.org/10.2478/psicolj-2021-0001

Luna, F. G., Roca, J., Martín-Arévalo, E., & Lupiáñez, J. (2021). Measuring attention and vigilance in the laboratory vs. online: The split-half reliability of the ANTI-Vea. Behavior Research Methods, 53(3), 1124–1147. https://doi.org/10.3758/s13428-020-01483-4

Shaked, D., Faulkner, L. M. D., Tolle, K., Wendell, C. R., Waldstein, S. R., & Spencer, R. J. (2020). Reliability and validity of the Conners’ Continuous Performance Test. Applied Neuropsychology: Adult, 27(5), 478–487. https://doi.org/10.1080/23279095.2019.1570199
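For point 1 of the data analysis plan, the following is a minimal sketch (not the full preregistered analysis) of how session can enter a standard RT analysis as a within-participant factor, using the repeated-measures ANOVA from statsmodels. The file name and the column names (participant, session, congruency, rt) are hypothetical placeholders; the actual ANTI-Vea analysis includes additional factors (e.g., warning signal and cue) and dependent variables.

```python
# Sketch: adding session (10 levels) as a within-participant factor to an RT analysis.
# Assumes long-format data with one row per trial and hypothetical column names.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("anti_vea_correct_rt.csv")  # hypothetical trial-level file (correct trials only)

# One mean RT per participant x session x congruency cell, as required by AnovaRM.
cell_means = (
    df.groupby(["participant", "session", "congruency"], as_index=False)["rt"].mean()
)

# Repeated-measures ANOVA: congruency (2) x session (10), both within-participant.
anova = AnovaRM(
    data=cell_means, depvar="rt", subject="participant",
    within=["congruency", "session"],
).fit()
print(anova.summary())
```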
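For point 2, the sketch below shows one possible way to quantify evidence for or against a practice effect with Bayes factors, here using pingouin's default JZS paired comparison between selected sessions. This is only an assumed illustration: the preregistered Bayesian analyses may instead rely on other models or software (e.g., a Bayesian repeated-measures ANOVA), and the file and column names are hypothetical.

```python
# Sketch: Bayes factors for a practice effect, comparing an overall score between sessions.
# File and column names (participant, session, score) are hypothetical placeholders.
import pandas as pd
import pingouin as pg

scores = pd.read_csv("anti_vea_overall_scores.csv")
wide = scores.pivot(index="participant", columns="session", values="score")  # sessions 1-10 as columns

# Evidence for a change between the first and second sessions (where practice is most likely)...
bf_1_2 = pg.ttest(wide[1], wide[2], paired=True)["BF10"].iloc[0]

# ...and between the second and tenth sessions, i.e., excluding the first session.
bf_2_10 = pg.ttest(wide[2], wide[10], paired=True)["BF10"].iloc[0]

print(f"BF10 session 1 vs. 2: {bf_1_2}; BF10 session 2 vs. 10: {bf_2_10}")
```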
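For point 3, the following is a minimal sketch of a split-half reliability computation with the Spearman-Brown correction, r_sb = 2r / (1 + r). The even/odd split and the mean RT used as the score are simplifying placeholders; the actual splitting procedure and scores follow Luna et al. (2020).

```python
# Sketch: split-half reliability of an ANTI-Vea score, Spearman-Brown corrected.
# Assumes hypothetical trial-level data with participant, trial (number), and rt columns.
import numpy as np
import pandas as pd

trials = pd.read_csv("anti_vea_trials.csv")  # hypothetical trial-level file

# Assign each trial to a half (here simply even vs. odd trial numbers).
trials["half"] = np.where(trials["trial"] % 2 == 0, "even", "odd")

# One score per participant and half (mean RT used as a placeholder score).
halves = trials.pivot_table(index="participant", columns="half", values="rt", aggfunc="mean")

# Pearson correlation between the halves, then the Spearman-Brown correction.
r = halves["even"].corr(halves["odd"])
r_sb = (2 * r) / (1 + r)
print(f"split-half r = {r:.3f}, Spearman-Brown corrected = {r_sb:.3f}")
```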