This is a follow-up experiment to the Reverse Hierarchy project. Previous findings suggest an effect of the time from stimulus offset to response cue on confidence efficiency. This experiment will test the timing effect within subjects, and compare high- and low-level tasks between subject groups. The task design is the same as the 'no-mask' condition from the project 'Masking metacognitive boost', with the short response cue duration tested in 'Fork of Applying the Reverse Hierarchy to metacognition'.

**Methods**

**Participants**

200 participants will be recruited through the Prolific Academic online platform: 100 will complete the high-level task, and 100 the low-level task. Participants whose performance fails to rise above chance in the T1 perceptual task will be removed from further analysis. Removed participants may be replaced if there are insufficient data to estimate the boost parameter in the confidence model. Participants will be required to speak English. Prior to commencing the experiment, participants will be shown information about the experiment and asked to consent. Ethical approval for this study was granted by the local ethics committee (CER-U-Paris No. 2020-33).

**Apparatus and Stimuli**

Stimuli will be presented using the PsychoPy library (Peirce, 2007; 2009; Peirce and MacAskill, 2018; Peirce et al., 2019) in Python, hosted on Pavlovia.org. The experiment will be conducted online, where each observer uses their own computer and is free to complete the experiment as they please (e.g. sitting closer to or further from the screen). The face will be female, with cropped hair and a neutral expression, created with Daz software (http://www.daz3d.com/). The face will be mirrored vertically to control for any artifacts that might bias observers based on asymmetry.
Iris contrast and gaze direction will be controlled by removing the original eyes from the face (using Gimp software) and replacing them with realistic counterparts generated in Matlab. The size of the face images will be adjusted relative to the reported sizes of the participants' monitors, based on an estimate of the size of a standard credit card on the screen at the beginning of the experiment. The approximate relative luminance scale of each monitor will also be recorded by asking participants to indicate which grey-level differences they can see against both black and white.

**Procedure**

Participants will complete two conditions in separate blocks over the same set of stimuli. Condition order will be counterbalanced across participants. The low-level perceptual task will require participants to discriminate the contrast of the eyes: observers will report whether the left eye or the right eye was 'darker'. The high-level perceptual task will require participants to discriminate the direction of gaze of the eyes: observers will report whether the eyes were looking to the left or the right of them. Participants will respond using the left and right arrow keys. Both tasks and conditions will implement the method of constant stimuli, with contrast levels of -20%:5%:20% (difference in contrast between left and right eye) and gaze directions of -7:1.75:7 degrees of rotation. The stimuli for the low-level task will have direct gaze, and the stimuli for the high-level task will have equal iris contrast. Both tasks and conditions will also implement the same metacognitive decision: a 2AFC confidence judgement, wherein observers decide in which of two consecutive trials their response was more likely to be correct. Metacognitive responses will be entered by pressing '1' for the first trial and '2' for the second.
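The method-of-constant-stimuli levels described above can be written out explicitly. The following is a minimal sketch (variable names are ours), assuming the notation -20%:5%:20% and -7:1.75:7 denotes start:step:end ranges:

```python
import numpy as np

# Low-level task: signed iris-contrast difference (left minus right eye),
# -20% to +20% in 5% steps -> nine levels, symmetric around zero.
contrast_levels = np.linspace(-20, 20, 9)

# High-level task: gaze rotation, -7 to +7 degrees in 1.75-degree steps
# -> nine levels, symmetric around direct gaze (0 degrees).
gaze_levels = np.linspace(-7, 7, 9)
```

Each task thus has nine signed stimulus levels, with negative values coding 'left', positive values 'right', and the midpoint (0) carrying no signal.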
As trials of similar perceptual difficulty are more informative for the analysis of confidence, the order and quantity of trials will be specified such that observers complete more pairs of trials of the same or similar difficulty than of very different difficulty. The conditions will differ only in the time from stimulus offset to response cue. In the 'short' condition, the stimulus will be presented for 400 ms, followed by a 100 ms blank, followed by the response cue. In the 'long' condition, the stimulus will be presented for 400 ms, followed by an 800 ms blank before the response cue. There will be a total of 334 trials, plus six practice trials, in each condition. Over 100 participants, this makes 16,700 pairs of trials for confidence judgements. Simulations indicate that at least 10,000 pairs of trials are required to estimate confidence boost in the model, though this depends on the parameter values (Mamassian and de Gardelle, 2021).

**Analysis**

For each observer, sensitivity to the stimuli in the 'short' and 'long' conditions will be estimated by fitting a psychometric function. Data will then be aggregated across observers, with the stimulus values scaled by the standard deviation of each observer's psychometric function, relative to their response criterion. These aggregated data will then be used to estimate confidence efficiency and the contributions of additional noise and confidence boost to the confidence judgements, using the Confidence Forced Choice toolbox (Mamassian and de Gardelle, 2021), implemented in Matlab. The contribution of these variables will be compared across conditions by bootstrapping across participants. We hypothesise an interaction between Task and Condition: confidence efficiency should increase with time to response cue in the high-level task, and decrease in the low-level task. Bayesian statistics will be used to assess the evidence for the null against the alternative.

**References**

Mamassian, P., & de Gardelle, V. (2021).
Modelling perceptual confidence and the confidence forced choice paradigm. Psychological Review. https://doi.org/10.1037/rev0000312

Peirce, J. W., Gray, J. R., Simpson, S., MacAskill, M. R., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. (2019). PsychoPy2: experiments in behavior made easy. Behavior Research Methods. https://doi.org/10.3758/s13428-018-01193-y

Peirce, J. W., & MacAskill, M. R. (2018). Building Experiments in PsychoPy. London: Sage.

Peirce, J. W. (2009). Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2(10), 1-8. https://doi.org/10.3389/neuro.11.010.2008

Peirce, J. W. (2007). PsychoPy - Psychophysics software in Python. Journal of Neuroscience Methods, 162(1-2), 8-13. https://doi.org/10.1016/j.jneumeth.2006.11.017
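The per-observer scaling step described in the Analysis section can be illustrated with a short sketch. This is not the Confidence Forced Choice toolbox itself: it assumes a cumulative-Gaussian psychometric model, uses made-up response proportions, and fits with SciPy, purely to show how stimulus values can be expressed in units of each observer's sensory noise relative to their response criterion before aggregation:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Cumulative-Gaussian psychometric function: probability of a
    # 'rightward' response as a function of signed stimulus level.
    # mu = response criterion, sigma = sensory-noise standard deviation.
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical proportions of 'rightward' responses for one observer
# at the nine gaze levels (-7:1.75:7 degrees) -- illustrative only.
levels = np.linspace(-7, 7, 9)
p_resp = np.array([0.03, 0.08, 0.20, 0.36, 0.52, 0.68, 0.82, 0.93, 0.97])

(mu, sigma), _ = curve_fit(psychometric, levels, p_resp, p0=[0.0, 2.0])

# Express each stimulus level in units of this observer's sensory noise,
# relative to their criterion, so data can be pooled across observers.
scaled_levels = (levels - mu) / sigma
```

In the actual analysis, the equivalent normalisation would be applied per observer and per condition before the aggregated data are passed to the toolbox.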