Whilst the processing of visual information proceeds in a forward hierarchy, advancing from simple representations in primary visual cortex to the coding of more complex object-level representations further along the ventral pathway, it has been proposed that this automatic and implicit processing is accessed by conscious and deliberate cognition in a reverse hierarchy (Hochstein and Ahissar, 2002). Evidence for this theory is largely drawn from studies of attention: in visual search, for example, single features ‘pop out’, whilst conjunction searches take longer. Given that metacognition is also proposed to rely on explicit vision, the evidence available to metacognitive scrutiny should likewise proceed in a reverse hierarchy, with higher-level visual representations more immediately available for metacognitive decisions than lower-level ones. Performance in metacognitive tasks on high-level visual decisions should therefore be superior to performance on low-level visual decisions. Here we test this hypothesis by directly comparing metacognitive performance on high- and low-level visual decisions within observers.
This experiment extends the initial version by using online data collection to obtain enough trials to adequately compare the ‘boost’ in confidence evidence across the high- and low-level tasks.
**Methods**
**Participants**
100 participants will be recruited through the Prolific Academic online platform. Participants whose performance does not exceed chance in either of the Type 1 (perceptual) tasks will be removed from further analysis. Removed participants may be replaced if there are insufficient data to estimate the boost parameter in the confidence model. Participants will be required to speak English. Prior to commencing the experiment, participants will be shown information about the experiment and asked to consent. Ethical approval for this study was granted by the local ethics committee (CER-U-Paris No. 2020-33).
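The above-chance criterion could be implemented, for example, as a one-sided exact binomial test of accuracy against chance. The alpha level and the specific test in the sketch below are illustrative assumptions; the preregistration does not fix the statistical criterion.

```python
from math import comb

def above_chance(n_correct, n_trials, p_chance=0.5, alpha=0.05):
    """One-sided exact binomial test: is accuracy reliably above chance?

    Hypothetical helper: the alpha level and choice of test are
    assumptions, not values specified in the preregistration.
    """
    # P(X >= n_correct) under the chance binomial distribution
    p = sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
            for k in range(n_correct, n_trials + 1))
    return p < alpha

# e.g. 184/334 correct (about 55%) passes; exactly chance-level does not
print(above_chance(184, 334), above_chance(167, 334))
```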
**Apparatus and Stimuli**
Stimuli will be presented using the PsychoPy library (Peirce, 2007; 2009; Peirce and MacAskill, 2018; Peirce et al., 2019) in Python, hosted on Pavlovia.org. The experiment will be conducted online, where each observer uses their own computer and is free to complete the experiment as they please (e.g. sitting closer to or further from the screen). The face will be female with cropped hair and a neutral expression, created with Daz software (http://www.daz3d.com/). The face will be mirrored about its vertical midline to control for any artifacts that might bias observers based on asymmetry. Gaze direction and iris contrast will be controlled by removing the original eyes from the face (using Gimp software) and replacing them with realistic counterparts generated in Matlab. The size of the face images will be adjusted relative to the reported size of the participant's monitor, based on an estimate obtained by asking the participant to match the size of a standard credit card on the screen at the beginning of the experiment. The relative luminance scale of the monitor will also be estimated by asking participants to indicate which grey-level differences they can see against both black and white.
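The credit-card procedure works because a standard (ISO/IEC 7810 ID-1) card is 85.6 mm wide, so the matched on-screen width directly yields the monitor's pixel density. A minimal sketch of the implied scaling logic follows; the function names and the 10 cm target face width are hypothetical, not values from the protocol.

```python
# A standard ISO/IEC 7810 ID-1 card (credit card) is 85.6 mm wide.
CARD_WIDTH_MM = 85.6

def pixels_per_cm(card_width_px):
    """Pixel density implied by the on-screen width the participant
    matched to their physical card."""
    return card_width_px / (CARD_WIDTH_MM / 10.0)

def face_size_px(face_width_cm, card_width_px):
    """Scale the face image to a fixed physical width on this monitor.

    face_width_cm is a hypothetical target size, not the protocol's value.
    """
    return round(face_width_cm * pixels_per_cm(card_width_px))

# e.g. a participant matched the card at 324 px; request a 10 cm wide face
print(face_size_px(10.0, 324))
```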
**Procedure**
Observers will complete two perceptual tasks over the same set of stimuli. Task order will be counterbalanced across participants. The ‘Low level’ task will require participants to discriminate the contrast of the eyes: observers will report whether the left eye or the right eye was ‘darker’. The ‘High level’ task will require participants to discriminate the direction of gaze of the eyes with respect to themselves: observers will report whether gaze was directed to the left or the right of them. In both tasks observers will respond using the left and right arrow keys. Both tasks will implement the method of constant stimuli, with gaze directions from -5° to 5° in steps of 1.25° (degrees rotated from direct gaze) and contrast levels from -16% to 16% in steps of 4% (difference in contrast between the left and right eye). In the ‘High level’ task the contrast of the two eyes will be equal, and in the ‘Low level’ task gaze will be direct. Both tasks also implement the same metacognitive decision: a 2AFC confidence judgement, wherein observers decide in which of two consecutive trials their response was more likely to be correct. Metacognitive responses will be entered by pressing ‘1’ for the first trial and ‘2’ for the second. As pairs of trials of similar perceptual difficulty are more informative for the analysis of confidence, the order and quantity of trials will be specified such that observers complete more pairs of the same or similar difficulty than pairs of very different difficulty. There will be a total of 334 trials, plus nine practice trials, in each task. Over 100 participants, this makes 16,700 pairs of trials for confidence judgements per task. Simulations indicate that at least 10,000 pairs of trials are required to estimate the confidence boost in the model, though this depends on the parameter values (Mamassian and de Gardelle, in prep).
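The pairing scheme could look something like the sketch below, which generates the specified stimulus levels and draws confidence pairs biased toward similar difficulty, with difficulty taken as distance from the category boundary. The specific weighting rule (rejecting pairs whose difficulty ranks are far apart) is an assumption for illustration, not the preregistered scheme.

```python
import random

# Stimulus levels as specified (Matlab-style a:step:b notation):
# gaze from -5 deg to 5 deg in 1.25 deg steps, contrast difference
# from -16% to 16% in 4% steps -- nine levels each.
gaze_levels = [-5 + 1.25 * i for i in range(9)]
contrast_levels = [-16 + 4 * i for i in range(9)]

def make_pairs(levels, n_pairs, max_rank_gap=1, rng=None):
    """Draw confidence pairs biased toward similar difficulty.

    Levels are ranked by |level| (distance from the boundary, a proxy
    for difficulty); candidate pairs whose ranks differ by more than
    max_rank_gap are rejected and resampled. This weighting rule is an
    illustrative assumption, not the preregistered one.
    """
    rng = rng or random.Random(0)
    ranked = sorted(levels, key=abs)
    pairs = []
    while len(pairs) < n_pairs:
        a, b = rng.sample(range(len(ranked)), 2)
        if abs(a - b) <= max_rank_gap:
            pairs.append((ranked[a], ranked[b]))
    return pairs

# 334 trials per task -> 167 confidence pairs
pairs = make_pairs(gaze_levels, 167)
```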
**Analysis**
For each observer, sensitivity to the stimuli in the high- and low-level tasks will be estimated by fitting a psychometric function. Data will then be aggregated across observers, where the stimulus values are scaled by the standard deviation of each observer’s psychometric function. This aggregated data will then be used to estimate the contributions of additional noise and confidence boost to the confidence judgements, using the Confidence Forced Choice toolbox (Mamassian and de Gardelle, in prep), implemented in Matlab. The contribution of these variables will be compared across tasks by bootstrapping across participants.
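As an illustration of the first analysis step, the sketch below fits a cumulative Gaussian psychometric function by maximum likelihood (a coarse grid search, to stay self-contained) and rescales the stimulus axis by the fitted standard deviation. The actual fitting will use the Confidence Forced Choice toolbox, so this is only a stand-in; the response counts and grid ranges are hypothetical.

```python
from math import erf, log, sqrt

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def fit_psychometric(stims, n_right, n_total):
    """Maximum-likelihood fit of (mu, sigma) by a coarse grid search.

    A self-contained stand-in for the toolbox's fitting routine; the
    grid ranges are arbitrary assumptions.
    """
    best_ll, best_mu, best_sigma = None, None, None
    for mu in [-2.0 + 0.1 * i for i in range(41)]:       # -2 .. 2
        for sigma in [0.25 * i for i in range(1, 41)]:   # 0.25 .. 10
            ll = 0.0
            for x, r, n in zip(stims, n_right, n_total):
                p = min(max(cum_gauss(x, mu, sigma), 1e-6), 1 - 1e-6)
                ll += r * log(p) + (n - r) * log(1.0 - p)
            if best_ll is None or ll > best_ll:
                best_ll, best_mu, best_sigma = ll, mu, sigma
    return best_mu, best_sigma

# Hypothetical counts of 'rightward' responses at each gaze direction
stims = [-5.0, -2.5, 0.0, 2.5, 5.0]
mu, sigma = fit_psychometric(stims, [2, 8, 19, 30, 36], [38] * 5)
scaled = [x / sigma for x in stims]   # stimulus axis in observer sd units
```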
We hypothesise that confidence efficiency will be greater in the ‘High level’ task than in the ‘Low level’ task: as metacognitive evidence must reach further down the hierarchy in the ‘Low level’ task, it incurs more noise. Confidence efficiency is determined by the relative contribution of noise and boost to the confidence evidence. We will therefore compare these parameters across the high- and low-level tasks by bootstrapping over participants. If the null hypothesis cannot be rejected, Bayesian statistics will be used to assess the evidence for the null against the alternative.
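The planned comparison could be sketched as a paired percentile bootstrap over participants, as below. The per-participant efficiency values are hypothetical, and the 95% interval is an assumed criterion rather than one fixed by the preregistration.

```python
import random
from statistics import mean

def bootstrap_diff(high_vals, low_vals, n_boot=2000, rng=None):
    """Paired percentile bootstrap over participants.

    Resamples participants with replacement and returns the mean
    high-minus-low difference with a 95% percentile interval.
    """
    rng = rng or random.Random(1)
    n = len(high_vals)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(mean(high_vals[i] - low_vals[i] for i in idx))
    diffs.sort()
    return mean(diffs), (diffs[int(0.025 * n_boot)],
                         diffs[int(0.975 * n_boot) - 1])

# Hypothetical per-participant confidence efficiencies in each task
high = [0.62, 0.55, 0.70, 0.58, 0.66, 0.61, 0.59, 0.64]
low = [0.48, 0.51, 0.60, 0.42, 0.55, 0.50, 0.47, 0.52]
m, (ci_lo, ci_hi) = bootstrap_diff(high, low)
# an interval excluding zero would indicate a reliable task difference
```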
**References**
Mamassian, P., & de Gardelle, V. (in prep.). Modelling confidence forced choice.
Peirce, J. W., Gray, J. R., Simpson, S., MacAskill, M. R., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. (2019). PsychoPy2: experiments in behavior made easy. *Behavior Research Methods*. doi:10.3758/s13428-018-01193-y
Peirce, J. W., & MacAskill, M. R. (2018). *Building Experiments in PsychoPy.* London: Sage.
Peirce, J. W. (2009). Generating stimuli for neuroscience using PsychoPy. *Frontiers in Neuroinformatics*, 2 (10), 1-8. doi:10.3389/neuro.11.010.2008
Peirce, J. W. (2007). PsychoPy - Psychophysics software in Python. *Journal of Neuroscience Methods*, 162 (1-2), 8-13. doi:10.1016/j.jneumeth.2006.11.017