This is a follow-up experiment in the Reverse Hierarchy project. Previous findings suggest that the effect of the time from stimulus offset to response cue on confidence efficiency depends on whether observers are making a ‘high-level’ or a ‘low-level’ perceptual decision. This experiment will attempt to replicate the effect with different stimuli: participants will make either a high-level perceptual decision about the walking direction of a biological motion stimulus (leftward or rightward of directly toward the observer) or a low-level perceptual decision about the rotation of two dots at the centre of the biological motion stimulus (leftward or rightward rotation). The task design is the same as in the project “Testing the effect of response cue timing on high- and low-level metacognition”. Two groups of participants will be recruited, so that the effect of high- vs low-level perceptual decision is tested between subjects, while the effect of time from stimulus offset to response cue is tested within subjects.
**Methods**
**Participants**
200 participants will be recruited through the Prolific Academic online platform: 100 will complete the high-level task and 100 the low-level task. Participants whose performance in the perceptual task does not exceed chance will be excluded from further analysis. Excluded participants may be replaced if there are insufficient data to estimate the parameters of the confidence forced-choice model. Participants will be required to speak English. Prior to commencing the experiment, participants will be shown information about the study and asked for informed consent. Ethical approval for this study was granted by the local ethics committee (CER-U-Paris No. 2020-33).
**Apparatus and Stimuli**
Stimuli will be presented using the PsychoPy Builder (Peirce, 2007, 2009; Peirce and MacAskill, 2018; Peirce et al., 2019), written in Python and translated to JavaScript, hosted on Pavlovia.org. The experiment will be conducted online, so each observer will use their own computer and is free to complete the experiment as they please (e.g. sitting closer to or further from the screen). The stimuli will be biological motion stimuli, which appear as a human figure (neutral gender) walking, defined by the relative movements of 15 dots. The stimuli are pre-rendered as mp4 movies and played on each trial. The size at which the movies are presented will be adjusted relative to the reported size of each participant’s monitor, based on their estimate of the size of a standard credit card on the screen at the beginning of the experiment. An estimate of the relative luminance scale of each monitor will also be recorded by asking participants to indicate which grey-level differences they can see compared to both black and white.
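As an illustration of the size calibration, the sketch below converts the on-screen width that a participant matches to a physical credit card (85.60 mm wide) into a presentation size for the movies. This is a minimal Python sketch under assumed names (`movie_size_px`, `DESIRED_STIM_WIDTH_CM`, the 4:3 aspect ratio are all hypothetical); it is not the actual experiment code, which runs as PsychoJS on Pavlovia.

```python
# Minimal sketch (assumed names) of converting the credit-card calibration into a
# movie size in pixels. The calibration step is assumed to record the on-screen
# width, in pixels, that the participant matched to a physical credit card
# (85.60 mm wide, the ISO/IEC 7810 ID-1 standard).

CREDIT_CARD_WIDTH_CM = 8.56    # physical width of a standard credit card
DESIRED_STIM_WIDTH_CM = 10.0   # hypothetical target width of the movie on screen


def movie_size_px(card_width_px, aspect_ratio=4 / 3):
    """Return an illustrative (width, height) in pixels for the mp4 movie."""
    px_per_cm = card_width_px / CREDIT_CARD_WIDTH_CM
    width_px = DESIRED_STIM_WIDTH_CM * px_per_cm
    return width_px, width_px / aspect_ratio


# e.g. a participant who matched the card at 300 px on screen
print(movie_size_px(300))   # ~ (350.5, 262.9)
```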
**Procedure**
Participants will complete two conditions in separate blocks over the same set of stimuli. Condition order will be counterbalanced across participants. The low-level perceptual task will require participants to discriminate whether two central dots (forming the chest and pelvis) rotate 'leftward' (counter-clockwise) or 'rightward' (clockwise) and back to centre. The high-level perceptual task will require participants to discriminate whether the figure is walking leftward or rightward of directly toward them. Participants will respond using the left and right arrow keys.
Both tasks and conditions will implement the method of constant stimuli, sampling 9 positions along the psychometric function to give an average slope of around 0.5 over the range of stimuli, with ceiling performance at the ends of the range. Based on pilot data, for the low-level task the rotation of the central dots will be displayed at amplitudes of [4, 3, 2, 1, 0] pixels of sinusoidal motion in the clockwise and counter-clockwise directions, returning to the original position within 400 ms. For the high-level task, walking direction will be displayed at [20, 14, 8, 4, 0] degrees to the left and right of directly toward the observer, showing 400 ms of motion. The stimuli for the low-level task will have a walking direction of 0 degrees, and the stimuli for the high-level task will have no central dot rotation. In both tasks, the starting position of the dots within the motion cycle will be pseudo-randomised: for each stimulus, a random starting point is chosen from one of three pre-recorded points, jittered around the beginning, early-middle, and late-middle of the cycle of dot motion.
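As a rough illustration of how the trial list could be assembled from the pre-rendered movies, here is a minimal Python sketch assuming one movie per signed stimulus level and starting point; the level values follow the low-level task, and all names (`build_trial_list`, the file-name pattern) are hypothetical, not the actual experiment code.

```python
# Minimal sketch (hypothetical names) of a trial list for the method of constant
# stimuli, assuming one pre-rendered movie per signed stimulus level and starting
# point. Levels follow the low-level task: pixels of central-dot rotation, with
# negative values for counter-clockwise and positive for clockwise.

import random

LEVELS = [-4, -3, -2, -1, 0, 1, 2, 3, 4]            # 9 positions on the psychometric function
START_POINTS = ["early", "early_mid", "late_mid"]   # three pre-recorded, jittered starting points
N_TRIALS = 334


def build_trial_list(n_trials=N_TRIALS, seed=None):
    rng = random.Random(seed)
    trials = []
    for i in range(n_trials):
        level = rng.choice(LEVELS)      # the actual design may weight levels unevenly
        start = rng.choice(START_POINTS)
        trials.append({
            "trial": i,
            "level": level,
            "start": start,
            "movie": f"biomotion_{level:+d}_{start}.mp4",   # hypothetical file naming
        })
    return trials


print(build_trial_list(3, seed=1))
```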
Both tasks and conditions will also implement the same metacognitive decision: a 2AFC confidence judgement. Observers will be asked to decide in which of two consecutive trials their response was more likely to be correct. Metacognitive responses will be entered by pressing ‘1’ for the first trial and ‘2’ for the second. As trials of similar perceptual difficulty are more important for the analysis of confidence, the order and quantity of trials will be specified such that observers will complete more trials of the same/similar difficulty than trials of very different difficulty.
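One simple way to satisfy the constraint that consecutive trials (and hence confidence pairs) tend to share similar difficulty is sketched below, building on the hypothetical trial-list sketch above: sort trials by absolute stimulus level, pair neighbours, and shuffle the pairs. This is only an illustration of the design constraint, not the actual scheduling procedure.

```python
# Illustrative ordering of trials so that consecutive pairs (which feed the
# confidence forced choice) tend to share similar difficulty: sort by absolute
# stimulus level, pair neighbours, then shuffle the pairs. Builds on the
# hypothetical build_trial_list() sketch above.

import random


def order_for_confidence_pairs(trials, seed=None):
    rng = random.Random(seed)
    by_difficulty = sorted(trials, key=lambda t: abs(t["level"]))
    pairs = [by_difficulty[i:i + 2] for i in range(0, len(by_difficulty), 2)]
    rng.shuffle(pairs)                  # randomise the order of pairs within the block
    return [trial for pair in pairs for trial in pair]


# usage: ordered = order_for_confidence_pairs(build_trial_list(334, seed=1), seed=2)
```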
The within-subject conditions will differ only in the time from stimulus offset to response cue. In the ‘short’ condition, the stimulus will be presented for 400 ms, followed by a 100 ms blank and then the response cue. In the ‘long’ condition, the stimulus will be presented for 400 ms, followed by an 800 ms blank before the response cue. There will be a total of 334 trials (167 confidence pairs), plus six practice trials, in each condition. Over 100 participants, this yields 16,700 pairs of trials for confidence judgements in each condition. Simulations indicate that at least 10,000 pairs of trials are required to estimate the parameters of the confidence forced-choice model, though this depends on the parameter values (Mamassian and de Gardelle, 2021).
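For concreteness, a minimal local-Python sketch of the within-trial timing is given below, assuming a PsychoPy window `win`, a drawable stimulus `stim` (the 400 ms movie), and a response cue `cue`; these objects and `run_trial()` are placeholders, and the online experiment itself runs as PsychoJS, so this is illustrative only.

```python
# Illustrative local-Python trial timing (the online study runs as PsychoJS).
# `win`, `stim`, and `cue` are assumed PsychoPy objects; run_trial() is hypothetical.

from psychopy import core, event

STIM_DURATION = 0.400                               # 400 ms of biological motion
BLANK_DURATION = {"short": 0.100, "long": 0.800}    # blank before the response cue


def run_trial(win, stim, cue, condition):
    clock = core.Clock()
    while clock.getTime() < STIM_DURATION:  # draw the stimulus for 400 ms
        stim.draw()
        win.flip()
    win.flip()                              # blank screen
    core.wait(BLANK_DURATION[condition])
    cue.draw()                              # response cue
    win.flip()
    keys = event.waitKeys(keyList=["left", "right"])
    return keys[0]                          # 'left' or 'right'
```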
**Analysis**
For each observer, sensitivity to the stimuli in the 'short' and 'long' conditions will be estimated by fitting a psychometric function. Data will then be aggregated across observers, with each observer’s stimulus values scaled by the standard deviation of their psychometric function, relative to their response criterion. The aggregated data will then be used to estimate confidence efficiency and the contributions of additional noise and confidence boost to the confidence judgements, using the Confidence Forced-Choice toolbox (Mamassian and de Gardelle, 2021), implemented in Matlab. The contributions of these variables will be compared across conditions by bootstrapping across participants. We hypothesise an interaction between Task and Condition: confidence efficiency should increase with time to response cue in the high-level task and decrease in the low-level task. Bayesian statistics will be used to assess the evidence for the null hypothesis against the alternative.
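A minimal sketch of the per-observer fit and rescaling step, assuming a cumulative Gaussian psychometric function, is given below; the confidence analysis itself uses the Confidence Forced-Choice toolbox in Matlab, so this Python/SciPy version is only illustrative and all names (`psychometric`, `fit_and_rescale`, the example data) are assumptions.

```python
# Illustrative per-observer fit and rescaling, assuming a cumulative Gaussian
# psychometric function: mu is the response criterion (point of subjective
# equality) and sigma the standard deviation; stimulus values are then
# re-expressed in criterion-centred, SD units before aggregation.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm


def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)


def fit_and_rescale(stim_levels, p_rightward):
    (mu, sigma), _ = curve_fit(psychometric, stim_levels, p_rightward, p0=[0.0, 1.0])
    scaled = (np.asarray(stim_levels) - mu) / sigma   # criterion-centred, SD units
    return mu, sigma, scaled


# usage with made-up proportions of 'rightward' responses at the 9 levels
levels = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=float)
p_right = np.array([0.02, 0.05, 0.15, 0.35, 0.50, 0.68, 0.85, 0.95, 0.98])
print(fit_and_rescale(levels, p_right))
```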
**References**
Mamassian, P., & de Gardelle, V. (2021). Modelling perceptual confidence and the confidence forced choice paradigm. Psychological Review. https://doi.org/10.1037/rev0000312
Peirce, J. W., Gray, J. R., Simpson, S., MacAskill, M. R., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods. https://doi.org/10.3758/s13428-018-01193-y
Peirce, J. W., & MacAskill, M. R. (2018). Building Experiments in PsychoPy. London: Sage.
Peirce, J. W. (2009). Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2(10), 1-8. https://doi.org/10.3389/neuro.11.010.2008
Peirce, J. W. (2007). PsychoPy - Psychophysics software in Python. Journal of Neuroscience Methods, 162(1-2), 8-13. https://doi.org/10.1016/j.jneumeth.2006.11.017