This project has been accepted for publication in eLife. A link to the manuscript will be provided soon. The pre-print is available on bioRxiv: https://www.biorxiv.org/content/10.1101/2021.04.08.439033v2.full

**Aims**

In a recent behavioural experiment we explored the relationship between Type-I (perceptual) and Type-II (confidence) decision-making. We found evidence to suggest that Type-I and Type-II decision-making are two partially dissociable processes, with an important interaction: the Type-II decision evidence may act on the process of accumulating Type-I evidence to initiate the Type-I decision, by setting and maintaining the Type-I decision bound. We therefore wish to examine Type-I and Type-II decision evidence signals decoded from EEG around the time that a Type-I decision is made.

**Hypotheses**

1. Type-I and Type-II evidence accumulation are partially dissociable processes. This should be evident from computational modelling, where we expect to observe correlated but systematically different parameters when fitting Type-I and Type-II responses. We further aim to examine the EEG data for a similar signature of this partial dissociation.

2. Type-I evidence accumulation can incur a covert bound, whilst Type-II evidence accumulation continues. This should be evident from computational modelling, where we expect a model with a covert bound to fit Type-I responses, but not Type-II responses, significantly better than a model without one. It should also be evident from the EEG data, where an earlier response-readiness signal should be present in the ‘More’ condition than in the ‘Less’ condition of the ‘Replay’ task. We further aim to explore how this covert bound might be implemented, using the EEG data.

**Methods**

*Participants*

A total of 20 participants will be recruited from the RISC mailing list and by word of mouth.
Participants will be excluded from further analysis if their performance does not rise significantly above chance in the Type-I or Type-II task, or if their EEG data are too noisy for decoding the orientation of the stimuli. Written consent will be requested before beginning the experiment, after a full explanation of the task has been given. Participants will be required to have normal or corrected-to-normal vision. Ethical approval has been granted by the INSERM ethics committee (ID RCB: 2017-A01778-45 Protocol C15-98).

*Materials*

Stimuli will be presented on a 24” BenQ LCD monitor running at 60 Hz with a resolution of 1920x1080 pixels and a mean luminance of 45 cd/m². Stimulus generation and presentation are controlled by MATLAB (Mathworks) and the Psychophysics Toolbox (Brainard, 1997; Kleiner et al., 2007; Pelli, 1997), run on a Dell Precision M4800 laptop. Observers will view the monitor from a distance of 57 cm, with their head supported by a chin rest. EEG data will be collected using a 64-channel BioSemi ActiveTwo system, run on a dedicated Mac laptop (Apple Inc.), with a sample rate of 512 Hz. Electrodes will be placed according to the international 10-20 system.

*Stimuli*

Stimuli will be oriented Gabor patches displayed at 70% contrast, subtending 4 dva, with a spatial frequency of 2 cyc/deg. On each trial the orientations of the presented Gabors will be drawn from one of two circular Gaussian (Von Mises) distributions centred on +/- 45 deg from vertical (henceforth called the blue and orange distributions respectively), with concentration parameter κ = 0.5. On each trial a series of stimuli will be presented at a rate of 3 Hz, with each stimulus presented at full 70% contrast for a variable duration of 50:16.7:83 ms, with a sudden onset followed by an offset ramp of two flips, in which the stimulus contrast is decreased by 50% and 75% before offset.
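For concreteness, the per-trial orientation sampling described above could be sketched as follows (in Python for illustration, though the experiment itself runs in MATLAB; the 180°-periodic wrapping via angle doubling is a common convention for orientation stimuli and is our assumption, not specified above):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_orientations(category, n_stim, kappa=0.5):
    """Draw n_stim orientations (deg from vertical, in [-90, 90]) for one trial.

    category: 'blue' (centred +45 deg) or 'orange' (centred -45 deg).
    Orientation is 180-deg periodic, so we sample a Von Mises on the
    doubled angle and halve it (an assumed convention, not stated above).
    """
    mu = np.deg2rad(45.0 if category == 'blue' else -45.0)
    theta2 = rng.vonmises(2.0 * mu, kappa, size=n_stim)  # doubled angle in [-pi, pi]
    return np.rad2deg(theta2) / 2.0
```

With κ = 0.5 the two distributions overlap substantially, so no single stimulus is diagnostic and evidence must be accumulated across the sequence.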
The stimulus onset will be initiated pseudo-randomly within each stimulus interval, such that onsets are irregular but separated by at least 166.7 ms. These timings and example stimuli are shown in Figure 1a. Stimuli will be displayed within an annular ‘colour-guide’, where the colour of the annulus corresponds to the probability of the orientation under each distribution, using the red and blue RGB channels to represent the probabilities of each orientation under each distribution. Stimuli will be presented in the centre of the screen, with a black central fixation point to guide observers’ gaze.

*Procedure*

The task is a modified version of the weather prediction task (Knowlton et al., 1996; Poldrack et al., 2001; Gluck et al., 2002; Yang and Shadlen, 2007). Throughout the experiment, the observer’s task is to categorise which distribution the stimuli were drawn from. They will be instructed to press the ‘d’ key (of a standard qwerty keyboard) for the blue distribution and the ‘k’ key for the orange distribution. There will be two variants of this task, the ‘Free’ task and the ‘Replay’ task, each containing 300 trials. The trials will be composed of three repetitions of 100 pre-defined trials of 40 stimuli for each observer (50 trials from each distribution). In the ‘Free’ task, observers will continually be shown stimuli (up to 40) until they enter their response. They will be instructed to enter their response as soon as they ‘feel ready’ to make a decision, with emphasis on both accuracy (they should respond when they feel they have a good chance of being correct) and time (they should not take too long on each trial, but respond as fast as they can whilst still performing the task). A graphical description of this task is shown in Figure 1b. After completing the ‘Free’ task, observers will complete the ‘Replay’ task.
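The colour-guide mapping described above could be sketched as below (a minimal Python illustration; the exact scaling of the Von Mises densities to RGB values is our assumption, not specified in the protocol):

```python
import numpy as np

def colour_guide_rgb(theta_deg, kappa=0.5):
    """Annulus colour at orientation theta (deg from vertical).

    The blue channel tracks the (unnormalised) Von Mises density of theta
    under the blue (+45 deg) distribution and the red channel the density
    under the orange (-45 deg) distribution, each scaled so its peak is 1.
    The green channel is left unused. This scaling is an assumption.
    """
    t = np.deg2rad(2.0 * theta_deg)  # doubled angle: 180-deg periodic
    p_blue = np.exp(kappa * (np.cos(t - np.pi / 2) - 1.0))    # peaks at +45 deg
    p_orange = np.exp(kappa * (np.cos(t + np.pi / 2) - 1.0))  # peaks at -45 deg
    return (p_orange, 0.0, p_blue)  # (R, G, B), each in [0, 1]
```

At vertical (0 deg) the two densities are equal, so the annulus colour is maximally ambiguous between the two categories.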
In this task they will be shown a specific number of stimuli and can only enter their response after the sequence finishes, signalled by the fixation point turning red. The number of stimuli is determined by the number of stimuli the observer chose to respond to in the ‘Free’ task. There will be three intermixed conditions: ‘Less’, where the observer is shown two fewer stimuli than the minimum they chose to respond to on that trial in the ‘Free’ task; ‘Same’, where the observer is shown the same number as their median on that trial in the ‘Free’ task; and ‘More’, where they will be shown four additional stimuli compared to the maximum they chose to respond to in the ‘Free’ task. After entering their categorisation (Type-I) response, they will be cued to give a confidence rating (Type-II decision). The confidence rating is made on a scale of 1 to 4, where 1 represents very low confidence that the Type-I decision was correct, and 4 certainty that it was correct. Observers enter their rating by pressing the space bar when the confidence dial reaches their desired level. The confidence dial is composed of a black radius line that rotates repeatedly through 30, 70, 110, and 150 degrees from horizontal at a rate of 1.33 Hz, with the corresponding confidence number displayed at each angle. The dial starts at a random confidence level on each trial. A graphical description of this task is shown in Figure 1c.

![][1]

***Figure 1. Procedure.** **a)** Stimulus timings. Each stimulus interval is 333 ms and begins with a variable blank, vs (vs = 83:16.7:133 ms), followed by 116 ms of stimulus-on, followed by a variable blank, ve, where vs + ve = 216 ms. The stimulus-on period ends with an off-ramp, vf, consisting of one frame (16.7 ms) at 0.5x contrast and one at 0.25x contrast, before the stimulus is flipped off.
The variability lies in whether this off-ramp begins 4, 3, or 2 frames before the end of the 116 ms duration, meaning that the actual full-contrast stimulus duration is jittered from trial to trial between 50 and 83 ms. **b)** Free task. Stimuli are continually presented until a response is entered, or until all 40 predefined stimuli have been shown. **c)** Replay task. A specific number of stimuli are presented, based on the number of stimuli the observer chose to respond to in the ‘Free’ task, followed by a response cue (the fixation point turning red). After entering their response, observers enter a confidence rating by pressing the space bar when the confidence dial reaches their confidence level. In the figure, the grey dashed lines show the other dial positions, but only a single black dial (here at 2) was shown to the observer at any one time.*

**Analysis**

1. Behavioural performance. Correct responses are those where the observer responds with the category from which the stimuli were actually drawn. Significantly above-chance performance is one of the inclusion criteria for further analysis. Further analysis will be performed on sensitivity (d’), where, based on previous results, we expect observers to perform significantly worse in the ‘Less’ condition of the ‘Replay’ task than on those same trials in the ‘Free’ task, with no significant differences between the ‘Same’ condition and those trials from the ‘Free’ task, nor between the ‘More’ condition and those trials from the ‘Free’ task. It is also important that confidence ratings are not made randomly. We expect to see increasing performance with increasing confidence within each observer, and this will also be used as an inclusion criterion for further analysis.

2. Computational modelling. Observers’ behavioural responses will be fit with a computational model describing the accumulation of evidence for making their decisions, based on the work of Drugowitsch, Wyart, and colleagues (2016).
The same analysis will be carried out as in the previous behavioural experiment: it is expected that fitting a ‘covert bound’ to Type-I decisions in the ‘Replay’ task will significantly improve the fit of the model. In fitting the Type-II responses, we expect to find that these decisions incur significantly more noise but significantly smaller temporal biases. We do not expect to see evidence for a covert bound on the accumulation of evidence for Type-II decisions.

3. EEG analysis. The EEG data will be pre-processed using the PREP pipeline (Bigdely-Shamlo et al., 2015) implemented in EEGLAB in MATLAB. Following this, the data will be filtered to frequencies between 1 and 40 Hz, and downsampled to 256 Hz. The data will then be epoched to each stimulus and to each response, and an Independent Components Analysis will be used to remove artefacts caused by blinks and excessive muscle movement. A decoding analysis will then be performed to decode different aspects of the internal processing: the stimulus orientation (Itthipuripat et al., 2017) and the decision update (Wyart et al., 2015). If the stimulus orientation cannot be decoded, this indicates that the data are excessively noisy and further analysis is unlikely to be possible. We will take differences in decoding performance and in the timing of peak decoding accuracy as evidence of partially dissociable processes, and expect the timing of these decoding functions to reflect the hierarchical nature of the decision processes. We will then examine the data around the time of the responses, and around the time of hitting the covert bound (which can be dissociated from the time of the response in the ‘Replay’ task). We expect to see evidence that observers make their Type-I decisions early in the ‘More’ condition of the ‘Replay’ task, whilst the Type-II decision evidence should continue to accumulate until the Type-I response (or possibly later).
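The shape of the time-resolved decoding analysis can be illustrated with the sketch below (pure NumPy, in Python rather than the MATLAB/EEGLAB pipeline named above; the simple least-squares classifier is a stand-in for the regularised decoders of the cited papers, and all variable names are illustrative):

```python
import numpy as np

def time_resolved_decoding(X, y, n_folds=5, seed=0):
    """Cross-validated decoding accuracy at each time point.

    X: (n_trials, n_channels, n_times) epoched EEG.
    y: (n_trials,) binary labels (e.g. blue vs orange category).
    A least-squares linear classifier is trained per time point; this is
    a simple stand-in for the decoders used in the cited work.
    """
    n_trials, _, n_times = X.shape
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_trials), n_folds)
    target = np.where(np.asarray(y) > 0, 1.0, -1.0)  # labels coded +/- 1
    acc = np.zeros(n_times)
    for t in range(n_times):
        Xt = X[:, :, t]
        correct = 0
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(n_trials), test_idx)
            # least-squares weights: Xt[train] @ w ~ target[train]
            w, *_ = np.linalg.lstsq(Xt[train_idx], target[train_idx], rcond=None)
            pred = np.sign(Xt[test_idx] @ w)
            correct += np.sum(pred == target[test_idx])
        acc[t] = correct / n_trials
    return acc
```

Applied to stimulus-locked epochs with category labels, the time course of `acc` is the kind of decoding function whose peak timing and amplitude would be compared across conditions.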
**References**

Bigdely-Shamlo, N., Mullen, T., Kothe, C., Su, K. M., & Robbins, K. A. (2015). The PREP pipeline: standardized preprocessing for large-scale EEG analysis. Frontiers in Neuroinformatics, 9, 16.

Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433-436.

Drugowitsch, J., Wyart, V., Devauchelle, A. D., & Koechlin, E. (2016). Computational precision of mental inference as critical source of human choice suboptimality. Neuron, 92(6), 1398-1411.

Gluck, M. A., Shohamy, D., & Myers, C. (2002). How do people solve the “weather prediction” task?: Individual variability in strategies for probabilistic category learning. Learning & Memory, 9, 408-418.

Itthipuripat, S., Chang, K. Y., Vo, V., Nelli, S., & Serences, J. (2017). Dissociable effects of stimulus strength, task demands, and training on occipital and parietal EEG signals during perceptual decision-making. Journal of Vision, 17(10), 37.

Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R., & Broussard, C. (2007). What’s new in Psychtoolbox-3. Perception, 36(14), 1.

Knowlton, B. J., Mangels, J. A., & Squire, L. R. (1996). A neostriatal habit learning system in humans. Science, 273, 1399-1402.

Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10(4), 437-442.

Poldrack, R. A., Clark, J., Paré-Blagoev, E. J., Shohamy, D., Creso Moyano, J., Myers, C., & Gluck, M. A. (2001). Interactive memory systems in the human brain. Nature, 414, 546-550.

Wyart, V., Myers, N. E., & Summerfield, C. (2015). Neural mechanisms of human perceptual choice under focused and divided attention. Journal of Neuroscience, 35(8), 3485-3498.

Yang, T., & Shadlen, M. N. (2007). Probabilistic reasoning by neurons. Nature, 447, 1075-1080.

[1]: https://mfr.osf.io/export?url=https://osf.io/4k2pr/?action=download&mode=render&direct&public_file=False&initialWidth=625&childId=mfrIframe&parentTitle=OSF%20%7C%20Procedure.jpg&parentUrl=https://osf.io/4k2pr/&format=2400x2400.jpeg