Category: Project

Description: In this MRI experiment we will test the underlying neural mechanisms of non-reinforced learning by changing preferences for face images using the Cue Approach Task (CAT). Recently, Schonberg et al. (2014) showed a preference increase toward snack-food items following a short (less than one hour) cue-approach training in which some items were consistently associated with a neutral tone and a button press.

Stimuli: Sixty colour images of unfamiliar faces, taken from the Siblings database (Vieira et al., 2014) and modified for the current procedure, will be used as stimuli for the current experiment.

Procedure: Subjects will undergo several tasks both outside and inside the MRI scanner. Inside the scanner, in order to control for viewing times and to make sure participants are engaged with the task, their eye position will be recorded using an MRI-compatible eye-tracker.

Outside the scanner:

1. Baseline evaluation - binary ranking: Subjects will be presented with a forced choice between 300 random pairs of stimuli. Each stimulus will be presented exactly 10 times, in a binary choice against 10 other random stimuli. For each choice, subjects may respond within a 2.5-second time window. After all choices are made, a ranking algorithm (Colley, 2008) will be applied to deduce a quantitative preference value for each item. These rankings will be used later in the probe phase to create pairs of similarly valued items. From all 60 items, a subset of 40 items in predetermined ranks will be chosen for the rest of the experiment.

Inside the scanner:

2. Passive viewing task: Subjects will passively view the subset of 40 images in order to evaluate the baseline neural activation for these stimuli prior to training. As a way of monitoring their engagement in the task, subjects will be asked to count how many male or female faces they have seen.

3. Cue-approach training: Subjects will undergo 16 cue-approach training runs; in each run, each image will be presented once. In each trial a single face will be presented at the centre of the screen. 30% of stimuli will be consistently associated with a tone appearing ~750 ms after visual stimulus onset. Subjects are asked to press a button on the response box with their index finger before visual stimulus offset (1000 ms after onset). Subjects are informed in advance that the stimulus-tone association is consistent.

4. Seven-minute break, during which anatomical scans are made.

5. Passive viewing task: Subjects will passively view the subset of 40 images in order to evaluate the post-training neural activation for these stimuli. As a way of monitoring their engagement in the task, subjects will be asked to count how many male or female faces they have seen.

6. Probe: Subjects will be presented with a forced-choice task between two faces out of the 40-face subset. Each choice has a 1500 ms response window, followed by an ITI of jittered duration. Three pair types are used: a) High-value (HV) pairs - stimuli ranked 7-18 - 6 Go items and 6 NoGo items (both Go and NoGo items have a mean rank of 12.5). In each pair subjects will be asked to choose between an HV Go item and an HV NoGo item. b) Low-value (LV) pairs - stimuli ranked 43-54 - 6 Go items and 6 NoGo items (both Go and NoGo items have a mean rank of 48.5). In each pair subjects will be asked to choose between an LV Go item and an LV NoGo item. c) Sanity check - 2 HV NoGo stimuli ranked 5-6, and 2 LV NoGo stimuli ranked 55-56. In each pair subjects will be asked to choose between an HV NoGo item and an LV NoGo item. We hypothesize that in pair types a and b subjects will choose the Go items over the NoGo items, and that in pair type c subjects will choose the HV items over the LV items.

7. Face localizer (1-back task): In order to functionally locate face-selective brain areas, we will use a block design of dynamic videos of faces and objects. Subjects will be presented with short 1000 ms videos of either faces or objects in motion (as well as blocks of fixation). In a one-back task, subjects will be asked to indicate when a video is presented twice in a row.
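The baseline evaluation cites Colley (2008) for turning binary choices into preference values. As an illustration only, here is a minimal sketch of Colley's least-squares rating method applied to pairwise choice data; the function name, the `(winner, loser)` pair encoding, and the example data are assumptions for this sketch, not part of the registered protocol:

```python
import numpy as np

def colley_ratings(n_items, choices):
    """Colley's method: solve C r = b, where for each comparison between
    items i and j the diagonal entries C[i,i] and C[j,j] gain 1, the
    off-diagonal entries C[i,j] and C[j,i] lose 1, and b shifts by
    +1/2 for the winner and -1/2 for the loser.

    choices: iterable of (winner_index, loser_index) pairs.
    Returns an array of ratings centred on 0.5 (higher = more preferred).
    """
    C = 2.0 * np.eye(n_items)   # Colley matrix starts at 2*I
    b = np.ones(n_items)        # right-hand side starts at 1
    for winner, loser in choices:
        C[winner, winner] += 1
        C[loser, loser] += 1
        C[winner, loser] -= 1
        C[loser, winner] -= 1
        b[winner] += 0.5
        b[loser] -= 0.5
    return np.linalg.solve(C, b)
```

With the 300 choices of the baseline phase (each of the 60 stimuli appearing in 10 pairs), the resulting ratings centre on 0.5 and can be sorted to obtain the ranks used to build the similarly valued probe pairs.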

Components

number of subjects

Based on a previous power analysis of the vmPFC BOLD signal, we decided to use n=45 subjects for the current experiment.


Hypothesis - Behavioral results

From a previous behavioral experiment with the cue approach task on identical face stimuli, we hypothesised that following cue-approach training, subjec...


Hypothesis - Imaging results

Our main imaging analysis will focus on the vmPFC region, which is known to be involved in processes of value-based decision making. Another set of ...


Analysis Plan - GLM and Behavior

Here we specify the analysis plan for the behavioral and functional data. The current plan covers GLM analysis, which will be done using FSL. Further an...


Analysis plan - PPI


