Data, material, and additional information for the NeuroImage paper: Ort, Fahrenfort, Reeder, Pollmann, & Olivers (2019). Frontal cortex differentiates between free and imposed target selection in multiple-target search. *NeuroImage (202)*, 116133. DOI: https://doi.org/10.1016/j.neuroimage.2019.116133

This paper describes a study in which we combined a gaze-contingent eye-tracking design with fMRI. Participants performed a visual search task for two target colors. The critical manipulation was whether both search targets appeared in a search display or only one of them. We hypothesized that when both targets are present, individuals use proactive cognitive control to prepare the search, whereas when only one target is present, they have to rely on reactive control mechanisms to find the target. fMRI was used to differentiate between these two modes of cognitive control. To facilitate replication efforts, we provide some raw and derivative data together with the essential scripts needed to reproduce the results we report. Additionally, the key group-level statistical brain maps were published on [Neurovault](https://neurovault.org/collections/5550/).

**Data**

The uploaded dataset consists of raw eye-tracking gaze data as well as behavioral files. Additionally, I uploaded preprocessed fMRI data (as well as behavioral and gaze data). All these data can be found in the folder *data*. The experiment was recorded in the fMRI scanner; therefore, the smallest unit of data is the run (one behavioral file, one gaze-data file, and one fMRI file per run).

*A) Raw data*

1) **Raw behavioral data** is in *csv* format, in which every row represents one trial. Data were recorded with OpenSesame, a Python-based experiment builder. The experiment is included in the folder *material*. In addition, we provide a file that explains all relevant variables and what they represent (*variables.txt*).

2) **Raw gaze data** is in *edf* format, as is the standard of the *EyeLink* eye-tracking system (London, Ontario, Canada).
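To illustrate the per-trial structure of the raw behavioral files, such a *csv* can be read with Python's standard library. This is only a sketch: the column names below are hypothetical, and the actual variables are documented in *variables.txt*.

```python
import csv
import io

# Hypothetical excerpt of a raw behavioral file: one row per trial.
# The real column names are documented in variables.txt.
raw = io.StringIO(
    "trial,condition,target_color,correct,rt\n"
    "1,both_targets,red,1,1.342\n"
    "2,one_target,green,0,1.877\n"
)

trials = list(csv.DictReader(raw))
accuracy = sum(int(t["correct"]) for t in trials) / len(trials)
print(len(trials), accuracy)  # 2 trials, mean accuracy 0.5
```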
*B) Processed data*

1) **Processed behavioral data** is a file in which gaze data and behavioral data are combined (one file per run, per participant). Furthermore, some variables that depend on gaze position were added. When the behavioral analysis (*ET_analysis.py*) is run on these data, the same behavioral results as shown in the paper should emerge.

2) **Preprocessed fMRI data** are *nifti* files (*nii.gz*) that were preprocessed with [fMRIPrep](https://fmriprep.readthedocs.io/en/stable/usage.html). Details on the preprocessing steps can be found in the manuscript. The folder contains anatomical and functional images (the brain mask as well as the actual data). Additionally, text files with TR-based confounds are included (again, see the manuscript for details). Finally, field maps are included as well, although they were not used in the data analysis. Note that no raw fMRI data are provided, as the informed consent given by participants did not include waiving privacy rights. Upon reasonable request, I am willing to share the raw data.

**Code**

Data were analyzed with Python 2.7 and R. fMRI data were analyzed with the Python-based tool nipype, which calls functions from a number of fMRI analysis software packages (incl. FSL, SPM, FreeSurfer, AFNI). Below I list the scripts that were used and describe how they fit into the general workflow.

*A) Behavioral analysis*

The file *preproc_pipeline.py* can be called to parse the gaze-data files into events (*parseASCII.py*), combine them with the behavioral file (*ET_preproc*), and add the necessary variables. This script produces the data files on which the analysis can then be run with *ET_analysis.R*. The drift-diffusion modeling is done in *behav_hddm.py* (running the models) and *hddm_load.py* (running the statistics on the model). Helper functions are defined in various files, all ending in *Utils.py*.
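The merge step of that pipeline can be pictured roughly as follows: gaze events parsed from the eye-tracker file are joined to the behavioral rows by trial number, one run at a time. This is a minimal sketch with hypothetical field names, not the actual implementation in *preproc_pipeline.py* / *ET_preproc*.

```python
# Sketch of combining parsed gaze events with behavioral data for one run.
# Field names (trial, fix_x, fix_y, ...) are hypothetical.
behavior = [
    {"trial": 1, "condition": "both_targets", "rt": 1.342},
    {"trial": 2, "condition": "one_target", "rt": 1.877},
]
gaze_events = [
    {"trial": 1, "fix_x": 512, "fix_y": 384},
    {"trial": 2, "fix_x": 120, "fix_y": 400},
]

# Index gaze events by trial number, then merge trial-wise.
gaze_by_trial = {g["trial"]: g for g in gaze_events}
merged = [{**b, **gaze_by_trial[b["trial"]]} for b in behavior]
```

Gaze-dependent variables (e.g., whether a fixation landed on a target) can then be computed per merged row.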
*B) GLM analysis*

After preprocessing with *fMRIPrep* (the command can be found in *runFMRIPrep.sh*), the data were further processed with nipype. The script *GLM_nipype.py* implements the GLM analysis, including some remaining preprocessing steps (i.e., high-pass filtering and smoothing) as well as first-level, second-level, and third-level statistics. Ideally, this script will reproduce the statistical maps that were uploaded to Neurovault. To produce the figures in the paper, the volumetric maps first have to be converted to surface maps and then edited somewhat with Python's *cv2* library, as implemented in the scripts *vol2surf.py* and *contours.py*.

*C) Deconvolution analysis*

The deconvolution analysis was done with the nideconv package by de Hollander & Knapen (see the paper for the reference). The scripts *runFytter.py* and *groupFytter.py* and their dependencies implement the analysis and the plotting. To statistically estimate the onsets, we ran the script *onsetDiff.py*.

**Other material**

Finally, we also provide some information on the participants of this study, the experimental code (which can be run with version [3.1.9](https://github.com/smathot/OpenSesame/releases/tag/release%2F3.1.9) of OpenSesame), and the license file for FreeSurfer (necessary for fMRIPrep).

**Disclaimer**

Even though the scripts provided here were tested to some extent, they will not work out of the box. In the best case, only the folder structure needs to be updated; in the worst case, more problems/bugs will arise. I do not take any responsibility if the scripts do not work, and I apologize for the messiness of the files. Nevertheless, if you have problems or questions, please feel free to contact me. I'll try my utmost to make the scripts work!

**Eduard Ort -- eduardxort@gmail.com**
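As a closing note on the deconvolution analysis mentioned above: although nideconv handles the estimation internally, the core idea can be sketched with plain numpy as a finite-impulse-response (FIR) regression, where the signal is regressed onto shifted impulse regressors. All numbers and names below are illustrative toy values, not taken from the actual analysis.

```python
import numpy as np

# Toy FIR deconvolution: recover an event-locked response from a noisy
# BOLD-like time series by least-squares regression on lagged regressors.
rng = np.random.default_rng(0)
n_tr, fir_len = 200, 10
onsets = np.arange(10, 190, 20)               # hypothetical event onsets (in TRs)
true_irf = np.exp(-np.arange(fir_len) / 3.0)  # toy impulse response

stim = np.zeros(n_tr)
stim[onsets] = 1.0
signal = np.convolve(stim, true_irf)[:n_tr] + 0.01 * rng.standard_normal(n_tr)

# Design matrix: one column per lag of the FIR basis.
X = np.column_stack([np.roll(stim, lag) for lag in range(fir_len)])
for lag in range(fir_len):
    X[:lag, lag] = 0.0  # remove wrap-around introduced by np.roll

# Least-squares estimate of the impulse response at each lag.
irf_hat, *_ = np.linalg.lstsq(X, signal, rcond=None)
```

With non-overlapping events and low noise, `irf_hat` closely recovers `true_irf`; overlapping events are exactly the case where this regression approach pays off compared with simple event-locked averaging.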