General Overview
================

In our paper "Relative precision of top-down attentional modulations is lower in early visual cortex compared to mid- and high-level visual areas", we present data from an fMRI experiment and an eyetracking experiment (the eyetracking results appear in Supplementary Figure 7). Below, you will find the experiment code used to present stimuli to our participants, the data we collected, as well as the analysis scripts we wrote that generate the figures and statistics in the paper. If any questions arise, you can email me at supark@ucsd.edu

________________________________________

Experiment code
===============

Task scripts consist of a main top-down attention task (`AttPrec_fmri.m`), a bottom-up mapping task (`sIEM_1D_mapping task.m`), and a functional localizer task (`AttPrec_localizer.m`).

________________________________________

Data
====

fMRI experiments
----------------

(1) There is one file with the behavioral data collected during the scanning session (for all participants combined).

(2) There is a "SampleFile" for every participant, whose main ingredient is a large time-point-by-voxel matrix (a minimal loading sketch appears at the end of this section). Each time point is a TR, and time is concatenated across all scanning sessions (spatial alignment and preprocessing were already performed). Each voxel is a retinotopically mapped visual voxel, and voxel indices are 3-dimensional matrix indices, so you can reconstruct where each voxel sits in the brain. If you'd like un-preprocessed and/or whole-brain data, please contact me.

(3) Finally, there is a "TimeCourses" file for every participant. This file is made by the `MakeTimecourseFile.m` analysis script and contains further-processed data relative to the "SampleFile": for example, we performed z-scoring, identified visually responsive voxels, and averaged delay-period data for each trial.

Eyetracking experiment
----------------------

(1) There is one file with the behavioral data collected during the eyetracking session (for all participants combined).

(2) There is an "ET_gaze" file for every participant that contains gaze position coordinates for each trial (preprocessing already performed).
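For orientation, here is a minimal MATLAB sketch of how a per-participant "SampleFile" could be loaded and inspected. The file name and variable names (`SampleFile_S01.mat`, `samples`, `voxIdx`) are hypothetical placeholders; check the actual .mat files for the real names:

```matlab
% Minimal sketch: load and inspect one participant's "SampleFile".
% NOTE: the file name and variable names below are hypothetical placeholders.
S = load('SampleFile_S01.mat');

samples = S.samples;   % [nTRs x nVoxels] time-point-by-voxel matrix
voxIdx  = S.voxIdx;    % [nVoxels x 3] 3-D matrix indices per voxel

fprintf('%d TRs (concatenated across sessions), %d voxels\n', ...
        size(samples, 1), size(samples, 2));

% Because each voxel carries 3-D matrix indices, its location in the
% original functional volume can be reconstructed, e.g. for voxel k:
k = 1;
fprintf('Voxel %d is at volume index (%d, %d, %d)\n', ...
        k, voxIdx(k, 1), voxIdx(k, 2), voxIdx(k, 3));
```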
________________________________________

Analysis
========

fMRI analysis: Introduction
---------------------------

The scripts are edited so that they will run successfully if all the data files are in the same folder as the scripts. If the data files are organized differently, please make sure the directory information at the top of each script matches your layout.

While most scripts use a known random seed, I ran some of them from scratch and others in stages. This means that you may not always replicate the exact p-values reported in our paper, and some minor numerical differences can emerge. Potentially of note: I used Matlab 2016b on Linux for all these analyses (a few exceptions are noted below), so I cannot guarantee behavior in other environments.

fMRI analysis: Scripts
----------------------

It is recommended that the scripts are run consecutively in the order listed here, as there are some dependencies between them (some scripts generate .mat outputs that are read in by other scripts). To make everything run quickly, I have also included the .mat outputs that the scripts generate. Of course, you are free to regenerate them; it just may take a while.

- `MakeTimecourseFile.m` takes the data from the "SampleFiles" and performs a number of further processing steps (such as z-scoring, defining visually responsive voxels, etc.). It outputs the "TimeCourses" data files.
- `MVPA.m` takes the data from the "TimeCourses" files and runs MVPA. Three analyses are done: train & test on bottom-up mapping data (referred to as 'map' in the scripts) using a leave-one-run-out procedure; train on bottom-up data and test on top-down attention task data (referred to as 'main' in the scripts); and train & test on top-down data using a 4-fold cross-validation procedure (a schematic sketch of the leave-one-run-out scheme appears after this list). It saves out `result_MVPA_TrnMapTstMain_12loc2wedge_meancenter.mat`.
- `MVPA_24loc.m` takes the data from the "TimeCourses" files and runs MVPA without collapsing data across pairs of wedges, i.e., a 24-way decoding analysis. It saves out `result_MVPA_TrnMapTstMain_24loc_meancenter.mat`.
- `MVPA_timeseries.m` takes the data from the "TimeCourses" files and runs MVPA on a sliding window of TRs to obtain a timecourse of decoding accuracies. It saves out `result_MVPA_timeseries_win3TR.mat`.
- `MVPA_top300voxels.m` takes the data from the "TimeCourses" files and runs MVPA on the 300 voxels most responsive to the localizer. It saves out `result_MVPA_TrnMapTstMain_12loc2wedge_meancentered_top300.mat`.
- `MVPA_topdownvoxsel.m` takes the data from the "TimeCourses" files and runs MVPA using an alternative voxel selection method, in which top-down attention task data are used to select the voxels most responsive to the cued quadrants, with a leave-one-run-out method to avoid circularity. It saves out `result_MVPA_tdvs.mat`.
- `MVPA_topdownvoxsel_circular.m` takes the data from the "TimeCourses" files and runs MVPA using an alternative voxel selection method, in which top-down attention task data are used to select the voxels most responsive to the cued quadrants. It uses all runs of the top-down task to select voxels, in an attempt to bias the selection in favor of top-down decoding as much as possible. It saves out `result_MVPA_tdvs_circular.mat`.
- `plotMVPA.m` takes in the result files created by the MVPA scripts above and plots Figures 2, 3, and 4 and Supplementary Figures 3, 4, 5, 6, 8, and 9. Make sure to switch the result files at the top of the script according to which figure you want to plot.
- `plotMVPA_timeseries.m` takes in `result_MVPA_timeseries_win3TR.mat` (created by `MVPA_timeseries.m`) and plots Supplementary Figure 10.
- `modelConfusionMatrix.m` takes in `result_MVPA_TrnMapTstMain_12loc2wedge_meancenter.mat` (created by `MVPA.m`) and fits off-diagonal/diagonal models to the confusion matrices. It generates Figures 5A and 5B and saves out `result_MVPA_BetaRatio.mat`.
- `permMVPA.m` takes the data from the "TimeCourses" files (on its first run), `result_MVPA_TrnMapTstMain_12loc2wedge_meancenter.mat` (created by `MVPA.m`), and `result_MVPA_BetaRatio.mat` (created by `modelConfusionMatrix.m`), and performs randomization tests of cue and ROI effects on the top-down decoding accuracies, the accuracy ratio scores, and the beta ratio scores from fitting the confusion matrices. It saves out `DecodingForStat.mat`.
- `permMVPA_withinSubj.m` takes in `DecodingForStat.mat` (created by `permMVPA.m`) and performs randomization tests of the main results at the single-subject level.
- `plotUnivariate.m` takes the data from the "SampleFiles" and the "TimeCourses" files (on its first run) and plots BOLD signal changes for the top-down attention task conditions. It generates Supplementary Figure 2 and saves out `UnivariateForStat.mat`.
- `permUnivariate.m` takes in `UnivariateForStat.mat` (created by `plotUnivariate.m`) and performs randomization tests of cue and ROI effects on the univariate BOLD signal changes.
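To make the cross-validation logic concrete, here is a minimal MATLAB sketch of the leave-one-run-out decoding scheme referenced above. Everything here is a hypothetical stand-in (synthetic data, `fitcecoc` classifier); the published scripts may use a different classifier and data layout:

```matlab
% Minimal sketch of leave-one-run-out decoding, in the spirit of the MVPA
% scripts above. All data and names are hypothetical stand-ins.

% Synthetic stand-in data so the sketch runs end-to-end:
nRuns = 8; nTrialsPerRun = 24; nVoxels = 100; nLoc = 12;
runs   = repelem((1:nRuns)', nTrialsPerRun);                  % run index per trial
labels = repmat((1:nLoc)', nRuns * nTrialsPerRun / nLoc, 1);  % location labels
data   = randn(nRuns * nTrialsPerRun, nVoxels);               % [nTrials x nVoxels]
data   = data + 0.3 * labels * randn(1, nVoxels);             % label-dependent signal

acc = nan(nRuns, 1);
for r = 1:nRuns
    testIdx  = (runs == r);            % hold out one run for testing
    trainIdx = ~testIdx;               % train on all remaining runs

    % Mean-center each voxel using training data only (cf. "meancenter"
    % in the result file names above)
    mu     = mean(data(trainIdx, :), 1);
    Xtrain = bsxfun(@minus, data(trainIdx, :), mu);
    Xtest  = bsxfun(@minus, data(testIdx,  :), mu);

    mdl    = fitcecoc(Xtrain, labels(trainIdx));   % multi-class classifier
    pred   = predict(mdl, Xtest);
    acc(r) = mean(pred == labels(testIdx));        % accuracy on held-out run
end
fprintf('Leave-one-run-out decoding accuracy: %.1f%%\n', 100 * mean(acc));
```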
Behavioral analysis
-------------------

- `analyzeBehavior.m` performs analysis of the behavioral performance on the top-down attention task, during the scanning and/or the eyetracking session. It outputs Figures 1C and 1D, as well as Supplementary Figure 1. It also performs the statistics that test cue effects on accuracy and tilt offset. Change the value of `whichData` at the top of the script to analyze data from the scanning session, the eyetracking session, or both. Note that this script was run in Matlab 2019a.

Eyetracking analysis
--------------------

- `analyzeEyetracking.m` performs analysis of participants' gaze position during the top-down attention task, from data collected during a separate eyetracking session.

________________________________________

Figure index
============

Provided here is an index of each figure in the paper, noting the analysis script(s) required to generate it.

- Figure 1C & 1D: `analyzeBehavior.m`
- Figure 2A & 2B: `MVPA.m`, `plotMVPA.m`
- Figure 3A & 3B: `MVPA.m`, `plotMVPA.m` (see comment inside)
- Figure 4: `MVPA.m`, `plotMVPA.m`
- Figure 5A & 5B: `MVPA.m`, `modelConfusionMatrix.m`
- Supplementary Figure 1: `analyzeBehavior.m`
- Supplementary Figure 2: `plotUnivariate.m`
- Supplementary Figures 3 & 4: `MVPA.m`, `plotMVPA.m`
- Supplementary Figure 5: `MVPA_top300voxels.m`, `plotMVPA.m` (see comment inside)
- Supplementary Figure 6: `MVPA_24loc.m`, `plotMVPA.m` (see comment inside)
- Supplementary Figure 7B: `analyzeEyetracking.m`
- Supplementary Figure 8: `MVPA_topdownvoxsel.m`, `plotMVPA.m` (see comment inside)
- Supplementary Figure 9: `MVPA_topdownvoxsel_circular.m`, `plotMVPA.m` (see comment inside)
- Supplementary Figure 10: `MVPA_timeseries.m`, `plotMVPA_timeseries.m`
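As a usage example, assuming all data files sit in the same folder as the scripts (as the scripts expect) and that the .m files run as plain scripts, Figure 2 could be regenerated from scratch with:

```matlab
% Regenerate Figure 2A & 2B (assumes the data files are in the current folder):
MVPA        % writes result_MVPA_TrnMapTstMain_12loc2wedge_meancenter.mat
plotMVPA    % reads the result file selected at the top of the script and plots
```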