## General Overview

For our paper "Coexisting representations of sensory and mnemonic information in human visual cortex" a total of three experiments were run: two fMRI experiments (E1 and E2) and a behavior-only experiment (presented in Supplementary Figure 9 of the paper). Below you will find the experiment code used to present stimuli to our participants, the data we collected, and the analysis scripts we wrote that generate the figures and statistics in the paper. If any questions arise, you can email me at rosanne.rademaker@gmail.com

----------

## Experiment code

Task scripts used to present stimuli to our participants are organized by experiment.

**Experiment 1** (E1) consisted of a main memory task (helper functions included) and a mapping task. The scripts used for participants to practice the tasks are also included.

**Experiment 2** (E2) consisted of a main memory task and two mapping tasks. For the main task, helper functions are included, and so are the gazebo images used as distractors. Face images are not included due to privacy/sharing permissions. Scripts used for participants to practice the main task and one of the mapping tasks are also included (the second mapping task was never formally practiced).

**Behavioral experiment** consisted of a single experimental script and a practice script.

----------

## Data

**fMRI experiments**

Data for Experiment 1 (E1) and Experiment 2 (E2) can be found in separate folders. Note that in the main "Data" directory there is a file called "fMRI_Data_ReadMe.m" detailing the exact contents of the fMRI data files. Here I will only roughly outline the contents of each experiment-specific data folder:

(1) There is one file with the behavioral data collected during scanning (for all participants combined).

(2) There is a "SampleFile" for every participant, whose main ingredient is a big time point by voxel matrix. Each time point is a TR, and time is concatenated across all 3 scanning sessions (spatial alignment and preprocessing were already performed). Each voxel is a retinotopically mapped visual voxel (voxel indices are 3-dimensional matrix indices, so you can reconstruct where they are in the brain). If you'd like un-preprocessed and/or whole-brain data, please contact me. (A minimal loading sketch is given at the end of this section.)

(3) Finally, there is a "TimeCourses" file for every participant. This file is made by the "VoxelSelectionE1.m" and "VoxelSelectionE2.m" analysis scripts, and contains further processed data relative to the "SampleFile". For example, we performed z-scoring, located visually responsive voxels, averaged delay data, etc.

**Behavioral experiment**

Data for the behavioral experiment consist of 21 files, i.e. one file per participant (this also includes participants who were excluded from analysis). The analysis script corresponding to this behavioral experiment only loads data from the 17 included participants. The primary reason for exclusion was participants dropping out of the experiment before its conclusion.
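For orientation, below is a minimal Matlab sketch of how one might load and inspect a SampleFile. The file name (`SampleFile_S01.mat`) and variable names (`samples`, `voxelIndices`) are placeholders chosen for illustration; the actual names and contents are documented in "fMRI_Data_ReadMe.m".

```matlab
% Minimal sketch: load one participant's SampleFile and inspect it.
% NOTE: the file name and variable names below are hypothetical;
% consult fMRI_Data_ReadMe.m for the actual contents of these files.
samplefilepath = '/path/to/Data/SamplesE1/';               % adjust to your setup
S = load(fullfile(samplefilepath, 'SampleFile_S01.mat'));  % hypothetical file name

data   = S.samples;       % [nTRs x nVoxels], TRs concatenated across all 3 sessions
voxIdx = S.voxelIndices;  % [nVoxels x 3] matrix indices of each retinotopic voxel

fprintf('%d TRs x %d visual voxels\n', size(data, 1), size(data, 2));

% Because voxel indices are 3-dimensional matrix indices, each voxel's
% position in the brain volume can be recovered, e.g. for the first voxel:
disp(voxIdx(1, :));
```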
----------

## Analyses

**fMRI analyses: Introduction**

To run the analysis scripts successfully, put the scripts in a folder to which you also add the folder called "HelperScripts", which contains a number of functions that are called from various scripts. Additionally, you need a folder named "SamplesE1" containing the data files for E1, and a folder named "SamplesE2" containing the data files for E2. Make sure to accurately define the full path locating these data folders (usually under "my_path" or "samplefilepath"). While most scripts use a known random seed, I ran some of these from scratch and others in stages. This means that you may not always replicate the exact p-values reported in our paper, and some minor numerical differences can emerge. Potentially of note: I used Matlab 2016b on Linux for all these analyses, so I cannot guarantee behavior in other environments.
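As a concrete (assumed) example of the folder layout and path definitions described above; the exact variable names inside each script may differ, so treat this as a sketch rather than a drop-in snippet:

```matlab
% Assumed folder layout for running the analysis scripts:
%
%   my_analysis_folder/
%       HelperScripts/      % shared functions called by the analysis scripts
%       SamplesE1/          % data files for Experiment 1
%       SamplesE2/          % data files for Experiment 2
%       VoxelSelectionE1.m, IEM_avg_independent.m, ...  % analysis scripts
%
% Inside the scripts, point the path variable (e.g. "my_path" or
% "samplefilepath") at these folders, and make the helper functions visible:
my_path = '/path/to/my_analysis_folder/';     % adjust to your setup
addpath(my_path);
addpath(fullfile(my_path, 'HelperScripts'));
```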
**fMRI analyses: Scripts**

Here I have ordered the scripts in a way that seems sensible to me. Moreover, this order ensures that, if run consecutively, everything works. This is because some dependencies exist: scripts may generate .mat outputs that are read in by other scripts. To make everything run quickly, I have included these .mat outputs. Of course, you are free to regenerate them; it just may take a while.

- "VoxelSelectionE1.m" & "VoxelSelectionE2.m" take the data from the "SampleFiles" and perform a number of further processing steps (such as z-scoring, defining visually responsive voxels, etc.). They output the "TimeCourses" data files.
- "BehavioralAnalysis.m" performs the analysis of participants' behavior in the scanner. It outputs Figures 1b and 3b, as well as Supplementary Figures 1 and 2. It also performs the statistics that allow formal comparison of working memory performance during the three distractor conditions.
- "UnivariateResponses.m" performs the univariate analysis (deconvolution). It outputs Supplementary Figure 3.
- "IEM_avg_independent.m" performs the IEM analysis for a model trained on the independent mapping data and tested on the averaged working memory delay data. It outputs Figures 1c, 1e, 3c, and 3d, as well as Supplementary Figure 5. It also outputs "Recons_avg_independent.mat", which contains (1) the modeled reconstructions for every participant, ROI, and condition, (2) the fidelity metric for every participant, ROI, and condition, and (3) the names of the ROIs in the analysis (see the loading sketch after this list).
- "IEM_avg_leave1out.m" performs the IEM analysis for a model trained and tested on the averaged working memory delay data. As currently set, it will perform a leave-one-trial-out cross-validation scheme, but you can also set a flag to leave one session out. It outputs Figure 4, as well as Supplementary Figure 11. It also outputs "Recons_avg_leave1out.mat", which contains (1) the modeled reconstructions for every participant, ROI, and condition, (2) the fidelity metric for every participant, ROI, and condition, and (3) the names of the ROIs in the analysis.
- "IEM_avg_stats.m" performs resampling and statistics on the IEM analyses using averaged delay data. You can opt to do this for the independently trained or leave-one-out trained model responses; thus, it will load either "Recons_avg_independent.mat" or "Recons_avg_leave1out.mat". After resampling and calculating fidelities for the resampled data, it will add the reshuffled fidelities to these files. Furthermore, this script outputs new files, either "Stats_avg_independent.mat" or "Stats_avg_leave1out.mat", which hold p-values and ANOVA tables for the statistical analyses that were run. The contents of Supplementary Tables 1 through 8 were generated using this script.
- "IEM_overtime_independent.m" performs the IEM analysis for a model trained on the independent mapping data and tested on every TR of the working memory trial. It outputs Figures 2 and 3e, as well as Supplementary Figure 6. It also outputs "Recons_overtime_independent.mat", which contains (1) the modeled reconstructions for every participant, ROI, condition, and TR, (2) the fidelity metric for every participant, ROI, condition, and TR, and (3) the names of the ROIs in the analysis.
- "IEM_overtime_leave1out.m" performs the IEM analysis at every TR of a working memory trial. The model is trained on all-but-one memory trials within a single distractor condition, and tested on the left-out trial. It outputs Supplementary Figure 12. It also outputs "Recons_overtime_leave1out.mat", which contains (1) the modeled reconstructions for every participant, ROI, condition, and TR, (2) the fidelity metric for every participant, ROI, condition, and TR, and (3) the names of the ROIs in the analysis.
- "IEM_overtime_stats.m" performs resampling and statistics on the TR-by-TR IEM analyses. You can opt to do this for the independently trained or leave-one-out trained model responses; thus, it will load either "Recons_overtime_independent.mat" or "Recons_overtime_leave1out.mat". After resampling and calculating fidelities for the resampled data, it will add the reshuffled fidelities to these files. Depending on the flag you set, it outputs either Figures 2b and 3e, as well as Supplementary Figure 6 (for the independently trained model), or Supplementary Figure 12 (for the leave-one-out trained model).
- "MVPA_SVM_avg_independent.m" performs the decoding analysis for a model trained on the independent mapping data and tested on the averaged working memory delay data. It outputs Figure 5b and Supplementary Figure 10. It also outputs "MVPA_Bowtie_independent.mat", which stores decoding accuracies, shuffled decoding accuracies, and statistics.
- "MVPA_SVM_avg_leave1out.m" performs the decoding analysis for a model trained and tested on the averaged working memory delay data (via leave-one-trial-out cross-validation). It outputs Figure 5c. It also outputs "MVPA_Bowtie_leave1out.mat", which stores decoding accuracies, shuffled decoding accuracies, and statistics.
- "IEM_avg_independent_E1_targetdistdiff.m" analyzes the distractor grating condition from Experiment 1 as a function of target-distractor orientation difference. It outputs Supplementary Figures 7 & 8. Note: this script loads "Recons_avg_independent.mat", thus relying on it having been created at an earlier stage.
- "IEM_avg_independent_IPS_all_vox.m" does the same thing as "IEM_avg_independent.m" but only for IPS regions with all voxels included (i.e. no selection for visually responsive voxels). It outputs the left panels of Supplementary Figure 13.
- "IEM_avg_leave1out_IPS_all_vox.m" does the same thing as "IEM_avg_leave1out.m" but only for IPS regions with all voxels included (i.e. no selection for visually responsive voxels). It outputs the right panels of Supplementary Figure 13.
- "IEM_avg_stats_IPS_all_vox.m" does the same thing as "IEM_avg_stats.m" but only for IPS regions with all voxels included (i.e. no selection for visually responsive voxels).
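If you want to reuse the modeled reconstructions or fidelities outside these scripts, a minimal loading sketch is shown below. This wiki only specifies what each "Recons" file contains, not the variable names, so inspect the file before indexing into it:

```matlab
% Minimal sketch: inspect one of the IEM output files produced above.
% The variable names inside are not specified on this wiki, so list them first.
R = load('Recons_avg_independent.mat');   % or Recons_avg_leave1out.mat, etc.
disp(fieldnames(R));                      % (1) reconstructions per participant/ROI/condition,
                                          % (2) fidelity per participant/ROI/condition,
                                          % (3) ROI names (plus reshuffled fidelities
                                          %     once IEM_avg_stats.m has been run)
```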
**Behavioral experiment**

This folder contains one script called "WM_DistRand_analysis.m", which outputs Supplementary Figure 9. It analyzes the data from the behavioral experiment collected in the lab (not in the scanner). To work, it assumes the behavioral data are in a "Data" folder in the same directory. It also relies on functions in the folder "HelperScripts" (mainly functions from the circular statistics toolbox), so make sure you set all your paths correctly.

----------

## Figure index

Provided here is an index for each figure in the paper, noting the analysis script required to generate it.

*Figure 1b:* BehavioralAnalysis.m
*Figure 1c:* IEM_avg_independent.m
*Figure 1e:* IEM_avg_independent.m
*Figure 2a:* IEM_overtime_independent.m
*Figure 2b:* IEM_overtime_independent.m (IEM_overtime_stats.m for a version with significance labeled)
*Figure 3b:* BehavioralAnalysis.m
*Figure 3c:* IEM_avg_independent.m
*Figure 3d:* IEM_avg_independent.m
*Figure 3e:* IEM_overtime_independent.m (IEM_overtime_stats.m for a version with significance labeled)
*Figure 4:* IEM_avg_leave1out.m
*Figure 5b:* MVPA_SVM_avg_independent.m (note: the grey bars are higher than in the original paper, due to a small bug in a previous script)
*Figure 5c:* MVPA_SVM_avg_leave1out.m
*Supplementary Figure 1:* BehavioralAnalysis.m
*Supplementary Figure 2:* BehavioralAnalysis.m
*Supplementary Figure 3:* UnivariateResponses.m
*Supplementary Figure 5:* IEM_avg_independent.m
*Supplementary Figure 6:* IEM_overtime_independent.m (IEM_overtime_stats.m for a version with significance labeled)
*Supplementary Figures 7 & 8:* IEM_avg_independent_E1_targetdistdiff.m
*Supplementary Figure 9:* WM_DistRand_analysis.m
*Supplementary Figure 10:* MVPA_SVM_avg_independent.m
*Supplementary Figure 11:* IEM_avg_leave1out.m
*Supplementary Figure 12:* IEM_overtime_leave1out.m (IEM_overtime_stats.m for a version with significance labeled)
*Supplementary Figure 13:* IEM_avg_independent_IPS_all_vox.m (left) and IEM_avg_leave1out_IPS_all_vox.m (right)