## Introduction

Data from [McMahon et al. (2023)](https://psyarxiv.com/x3avb/).

## Code

Analysis code used to produce the results in [McMahon et al. (2023)](https://psyarxiv.com/x3avb/) is available on [GitHub](https://github.com/Isik-lab/SIfMRI_analysis.git).

## Stimuli

For the stimuli used in the experiment, please email Emalie McMahon: emaliemcmahon@jhu.edu

## Directory contents

### additional figures

Additional subjects and views of the results in the main text. The folders are named after the main-text figure that the additional subjects/views correspond to.

### anatomy

Preprocessed anatomical data from [fMRIPrep](https://fmriprep.org/en/stable/) for each of the four subjects.

### annotations

#### annotations.csv

The average annotation for each video on each of the annotated dimensions, plus an indoor dimension that was rated only by Emalie McMahon.

#### individual_subject_ratings.csv

The anonymized data for the individual subjects who rated each video.

#### test_categories.csv

The Moments in Time (MiT) category that each video in the test set belongs to.

#### train_categories.csv

The MiT category that each video in the training set belongs to.

#### test.csv

List of videos in the test set.

#### train.csv

List of videos in the training set.

#### video_names.txt

List of all video names in both the training and test sets.

#### face_bounding_boxes.csv

A CSV file with bounding box annotations for the location of the faces in the videos.

### betas

#### sub-01_space-T1w_desc-train-fracridge_data.nii.gz

Example naming for the average beta values from the final stage (fractional ridge regression) of [GLMSingle](https://glmsingle.readthedocs.io/en/latest/wiki.html) for videos in the training set. The order of values corresponds to the order in `train.csv`. These values were smoothed to 3 mm FWHM using [nilearn.image.smooth_img](https://nilearn.github.io/stable/modules/generated/nilearn.image.smooth_img.html). The beta estimates were normalized within session and then averaged across all sessions. These are in the subjects' native space.

#### sub-01_space-T1w_desc-test-fracridge_data.nii.gz

Same as `sub-01_space-T1w_desc-train-fracridge_data.nii.gz`, but for videos in the test set, corresponding to the order in `test.csv`.

#### sub-01_space-T1w_desc-train-fracridge_even_data.nii.gz

Example naming for the average beta values on even presentations of each video, normalized within session and then averaged as for the full data set. This file is for data in the training set; the test-set equivalent is named, for example, `sub-01_space-T1w_desc-test-fracridge_even_data.nii.gz`. Odd presentations are named as `sub-01_space-T1w_desc-train-fracridge_odd_data.nii.gz`.
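Below is a minimal Python sketch (not taken from the analysis repository) of how the train-set betas might be loaded and lined up with the video order in `train.csv` and the averaged ratings in `annotations.csv`. The fourth dimension of the NIfTI file follows the order of `train.csv` as described above; the `video_name` column name and the exact directory layout are assumptions and may need adjusting to the files as downloaded.

```python
# Minimal sketch: load sub-01's train-set betas and align them with the video
# list and averaged annotations. The "video_name" column and the relative
# paths are assumptions, not guaranteed by the dataset.
import nibabel as nib
import pandas as pd

betas_img = nib.load("betas/sub-01_space-T1w_desc-train-fracridge_data.nii.gz")
betas = betas_img.get_fdata()  # 4-D array: (x, y, z, n_train_videos)

train_videos = pd.read_csv("annotations/train.csv")       # order matches the 4th axis
annotations = pd.read_csv("annotations/annotations.csv")  # averaged ratings per video

# Reorder the annotations to the beta/video order given by train.csv
# (assumes both files share a "video_name" column).
annotations = annotations.set_index("video_name").loc[train_videos["video_name"]]

# Example: the response of a single voxel across all training videos
voxel_response = betas[30, 40, 25, :]
assert voxel_response.shape[0] == len(train_videos)
```

The test-set betas follow the same pattern with `desc-test` in the file name and the order given by `test.csv`.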
### eyetracking.zip

The zipped eyetracking data for each of the 11 subjects. Eyetracking subj010 is fMRI sub-02, and eyetracking subj016 is fMRI sub-03. The data are minimally preprocessed but converted into a trial structure.

### freesurfer.zip

The zipped FreeSurfer reconstructions of all four subjects.

### localizers

Localized regions of interest (ROIs) are defined in individual subjects (e.g., `sub-01`), separately for the two hemispheres (`hemi-lh` or `hemi-rh`), and in each subject's native space (`space-T1w`). A sketch of how these ROI files can be combined with the betas appears at the end of this page.

#### task-biomotion

Task presentation is from [Yargholi et al. (2022)](https://academic.oup.com/cercor/article/33/4/1462/6577159).

* **roi-biomotion:** 10% most active voxels in a biological motion parcel from [Deen et al. (2015)](https://academic.oup.com/cercor/article/25/11/4596/2367585?login=true). Not used in the main analyses because the region was difficult to localize in all four subjects.
* **roi-MT:** anatomical parcel from [Wang et al. (2015)](https://academic.oup.com/cercor/article/25/10/3911/393661?login=true) morphed to the subjects' native space.

#### task-EVC

* **roi-EVC:** anatomical parcel for V1v, V1d, V2v, and V2d combined from [Wang et al. (2015)](https://academic.oup.com/cercor/article/25/10/3911/393661?login=true) and morphed to the subjects' native space.

#### task-FBOS (faces, bodies, objects, and scenes)

Task presentation is a dynamic localizer from [Pitcher et al. (2011)](https://www.sciencedirect.com/science/article/pii/S1053811911003466?via%3Dihub).

* **roi-EBA:** 10% most active voxels based on a bodies-objects contrast in an EBA parcel from [Julian et al. (2012)](https://www.sciencedirect.com/science/article/pii/S1053811912002339?via%3Dihub).
* **roi-face-pSTS:** 10% most active voxels based on a faces-objects contrast in a faces-pSTS parcel from [Julian et al. (2012)](https://www.sciencedirect.com/science/article/pii/S1053811912002339?via%3Dihub).
* **roi-FFA:** 10% most active voxels based on a faces-objects contrast in an FFA parcel from [Julian et al. (2012)](https://www.sciencedirect.com/science/article/pii/S1053811912002339?via%3Dihub).
* **roi-LOC:** 10% most active voxels based on an objects-scrambled objects contrast (except sub-01, who did not see scrambled objects; an objects-faces contrast was used instead) in an LOC parcel from [Julian et al. (2012)](https://www.sciencedirect.com/science/article/pii/S1053811912002339?via%3Dihub).
* **roi-PPA:** 10% most active voxels based on a scenes-objects contrast in a PPA parcel from [Julian et al. (2012)](https://www.sciencedirect.com/science/article/pii/S1053811912002339?via%3Dihub).

#### task-SIpSTS

Task presentation is from [Isik et al. (2017)](https://www.pnas.org/doi/abs/10.1073/pnas.1714471114).

* **roi-pSTS:** 10% most active voxels based on an interacting minus non-interacting contrast in the posterior half of the STS, as defined anatomically by [Deen et al. (2015)](https://academic.oup.com/cercor/article/25/11/4596/2367585?login=true).
* **roi-aSTS:** 10% most active voxels based on an interacting minus non-interacting contrast in the anterior half of the STS, as defined anatomically by [Deen et al. (2015)](https://academic.oup.com/cercor/article/25/11/4596/2367585?login=true).

#### task-tom

Task is adapted from [Dodell-Feder et al. (2011)](https://www.sciencedirect.com/science/article/pii/S1053811910016241).

* **roi-TPJ:** 10% most active voxels based on a belief minus photo contrast in the TPJ, as defined anatomically by [Deen et al. (2015)](https://academic.oup.com/cercor/article/25/11/4596/2367585?login=true). Not used in the final analyses as the TPJ largely falls outside of the reliable voxels.

### SIdyads_behavior

The timing, order, and responses of participants in the scanner for the main experimental task.
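As a rough illustration of how the localizer ROIs can be combined with the betas, the sketch below masks sub-01's train-set betas with a right-hemisphere SI-pSTS ROI. The ROI file name is assembled from the naming components described in the localizers section (subject, task, hemisphere, space, and ROI labels) and is an assumption, as is the ROI being stored as a binary NIfTI mask; adjust both to the actual files in the directory.

```python
# Rough sketch: extract beta values within a localizer ROI. The ROI path below
# is a guess from the naming conventions described above, and the ROI is
# assumed to be a binary mask in the same T1w space as the betas.
import nibabel as nib

betas = nib.load("betas/sub-01_space-T1w_desc-train-fracridge_data.nii.gz").get_fdata()
roi = nib.load(
    "localizers/sub-01/sub-01_task-SIpSTS_hemi-rh_space-T1w_roi-pSTS.nii.gz"
).get_fdata()

# Both images are in the subject's native space, so the 3-D ROI mask can be
# applied directly to the 4-D beta array.
mask = roi > 0
roi_betas = betas[mask, :]               # (n_roi_voxels, n_train_videos)
mean_response = roi_betas.mean(axis=0)   # average ROI response to each video
```

The same pattern should apply to the other ROI files listed above, changing only the `task-`, `hemi-`, and `roi-` labels.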