The data stored here have been used in a number of projects comparing visual object representations between the human ventral visual regions and 14 different convolutional neural networks (CNNs). Details of the fMRI data can be found in these two publications:

- Vaziri-Pashkam, M. and Xu, Y. (2019). An information-driven two-pathway characterization of occipito-temporal and posterior parietal visual object representations. Cerebral Cortex, 29, 2034-2050. ([pdf][1])
- Vaziri-Pashkam, M., Taylor, J. and Xu, Y. (2019). Spatial frequency tolerant visual object representations in the human ventral and dorsal visual processing pathways. Journal of Cognitive Neuroscience, 31, 49-63. ([pdf][2])

The following projects have used these datasets:

- Xu, Y. and Vaziri-Pashkam, M. (2020). Limited correspondence in visual representation between the human brain and convolutional neural networks. bioRxiv. ([link][3])
- Xu, Y. and Vaziri-Pashkam, M. (2020). The development of transformation tolerant visual representations differs between the human brain and convolutional neural networks. bioRxiv. ([link][4])
- Xu, Y. and Vaziri-Pashkam, M. (2020). The relative coding strength of object identity and nonidentity features in human occipito-temporal cortex and convolutional neural networks. bioRxiv. ([link][5])
- Xu, Y. and Vaziri-Pashkam, M. (2021). Limits to visual representational correspondence between convolutional neural networks and the human brain. Nature Communications, 12, 2065. ([link][6])
- Xu, Y. and Vaziri-Pashkam, M. (2021). Examining the coding strength of object identity and nonidentity features in human occipito-temporal cortex and convolutional neural networks. Journal of Neuroscience, 41, 4234-4252. ([pdf][7])

There are five data folders here, one for each of the experiments. Each folder contains the following:

- The visual images used
- An fMRI data file
- Responses from 14 different CNNs (see the project papers listed above for the CNNs and the sampled layers examined)

Each fMRI data file contains the responses from a single experiment:

- Responses are from the top 75 most reliable voxels in each of the 15 brain regions.
- The brain regions included are V1-V4, LOT, VOT, V3a, V3b, IPS0-IPS4, inferior IPS and superior IPS (see the fMRI papers above for details).
- Each data file contains similarity measures between the different object categories, computed from all the runs, from the odd and even halves of the runs separately, and from the odd and even halves of the runs together.
- Three similarity measures are included for each brain region: (1) correlations of the fMRI response patterns, (2) Euclidean distances between the fMRI response patterns, normalized by the number of voxels in each brain region, and (3) z-normalized Euclidean distances between the fMRI response patterns, normalized by the number of voxels in each brain region. (A sketch of these measures appears after this list.)
- The size of a data file may look like 16x16x15x6: the first and second 16 refer to the number of conditions in each experiment, 15 refers to the 15 brain regions included, and 6 is the total number of human subjects.
- The fMRI data contained in the folder "data_fMRI_no_cat_voxels" are from responses in VOT and LOT in which the category-selective voxels for bodies, faces and houses are excluded.
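For concreteness, here is a minimal Python sketch of how the three similarity measures could be computed from a conditions-by-voxels response-pattern matrix. The function name, the variable names, and the exact form of the voxel-count normalization are assumptions for illustration, not part of the released files.

```python
import numpy as np

def similarity_measures(patterns):
    """Three pairwise similarity measures over response patterns.

    patterns : (n_conditions, n_voxels) array, e.g. the responses of the
               75 most reliable voxels of one brain region.
    Returns three (n_conditions, n_conditions) matrices.
    """
    n_cond, n_vox = patterns.shape

    # (1) Correlations of the response patterns.
    corr = np.corrcoef(patterns)

    # (2) Euclidean distances, divided by the voxel count (one plausible
    # reading of "normalized by the number of voxels").
    diffs = patterns[:, None, :] - patterns[None, :, :]
    eucl = np.sqrt((diffs ** 2).sum(axis=-1)) / n_vox

    # (3) The same distance after z-normalizing each pattern (zero mean,
    # unit variance across voxels).
    z = (patterns - patterns.mean(1, keepdims=True)) / patterns.std(1, keepdims=True)
    zdiffs = z[:, None, :] - z[None, :, :]
    zeucl = np.sqrt((zdiffs ** 2).sum(axis=-1)) / n_vox

    return corr, eucl, zeucl
```

A 16x16x15x6 data file then holds one such 16x16 matrix per brain region and per subject, so a single subject's matrix for one region would be indexed as `data[:, :, region, subject]`.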
Each CNN data file contains the output from a single experiment:

- Three similarity measures are included for each sampled CNN layer: (1) correlations of the CNN response patterns, (2) Euclidean distances between the CNN response patterns, normalized by the number of units in that CNN layer, and (3) z-normalized Euclidean distances between the CNN response patterns, normalized by the number of units in that CNN layer.
- The size of a data file may look like 16x16x6: the first and second 16 refer to the number of conditions in each experiment, and 6 refers to the 6 CNN layers sampled.

There are five experiments. The 8 object categories are in the same order as they appear in the image directory and are always body, car, cat, chair, elephant, face, house and scissor. The 9 artificial categories are in the same order as they appear in the image directory.

- Format or Image Stats. This experiment examined responses to original vs. controlled real-world object categories. Among the experimental conditions, original images are the first 8 and controlled images are the second 8.
- Position. This experiment examined responses to real-world object categories appearing at the top vs. bottom positions. Among the experimental conditions, top images are the first 8 and bottom images are the second 8.
- Size. This experiment examined responses to real-world object categories appearing at small vs. large sizes. Among the experimental conditions, small images are the first 8 and large images are the second 8.
- SF. This experiment examined responses to real-world object categories appearing at the original spatial frequency (SF), high-SF and low-SF ranges. Note that only 6 object categories were included (i.e., body, car, chair, elephant, face and house). Among the experimental conditions, original images are the first 6, high-SF images are the next 6, and low-SF images are the last 6. IMPORTANT: the 6 object categories are NOT in the same order as they appear in the image directory or as in the other experiments. The correct order is: body, chair, elephant, face, house and car. (You need to use this order for the CNNs; otherwise you won't see a close match between the brain and CNN data. The brain data were originally saved in this order for some uninteresting historical reasons. A sketch of the reordering appears after this list.)
- Nature vs artificial. This experiment examined responses to 9 artificial object categories and 8 real-world object categories. Among the experimental conditions, artificial object images are the first 9 and real-world object images are the last 8.
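To make the SF reordering concrete, here is a minimal Python sketch that permutes a CNN similarity matrix from the image-directory category order into the order used by the SF brain data. The function and variable names are hypothetical; it assumes the CNN conditions follow the directory order described above, with three SF blocks (original, high SF, low SF) of 6 categories each.

```python
import numpy as np

# Category order in the image directory (and in the other experiments).
dir_order = ["body", "car", "chair", "elephant", "face", "house"]
# Order in which the SF-experiment brain data are saved.
brain_order = ["body", "chair", "elephant", "face", "house", "car"]

# Position of each brain-order category within the directory order.
perm = [dir_order.index(c) for c in brain_order]  # [0, 2, 3, 4, 5, 1]

def reorder_sf(cnn_rdm):
    """Permute an 18x18 SF-experiment similarity matrix (3 SF blocks x
    6 categories, categories in directory order) into the brain order."""
    full = np.concatenate([np.array(perm) + 6 * block for block in range(3)])
    return cnn_rdm[np.ix_(full, full)]
```

For a per-layer data file, this would be applied to each layer's matrix in turn, e.g. `reorder_sf(data[:, :, layer])`.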
The "Tolerance" folder contains all the data for the three tolerance measures reported for the brain and 8 CNNs, namely voxel/unit rank-order correlation, consistency in the RDM across a transformation, and SVM cross-decoding.

- For the CNN SVM data, 10 simulations (equivalent to 10 hypothetical subjects) were created by resetting the random noise level each time. See the paper for more details. For each CNN, the data have the structure [layers x decoding x exp x subjects], with layers (n) corresponding to the layers sampled for that CNN, decoding (2) with 1 for within-decoding and 2 for cross-decoding, exp (4) for the four experiments (in the order image stats - 1, position - 2, size - 3, and SF - 4), and subjects (10). Results labeled "original" are from using the original, unequated images; note that these exist only for the 2nd (position) and 3rd (size) experiments. (An indexing sketch appears after this list.)
- For the CNN rank-order and RDM correlation data, the data have the structure [exps x layers], following the above convention.
- For the human brain data, "rank_diff" and "rank_same" are rank-order correlations from a split-half analysis, in which "same" is the rank-order correlation within the same transformation across the odd and even runs and "diff" is across a transformation across the odd and even runs. See the paper for more details. "rdm_diff" and "rdm_same" follow the same convention. Each of these files has the structure [brain regions x subjects x exps]. Note that the four experiments have 6, 7, 7, and 10 subjects, respectively, so for experiments 1 to 3 the last few subject entries are empty. For the SVM results, the file has the structure [brain regions x decoding x subjects x exps], where in decoding, 1 is for within-decoding and 2 for cross-decoding. All other dimensions are the same as before.

Shown here are all the analyzed fMRI data. The fMRI voxel data for each subject, each region and each experiment can be found [here][8].
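To make these layouts concrete, here is a minimal Python sketch of indexing the SVM files. The array names, shapes, and helper functions are assumptions following the conventions above (0-based indices stand in for the 1-based labels in the text).

```python
import numpy as np

# cnn_svm:   [layers x decoding x exp x subjects], e.g. (n_layers, 2, 4, 10)
# brain_svm: [brain regions x decoding x subjects x exps], e.g. (15, 2, 10, 4)

def cnn_decoding_drop(cnn_svm, layer, exp):
    """Within- minus cross-decoding accuracy for one CNN layer and one
    experiment, averaged over the 10 simulated subjects.
    decoding index 0 = within, 1 = cross (1 and 2 in the text)."""
    within = cnn_svm[layer, 0, exp, :].mean()
    cross = cnn_svm[layer, 1, exp, :].mean()
    return within - cross

def brain_cross_decoding(brain_svm, region, exp, n_subjects):
    """Cross-decoding accuracy for one region and experiment, averaged
    over the first n_subjects entries (experiments 1-3 have fewer than
    10 subjects, so the trailing subject slots are empty)."""
    return brain_svm[region, 1, :n_subjects, exp].mean()
```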
[1]: https://drive.google.com/file/d/15_DHhxuoSA41i5O-QjwDFNkcwd-RNitc/view
[2]: https://drive.google.com/file/d/13ue7S4Id4zn7X621JgRW7IRwwz5xlWK7/view
[3]: https://www.biorxiv.org/content/10.1101/2020.03.12.989376v1
[4]: https://www.biorxiv.org/content/10.1101/2020.08.11.246934v1
[5]: https://www.biorxiv.org/content/10.1101/2020.08.11.246967v1
[6]: https://www.nature.com/articles/s41467-021-22244-7
[7]: https://drive.google.com/file/d/1q6fMw401JPBa_883jZsayxc1RiYAr2lt/view
[8]: https://osf.io/7u65t/