Here we provide BOLD data from our paper:
Liu, T., Cable, D. & Gardner, J. L. (2018) Inverted encoding models of human population response conflate noise and neural tuning width. *Journal of Neuroscience*, 38, 398-408. [Link][1]
If you use these data in a publication, please cite the above paper and the following data citation:
Liu, T., & Gardner, J. L. (2018). BOLD data for testing contrast invariant orientation tuning in human visual cortex. Retrieved from https://doi.org/10.17605/OSF.IO/9D3EX
The goal of this study was to evaluate how well inverted encoding models (IEMs) can recover underlying changes in neural tuning. We conducted a BOLD imaging experiment in which human subjects viewed oriented gratings (8 possible orientations) at two contrast levels (low and high). We used an IEM to reconstruct channel response functions for orientation and found broader functions for low than for high contrast, which violates the neurophysiological finding of contrast-invariant orientation tuning. For details of the methods and results, please consult the published paper. Here we provide the data for public dissemination. Our hope is that this dataset will serve as a benchmark for further computational modeling of BOLD data to infer neural population responses.
To facilitate future modeling work, we are providing processed BOLD data from the primary visual cortex (V1), instead of raw images. Specifically, the data contain the BOLD response amplitude for each V1 voxel on each trial, referred to as "instances" (see the paper for analytical details). The following is an explanation of the data structure included in this distribution. Note that the files are stored in Matlab format, i.e., as .mat files.
There were six subjects in the study, with each subject's data contained in a separate directory (e.g., s005). The name of the mat file is the same for all subjects (decon1gIns). Loading a mat file gives a data structure called "s":
>> load decon1gIns
>> s
s =
instanceMethod: 'deconv'
doClassification: 1
doForwardModel: 1
rebuild: 1
dispFig: 0
verbose: 1
canonicalType: []
args: {'instanceMethod=deconv' 'rebuild=1'}
saveName: 'decon1gIns'
whichCond: 3
mode: 'WI'
lvf: [1x1 struct]
rvf: [1x1 struct]
bvf: [1x1 struct]
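To work with all six subjects at once, a loop like the following Matlab sketch can load each subject's file into a common cell array. Only the directory s005 is known from the example above; the other directory names are placeholders to be filled in from your local copy of the dataset.

% Minimal sketch: load every subject's decon1gIns.mat into one cell array.
subjectDirs = {'s005'};                        % extend with the other five subject directories
allData = cell(1, numel(subjectDirs));
for iSubj = 1:numel(subjectDirs)
  tmp = load(fullfile(subjectDirs{iSubj}, 'decon1gIns.mat'));
  allData{iSubj} = tmp.s;                      % each file contains the structure "s"
end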
The structure s has a number of fields, with the most important ones being s.lvf and s.rvf: s.lvf contains data for analysis conditioned on the left visual field stimulus, and s.rvf contains the same for the right visual field stimulus. Note that in the experiment, two gratings were presented simultaneously, one in the left visual field and one in the right visual field, with their orientation and contrast randomized independently. Thus the lvf field uses the labels of the stimulus in the left visual field, and the rvf field uses the labels of the stimulus in the right visual field.
The stimulus conditions can be found in the stimNames subfield:
>> s.lvf.stimNames
ans =
Columns 1 through 4
'contrast1=0.2 an...' 'contrast1=0.2 an...' 'contrast1=0.2 an...' 'contrast1=0.2 an...'
Columns 5 through 8
'contrast1=0.2 an...' 'contrast1=0.2 an...' 'contrast1=0.2 an...' 'contrast1=0.2 an...'
Columns 9 through 12
'contrast1=0.8 an...' 'contrast1=0.8 an...' 'contrast1=0.8 an...' 'contrast1=0.8 an...'
Columns 13 through 16
'contrast1=0.8 an...' 'contrast1=0.8 an...' 'contrast1=0.8 an...' 'contrast1=0.8 an...'
Contrast can be low (0.2) or high (0.8), and orientation can be one of 8 possible values, for a total of 16 conditions. For example, the 1st condition is a low contrast, 0 deg grating (in the left visual field).
>> s.lvf.stimNames{1}
ans =
contrast1=0.2 and orientation1=0
And the 16th condition is a high contrast, 157.5 deg grating (in the left visual field).
>> s.lvf.stimNames{16}
ans =
contrast1=0.8 and orientation1=157.5
The same labeling applies to the right visual field stimulus (in s.rvf).
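If numeric labels are more convenient than the string descriptions, the contrast and orientation values can be parsed out of stimNames directly. A minimal Matlab sketch, assuming every entry follows the 'contrastN=value and orientationN=value' pattern shown above (the variable names here are ours, not part of the dataset):

names = s.lvf.stimNames;                       % or s.rvf.stimNames
nCond = numel(names);
contrast = nan(1, nCond);
orientation = nan(1, nCond);
for iCond = 1:nCond
  % numeric tokens, in order: field index, contrast, field index, orientation
  vals = regexp(names{iCond}, '[\d.]+', 'match');
  contrast(iCond) = str2double(vals{2});
  orientation(iCond) = str2double(vals{4});
end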
Now onto the BOLD responses for all voxels and trials, i.e., the instances. These are stored in the roi subfield, e.g.,
>> s.lvf.roi
ans =
[1x1 struct] [1x1 struct]
There are two ROIs, one for the left hemisphere and one for the right hemisphere. The name of each ROI can be found in the name subfield of the roi struct, e.g.,
>> s.lvf.roi{1}
ans =
color: 'magenta'
coords: [4x1396 double]
date: '14-Jul-2014 14:23:53'
name: 'lV1res'
[other output omitted]
This one is for left V1 (and s.lvf.roi{2} is for right V1). Note that the BOLD responses of both left and right V1 are modeled here by the stimulus in the left visual field. Similarly, s.rvf contains instances from both left and right V1. This design allowed us to examine contralateral and ipsilateral responses, with the latter serving as a control.
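As a quick sanity check, the ROI names under both visual-field conditions can be printed to confirm which hemisphere is contralateral to the modeled stimulus. A minimal sketch, assuming the roi/name layout shown above:

for vf = {'lvf', 'rvf'}
  for iRoi = 1:numel(s.(vf{1}).roi)
    fprintf('%s, ROI %d: %s\n', vf{1}, iRoi, s.(vf{1}).roi{iRoi}.name);
  end
end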
The actual instances are stored in the instance.instances subfield, e.g.,
>> s.lvf.roi{1}.instance
ans =
instances: {1x16 cell}
info: [1x1 struct]
instancesSTE: {1x16 cell}
r2: [78x1 double]
instanceVol: {1x16 cell}
fit: [1x1 struct]
There are several fields with auxiliary information, but the most critical is the instances field:
>> s.lvf.roi{1}.instance.instances
ans =
Columns 1 through 4
[27x78 double] [27x78 double] [27x78 double] [27x78 double]
Columns 5 through 8
[27x78 double] [27x78 double] [27x78 double] [27x78 double]
Columns 9 through 12
[27x78 double] [27x78 double] [27x78 double] [27x78 double]
Columns 13 through 16
[27x78 double] [27x78 double] [27x78 double] [27x78 double]
The instances field is a cell array that contains the instances for each of the 16 conditions (corresponding to the stimulus conditions; see stimNames above). Each cell here is a 27x78 array, for 27 trials and 78 voxels from left V1. Note that 27 is the number of trials per stimulus condition in our experiment, which is fixed across subjects and ROIs, whereas the number of voxels varies across subjects and ROIs.
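For many analyses (e.g., classification or fitting an encoding model), it can be convenient to stack these cells into a single trials-by-voxels matrix with a condition label per trial. A minimal sketch for one ROI and one visual-field condition; the variable names are ours and not part of the dataset:

inst = s.lvf.roi{1}.instance.instances;        % left V1, left-visual-field labels
nCond = numel(inst);                           % 16 stimulus conditions
data = [];                                     % becomes (nCond*27) x nVoxels
condLabel = [];                                % condition index (1-16) per trial
for iCond = 1:nCond
  data = [data; inst{iCond}];                  % 27 trials x nVoxels per condition
  condLabel = [condLabel; repmat(iCond, size(inst{iCond}, 1), 1)];
end

Each row of data then pairs with an entry of condLabel, which indexes into stimNames, so the contrast and orientation for every trial can be looked up as described above.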
[1]: https://doi.org/10.1523/JNEUROSCI.2453-17.2017