<p>This page provides the MATLAB-based source code for fMRI data processing in the auditory time window experiment, using the SPM12 package [1].</p> <p>Tonotopic information from each individual animal can be used to parcellate functional-anatomical areas of the auditory cortex. To characterise tonotopy using the BOLD response to different spectral frequencies, the sound stimuli were based on random-phase noise carriers with different pass-bands: 0.125-0.25 kHz, 0.25-0.5 kHz, 0.5-1 kHz, 1-2 kHz, 2-4 kHz, 4-8 kHz, and 8-16 kHz, resulting in stimuli that encompassed different spectral ranges. The carriers were amplitude modulated with a sinusoidal envelope (90% depth, 10 Hz) to evoke a robust response in the auditory system.</p> <p>To record data from the auditory system free of activity driven by the high-intensity noise of the MRI scanner, a 'sparse temporal' design was used. A pseudo-random sequence ensured that adjacent trials presented sound stimuli of different spectral frequencies. Each sound stimulus lasted 6 s and was presented during the last 6 s of the 10 s trial; this duration is sufficient for the BOLD response in the macaque auditory cortex to reach a plateau (Baumann et al., 2010). The monkey performed visual fixation and was rewarded immediately with juice delivered via a gravity-fed dispenser.</p> <p>In the pre-processing steps, rigid-body motion compensation was performed first. Next, image volumes from multiple sessions were combined by realigning all volumes to the first volume of the first session. The data were then spatially smoothed with a Gaussian kernel of 3 mm full-width-at-half-maximum (FWHM). A standard SPM regression model was used to partition the components of the BOLD response at each voxel. 
The five conditions, each with a different spectrotemporal correlation value, were modelled as effects of interest relative to the silent baseline, and their stimulus onsets were convolved with a canonical hemodynamic response function. The time series were then high-pass filtered with a cut-off of 120 s to remove low-frequency signal drifts, mainly due to scanner instabilities. Finally, the data were adjusted for global signal fluctuations (global scaling) to account for differences in system responses across sessions. </p> <p>In a general linear model (GLM) analysis of the combined sessions, which included the motion parameters, the voxel-wise responses were estimated as regression coefficients (beta values). The t-values for the contrasts of the different stimuli versus the silent baseline were also calculated. The data were masked to retain voxels with significant values for the combined stimuli versus the silent baseline (p&lt;0.001, uncorrected for multiple comparisons across the auditory cortex).</p> <p>The map of preferred responses to the different frequency bands is known as the 'best-frequency map'. It was calculated by identifying, voxel by voxel in each animal, which frequency condition showed the highest beta (regression coefficient) among all voxels whose sound-versus-silence contrast was significant (T&gt;3.1, p&lt;0.001, uncorrected for multiple comparisons across the auditory cortex). The resulting map represents the preferred frequency of each voxel.</p> <p>If you use this code, please cite this paper:</p> <p>[1] Pradeep Dheerendra, Simon Baumann, Olivier Joly, Fabien Balezeau, Sukhbinder Kumar, Christopher I Petkov, Alexander Thiele, Timothy D Griffiths, "The representation of time windows in primate auditory cortex", </p>
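<p>The best-frequency map computation described above can be sketched in MATLAB. This is a hypothetical illustration, not the released code: the T-map and beta-image file names, the condition numbering, and the output file name are assumptions and may differ from the actual repository; only the SPM12 image I/O functions (spm_vol, spm_read_vols, spm_write_vol) and the threshold from the text are taken as given.</p>

```matlab
% Hypothetical sketch: best-frequency map from SPM12 beta images.
% Assumed file names; the repository's actual names may differ.
nFreq = 7;       % number of frequency-band conditions (pass-bands above)
tThr  = 3.1;     % T > 3.1, i.e. p < 0.001 uncorrected (as in the text)

% Load the T-map for the combined sound-versus-silence contrast
Vt   = spm_vol('spmT_sound_vs_silence.nii');   % assumed file name
tMap = spm_read_vols(Vt);

% Stack the beta images of the frequency conditions into a 4-D array
betas = zeros([Vt.dim nFreq]);
for k = 1:nFreq
    Vb = spm_vol(sprintf('beta_%04d.nii', k));  % assumed beta numbering
    betas(:,:,:,k) = spm_read_vols(Vb);
end

% For each voxel, find the condition with the highest beta,
% then mask out voxels that fail the sound-versus-silence threshold
[~, bestFreq] = max(betas, [], 4);
bestFreq(tMap <= tThr) = NaN;

% Write the best-frequency map as a new image
Vo = Vt;
Vo.fname = 'best_frequency_map.nii';            % assumed output name
spm_write_vol(Vo, bestFreq);
```

<p>Each suprathreshold voxel in the output holds an integer 1-7 indexing its preferred pass-band, which is how the tonotopic parcellation described above is read off.</p>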