The processing of multisensory information is based upon the capacity of brain regions, such as the superior temporal cortex, to extract and combine shared information across modalities. However, it is still unclear whether the cortical representation of co-occurring auditory and visual events requires any prior audiovisual experience to develop and function. Intersubject correlation analysis measured the synchronization of brain responses elicited by the presentation of audiovisual, audio-only, or video-only versions of the same long-lasting narrative in different samples of sensory-deprived (congenitally blind and deaf) and typically developed individuals. Here, we show that the lack of any prior audiovisual experience does not alter the functional architecture of the superior temporal cortex. Moreover, synchronization is primarily mediated by low-level perceptual features, and a modality-independent topographical organization of temporal dynamics emerges within this region. The human superior temporal cortex thus appears to be endowed with an innate functional scaffolding that yields a common neural representation across coherent auditory and visual inputs. See also the GitHub page for code: https://github.com/giacomohandjaras/101_Dalmatians
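
As a rough illustration of the intersubject correlation (ISC) approach mentioned above, the sketch below computes leave-one-out ISC on simulated time series: each subject's response time course is correlated with the average time course of all remaining subjects. The array shapes, variable names, and use of plain Pearson correlation are illustrative assumptions for this sketch, not the analysis pipeline of the linked repository.

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation.

    data: array of shape (n_subjects, n_timepoints) holding the
    response time course of a single region for each subject.
    Returns one Pearson r per subject: the correlation between
    that subject's time course and the mean of all other subjects.
    """
    n_subjects = data.shape[0]
    isc = np.empty(n_subjects)
    for s in range(n_subjects):
        left_out = data[s]
        others = data[np.arange(n_subjects) != s].mean(axis=0)
        isc[s] = np.corrcoef(left_out, others)[0, 1]
    return isc

# Simulated example: 10 subjects, 300 timepoints, sharing a common
# stimulus-driven signal plus subject-specific noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)
data = shared + rng.standard_normal((10, 300))
print(leave_one_out_isc(data).round(2))
```

In this scheme, high ISC values indicate that a region's response is reliably driven by the shared stimulus rather than by idiosyncratic activity, which is what allows comparing synchronization across the audiovisual, audio-only, and video-only groups.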