Sensitivity to emotional meaning in speech and music underlies the integration of auditory cues in cross-modal emotion recognition. Given the emotion recognition and multisensory integration difficulties reported in ASD, this study examined the extent to which facial emotion recognition is facilitated by integrating auditory emotional cues in ASD. Under a cross-modal affective priming paradigm, participants identified emotions in faces or face-like objects (targets) after hearing a spoken or sung word (prime) conveying either a congruent or an incongruent emotion. Results from 21 Cantonese-speaking adults with ASD and 17 controls showed that facial emotion recognition was significantly facilitated by both spoken and sung primes in both groups. However, the magnitude of the priming effects was greater in the control group than in the ASD group. Across target types, there was also a significant association between group and error patterns corresponding to the auditory primes: controls were more likely than the ASD group to produce a response that matched the auditory prime. Altogether, controls were influenced by incongruent auditory cues to a greater extent than the ASD group. Thus, controls, but not individuals with ASD, integrated auditory and visual cues for emotion recognition.