Sensitivity to the emotional meaning of speech and music underlies the integration of auditory cues in cross-modal emotion recognition. Given the emotion recognition and multisensory integration difficulties reported in ASD, this study examined the extent to which integrating auditory emotional cues facilitates facial emotion recognition in ASD. Under a cross-modal affective priming paradigm, participants identified emotions in faces or face-like objects (targets) after hearing a spoken or sung word (prime) conveying either a congruent or an incongruent emotion.
Results from 21 Cantonese-speaking adults with ASD and 17 controls showed that, overall, facial emotion recognition was significantly facilitated by both spoken and sung primes in both groups. The magnitude of the priming effect, however, was greater for the control group than for the ASD group. Across target types, there was also a significant association between group and error patterns corresponding to the auditory primes: controls were more likely than the ASD group to make a response that matched the auditory prime. Altogether, controls were influenced by incongruent auditory cues to a greater extent than the ASD group. Thus, controls, but not individuals with ASD, integrated auditory and visual cues during emotion recognition.