Multisensory integration of musical emotion perception in singing
- Elke B. Lange
- Jens Fünderich
- Hartmut Grimm
Category: Project
Description: We investigated how visual and auditory information contribute to emotion communication during singing. Singers applied two different facial expressions (expressive/suppressed) to songs from their opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented to them in three different modes: auditory, visual, and audio-visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio-visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness served as a control: the uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers' orofacial movements are dominated by sound production, their facial expressions can communicate emotions composed into the music, and observers do not rely on the auditory information instead. Studies such as ours are important for understanding multisensory integration in applied settings. For the full paper, see: Psychological Research, https://doi.org/10.1007/s00426-021-01637-9