Introduction: The ability to regulate affective states and interact with others may be influenced by an individual’s capacity to efficiently process multisensory socio-emotional cues. Despite evidence of altered multisensory processing in individuals with socio-emotional deficits (e.g. autism), few studies have examined multisensory processing in relation to trait anxiety. Research comparing nonclinical individuals with low vs. high trait anxiety reported that multisensory processing of faces and voices is modulated by the degree of anxiety: there is greater weighting of the modality in which negative information is presented, even when information from that modality is not relevant to the task. The present study aims to extend this research using a similar design with face-voice stimuli, but also including bodily and non-social stimuli, to determine whether anxiety-related differences in multisensory processing are specific to the integration of particular socio-emotional signals or reflect a more domain-general deficit.

Methods: Non-clinical adults with varying levels of trait anxiety will categorise visual, auditory and audio-visual stimuli as happy or sad. Face-voice stimuli will be used to replicate the finding that individuals with higher anxiety are less able to recognise the emotion conveyed by individual cues when presented with incongruent multisensory information. Body motion-voice stimuli will be used to determine whether this effect extends to other types of social stimuli, and flash-beep stimuli to determine whether it extends to non-social stimuli. We aim to show whether lower and higher trait anxiety differ in how cue modality and the congruency of cues in audio-visual displays affect sensory modulation.

Approach for statistical analysis: Participants will be split into two groups based on whether their trait anxiety score falls above or below the median for the whole sample.
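The planned median split can be sketched as follows. This is a minimal illustration using only the Python standard library; the participant IDs and trait-anxiety scores are invented for the example, and the handling of scores exactly at the median is an assumption, since the plan above does not specify it.

```python
from statistics import median

def median_split(scores):
    """Split participants into (lower, higher) trait-anxiety groups
    at the sample median.

    Participants scoring exactly at the median are assigned to the
    lower group here; this tie-handling rule is an assumption, not
    taken from the analysis plan.
    """
    m = median(scores.values())
    lower = [p for p, s in scores.items() if s <= m]
    higher = [p for p, s in scores.items() if s > m]
    return lower, higher

# Illustrative trait-anxiety scores for six hypothetical participants.
scores = {"p1": 28, "p2": 35, "p3": 41, "p4": 47, "p5": 52, "p6": 60}
lower, higher = median_split(scores)
print(lower)   # ['p1', 'p2', 'p3']
print(higher)  # ['p4', 'p5', 'p6']
```

With an even sample size the median falls between two observed scores, so no participant sits exactly on the boundary; with an odd sample size the tie rule above decides the group of the median-scoring participant.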
The DVs will be accuracy and reaction time, and mixed ANOVAs will be used to analyse the data, with anxiety (lower/higher) as a between-participants factor and emotion (angry/happy), modality (visual, auditory or audio-visual), congruency for bimodal stimuli (congruent/incongruent) and stimulus type (face-voice, body motion-voice, non-social) as within-participants factors. We expect between-group differences for all stimuli, but most markedly for the social stimuli, given the social deficits associated with anxiety.
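The factorial structure described above can be made concrete by enumerating the design cells. This is a sketch only, assuming that congruency applies solely to the audio-visual (bimodal) trials, as stated in the plan; the factor labels follow the prose above.

```python
from itertools import product

emotions = ["angry", "happy"]
modalities = ["visual", "auditory", "audio-visual"]
stimuli = ["face-voice", "body motion-voice", "non-social"]
congruency = ["congruent", "incongruent"]

# Build every within-participant condition; congruency is defined
# only when both modalities are present (audio-visual trials).
cells = []
for emo, mod, stim in product(emotions, modalities, stimuli):
    if mod == "audio-visual":
        for cong in congruency:
            cells.append((emo, mod, cong, stim))
    else:
        cells.append((emo, mod, None, stim))

print(len(cells))  # 24: 2 emotions x 3 stimulus types x (2 unimodal + 2 bimodal congruency cells)
```

Each participant would contribute data to all 24 cells, with the anxiety group (lower/higher) varying between participants.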