
Contributors:
  1. Emery Brown
  2. Lisa Feldman Barrett


Category: Project

Description: It is commonly hypothesized that there is a reliable, specific mapping between certain emotional states and the facial movements that express those states. This hypothesis is often tested by asking untrained participants to pose the facial movements they believe they use to express emotions during generic scenarios. Here, we test this hypothesis using photographs of facial configurations posed by professional actors in response to contextually rich scenarios. The scenarios portrayed in the photographs were rated for the extent to which they evoked an instance of 13 emotion categories, and actors' facial poses were coded for their specific movements. Both unsupervised and supervised machine learning analyses find that actors portrayed emotional states with variable facial configurations; instances of only three emotion categories (fear, happiness, and surprise) were portrayed with moderate reliability and specificity. The photographs were separately rated for the extent to which they portrayed an instance of the 13 emotion categories, both when presented alone and when presented with their associated scenarios, revealing that emotion inferences also vary in a context-sensitive manner. Together, these findings suggest that expressions and perceptions of emotion are tailored to situations and transcend stereotypes of emotional expressions. Future research can examine contextual variation in emotional expression and perception by incorporating dynamic stimuli and studying a broader range of cultural contexts.

License: CC-By Attribution 4.0 International
