**ABSTRACT**

Laughter is ubiquitous, universal, and variable. This dissertation tests a new social functional account that explains the many physical forms laughter takes and the many social contexts in which it occurs. In contrast to previous perspectives that emphasize the internal state of the producer or the eliciting context, the current social functional account distinguishes laughter according to the behavioral intention it conveys and the subsequent behavioral response it elicits in the recipient. Laughter is a communicative signal that solves (at least) three basic social tasks that can occur across social contexts and relationships. The first proposed social function of laughter, both evolutionarily and developmentally, is to reward the behavior of others and reinforce the ongoing interaction. The second task, accomplished by producing modified laughter, is the easing of social tension and the signaling of affiliation and nonthreat. A third form of laughter non-confrontationally enforces social norms, negotiates status, and corrects undesirable behavior in others by conveying dominance or superiority. We propose that people modify the physical properties of their laughter in the service of the three social tasks, and that the acoustic modulations follow principles common to human and nonhuman vocal signaling.

Three studies examined whether laughter is associated with the three social tasks and, if so, how its acoustic form is modulated to accomplish each task. Participants rated the extent to which laughs convey the social functions (Study 1), judged the similarity of laughs to validated smiles that accomplish the social tasks (Study 2), and produced natural laughter with a partner while watching and discussing videos that elicit responses relevant to the three social tasks (Study 3). We complemented traditional inferential statistics with machine learning algorithms trained to predict the social functions accomplished by instances of laughter.

In Study 1, perceivers' judgments of how rewarding, affiliative, and dominant laughter sounded were guided by distinct patterns of acoustic variables, which were in turn used by a machine learning model to accurately estimate the perceived social function of the laughs. This study suggested that perceivers infer nuanced social information from laughter based on its acoustic form. Study 2, which relied entirely on non-linguistic judgments of laughter-smile similarity, yielded a laughter similarity embedding that preserved the laughs' social functional category assignments from Study 1 participants' linguistic judgments. Study 2 therefore provided convergent evidence about the perceived social meaning of the laughter. Study 3 was the first study to test the social functional account of either smiles or laughter using naturally occurring signals. Laughs generated by participant pairs across three functionally relevant contexts differed on several acoustic variables, some of which converged with the perceiver-based data in Study 1.

Throughout, we connect existing findings in the human and nonhuman vocalization literature to the current work's findings on the acoustic properties of reward, affiliation, and dominance laughter. In sum, this research accounts for some of the substantial variability in the physical form of laughter and, more generally, demonstrates the predictive power of a social functional approach to emotion expressions.
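For readers who want a concrete sense of the machine learning step described above, the sketch below shows one general way to train a classifier that predicts a laugh's social function from acoustic features. It is illustrative only: the file name, column names, and the choice of a random forest are assumptions made for the example, not the dissertation's actual pipeline (the real analyses live in the "Study 1" folder).

```r
# Illustrative sketch only -- not the actual Study 1 analysis.
# Assumes a data frame with one row per laugh, acoustic predictors
# (e.g., pitch mean, duration), and a social-function label column.
library(randomForest)

laughs <- read.csv("laugh_acoustics.csv")               # placeholder file
laughs$function_label <- factor(laughs$function_label)  # reward / affiliation / dominance

# Train a random forest to predict social function from the acoustic
# variables; out-of-bag (OOB) error estimates classification accuracy.
fit <- randomForest(function_label ~ ., data = laughs, ntree = 500)

print(fit)        # confusion matrix and OOB error rate
varImpPlot(fit)   # which acoustic variables drive the predictions
```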
**NOTES ON FILES**

* Analyses and their output are contained in .html files in each study's storage folder. The analysis scripts are in R Markdown (.Rmd) format. To run a script, first download all of the data files associated with that study (in the study's storage folder). Not all of the necessary libraries are listed at the top of each script yet, so you may need to install some R packages (see the sketch after this list).
* All **Study 1** files except the machine learning analyses can be found here: [Study 1 materials][1]. The machine learning analyses are in this project's "Study 1" folder.
* **Study 2** data and analysis files are in the "Study 2" folder. The laugh stimuli are a subset of Study 1's laughs (available at the link above), and the smile stimuli are compressed in a zip file in Study 2's storage folder.
* **Study 3** data and analysis files are in the "Study 3" folder. The participant laugh clips are in a Box folder [here][2]. Some of the humorous video files are from [Cowen & Keltner, 2017][3] and are not mine to distribute, but I kept their original filenames, and the video library can be requested using this [link][4]. The remaining Study 3 videos came from YouTube and are available at the following Box links: [affiliation][5], [reward][6], and [dominance][7].

[1]: https://osf.io/ca66s/
[2]: https://uwmadison.box.com/s/3jvvilu9hii1zbo5y5jjqtlxb32ppgjl
[3]: http://www.pnas.org/content/114/38/E7900#sec-9
[4]: https://goo.gl/forms/XErJw9sBeyuOyp5Q2
[5]: https://uwmadison.box.com/s/bb7sg64ghpjoxyfa9su9avds7wp3ivd9
[6]: https://uwmadison.box.com/s/iknn1by62pcxxt50bcgyfojjxjrfkrg2
[7]: https://uwmadison.box.com/s/u6ziamqyohtbj98s9f6l5zd9ki9xie04
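If a script fails because of a missing library, something like the snippet below will install whatever isn't already present and render the report. This is a minimal sketch: the package names and the .Rmd filename are placeholders, not the actual ones used in the analyses; check the library() calls at the top of the specific script you downloaded.

```r
# Install any packages the script needs that aren't already present.
# The names here are placeholders -- match them to the library() calls
# at the top of the .Rmd you are running.
pkgs <- c("rmarkdown", "tidyverse")
missing <- setdiff(pkgs, rownames(installed.packages()))
if (length(missing) > 0) install.packages(missing)

# Render the script to .html (run from the folder containing the study's
# data files; the filename below is a placeholder).
rmarkdown::render("study1_analyses.Rmd")
```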