# License

Files without an explicit license or source notice are licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/.

The R scripts provided in this project are copyrighted by the authors under the GNU General Public License. You can redistribute and/or modify these scripts under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. These scripts are distributed without any warranty, not even the implied warranty of merchantability or fitness for a particular purpose. See the GNU General Public License for more details. To view a copy of this license, visit http://www.gnu.org/licenses/

# Project summary

In conversation, recognizing the speaker's social action (e.g., a request) early may help potential next speakers understand the intended message and plan a timely response. Human language is a multimodal phenomenon, and by now a substantial number of studies have demonstrated the contribution of the body to the communication of meaning. However, comparatively few studies have investigated (non-emotional) conversational visual signals coming from the speaker's face, and very little is known about how facial signals contribute to the communication of social actions. Here, we contribute to filling this gap by asking how the production of different facial signals maps onto the expression of two fundamental social actions in conversation: asking questions and providing responses. We studied the distribution of a wide range of facial signals across 6778 questions and 4553 responses by annotating a corpus of 34 dyadic face-to-face Dutch conversations.
Moreover, we analyzed facial signal clustering to find out whether there are different visual signatures (i.e., specific combinations of co-occurring facial signals) that map onto questions versus responses. Finally, we considered the timing of these facial signals with regard to speaking turns.

# This project contains:

## Poster

- Contains the latest poster of the project

## Method

- *Reliability* contains the original Cohen's kappa pairwise comparisons between coders, an overview of the reliability scores per facial signal, and the R script used for the reliability analysis in markdown format

## Results

- *Data* contains a description of the data
- *Script* contains the R script used for the data analysis in html format

## Software versions:

- R (version 3.6.1; R Core Team, 2019)
- RStudio (version 1.2.5019; RStudio Team, 2019)
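The reliability analysis mentioned above rests on pairwise Cohen's kappa between coders. As a rough illustration of what such a comparison computes (the project's actual reliability script is not reproduced here, and all variable names and example labels below are hypothetical), an unweighted Cohen's kappa for two coders can be sketched in base R:

```r
# Minimal sketch of an unweighted Cohen's kappa for two coders.
# kappa = (p_observed - p_expected) / (1 - p_expected), where
# p_observed is the raw agreement rate and p_expected is the
# agreement expected by chance from the coders' marginal distributions.
cohens_kappa <- function(coder1, coder2) {
  stopifnot(length(coder1) == length(coder2))
  # Use a shared level set so the contingency table is square
  levels_all <- union(unique(coder1), unique(coder2))
  tab <- table(factor(coder1, levels = levels_all),
               factor(coder2, levels = levels_all))
  n <- sum(tab)
  p_observed <- sum(diag(tab)) / n                        # raw agreement
  p_expected <- sum(rowSums(tab) * colSums(tab)) / n^2    # chance agreement
  (p_observed - p_expected) / (1 - p_expected)
}

# Hypothetical example: two coders annotating an eyebrow-raise signal
coder_a <- c("raise", "none", "raise", "raise", "none", "none")
coder_b <- c("raise", "none", "none",  "raise", "none", "raise")
round(cohens_kappa(coder_a, coder_b), 2)  # -> 0.33
```

In practice a dedicated package (e.g., `irr`) would typically be used instead of hand-rolled code, but the base-R version makes the chance-correction step explicit.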