Implementation Details
----------------------

This page describes how our lab implemented the procedures required by the official protocol for the RRR. It also describes and justifies any additions to or departures from that protocol. You can view the official protocol and the main project page for this RRR using these links:

- Official Protocol: [https://osf.io/6wvj4/][1]
- Main RRR project page: [https://osf.io/hgi2y/][2]
- Project page with all materials: [https://osf.io/pkd65/][3]

----------

#### Experimenters ####

**Daniel N. Albohn** - Master of Science, graduate student, 5+ years of research experience, including multiple years studying emotion, facial expressions, and psychophysiology. Collaboratively engaged in research with U. Hess and R. Adams on NIA research grants.

**Troy G. Steiner** - Bachelor of Arts, graduate student, 5+ years of research experience, including multiple years studying emotion, facial expressions, and evolutionary psychology. Collaboratively engaged in research with U. Hess and R. Adams on NIA research grants.

**José Soto** - PhD, Associate Professor of Psychology, licensed clinical psychologist, 15+ years of research experience studying the psychophysiology of emotion and its intersection with areas such as diversity, health, and culture.

**Ursula Hess** - PhD, Professor of Psychology, 25+ years of research experience investigating how social factors influence emotion perception; a leading expert on electromyography and other psychophysiological techniques, with over 70 peer-reviewed publications.

**Reginald B. Adams, Jr.** - PhD, Associate Professor of Psychology, Director of the Social Vision and Interpersonal Perception Laboratory, 15+ years of research experience studying emotion perception, person perception, and social cues, with nearly 60 peer-reviewed publications.
**Research Assistants** - Our lab employs a number of highly qualified and competent undergraduate research assistants who help with data collection and analysis. We have six RAs who are trained on the approved protocol and familiar with the experiment.

----------

#### Setting/Lab/Equipment ####

**Research lab** - Our lab space consists of two breakout rooms where participants can take the study individually and distraction-free. These breakout rooms are isolated from the rest of the lab and have tightly controlled non-fluorescent lighting and sound-damping interiors.

![Picture of SVIP Lab's breakout room][4]

**Equipment** - The lab space designated for this project consists of 2 Dell OptiPlex 7010 computers with sufficient hardware to run most modern programs and videos without issue. Each room/participant will be monitored and recorded by a Logitech HD Pro Webcam C920, positioned slightly below and above the participant so that it can capture the participant's full face while they perform the task. Stimuli (instruction videos) will be displayed using the open-source stimulus presentation software OpenSesame. In addition, we will use FaceReader 6.0 to conduct post hoc analyses of the participant recordings. Lastly, in a separate session, we will use a Biopac MP150 to collect and record psychophysiological data (e.g., heart and respiration rate, respiratory sinus arrhythmia).

----------

#### Sample, subjects, and randomization ####

**Target sample size:** We plan to test a minimum of 100 participants (50 per group).

**Target sample demographics:** Our subject population is drawn from the subject pool at The Pennsylvania State University. Most participants are undergraduate students in introductory psychology courses, with an age range of 18-20. The Penn State subject pool roughly approximates the [demographic breakdown][5] of the university.
**Minimum sample size after exclusions:** 50 participants per group (i.e., 100 total).

**Stopping rule(s):** We will stop initial data collection after we have run 50 participants with usable data per group (i.e., 100 total) through the experimental procedure. During data collection, we will distinguish usable from unusable data on a day-to-day basis. This stopping rule still meets the minimum sample size requirements of the protocol because we will not stop until we have reached 100 usable subjects (as evaluated daily). If, for whatever reason, we find during the data analysis phase that we have failed to collect data from 100 usable subjects, we will continue running participants until we have met this minimum goal.

**Randomization to conditions:** Subjects will be assigned to conditions in a pseudorandom fashion using Python's built-in random module. Participants are assigned a random number when the experiment instructions start; the script then assigns even numbers to the 'smile' condition and odd numbers to the 'pout' condition. Should we need to test additional participants, we will pseudorandomly assign them to groups based on how many participants each group still needs.

**Blinding to conditions:** Participants will be unaware that other participants were asked to hold the pen in a different manner, because our pseudorandomization process effectively segregates the groups: participants in group 1 will never be debriefed in the same room as participants from group 2. Additionally, to help maintain blinding outside of the lab, participants will be told during debriefing not to share the details of the experiment with anyone for the rest of the semester.

**Exclusion rules:** We will use the same exclusion criteria outlined in the official protocol. In addition, RAs will note any peculiarities or confounds observed through unobtrusive observation of the participant.
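As an illustration, the even/odd assignment described under "Randomization to conditions" could be sketched in Python as follows. This is a minimal sketch, not the lab's actual OpenSesame script; the function and variable names are our own.

```python
import random

def assign_condition(rng=random):
    """Pseudorandom condition assignment by parity.

    A random number is drawn when the experiment instructions start;
    even numbers map to the 'smile' condition and odd numbers to the
    'pout' condition. (Illustrative sketch only -- the range of the
    random draw is an assumption, not taken from the actual script.)
    """
    n = rng.randint(0, 9999)
    return "smile" if n % 2 == 0 else "pout"

# Example: assign a day's worth of participants.
conditions = [assign_condition() for _ in range(10)]
```

Because assignment depends only on the parity of the draw, each participant lands in either condition with probability one half, independently of earlier participants.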
These include: a) the participant appears overly tired and/or intoxicated, such that their ability to concentrate on the task or to interact with the RA in a meaningful way is impaired; b) failure to complete the experimental procedure in a timely or appropriate fashion (i.e., exceeding the designated 30-minute session slot); c) a procedural error in which the RA fails to follow the procedures outlined in the approved protocol; and/or d) an equipment malfunction, such as instruction videos failing to appear or data collection programs quitting unexpectedly.

----------

#### Software/Code ####

Our lab will be using the materials outlined in the official protocol. Instructions for each condition will be displayed using the OpenSesame stimulus presentation software, which allows informed consent to be collected electronically as well as coded randomization of conditions. All software has been checked and works on laboratory equipment.

----------

#### Differences from the official protocol ####

Our replication study will deviate from the approved and supplied protocol in four ways, none of which alters the outlined replication study until after the replication data have been obtained. The deviations are as follows:

1. The testing packet will be placed in front of the participant on a slightly raised desk. This alteration is necessary so that the camera can see the participant's face while they fill out the materials. The camera will be positioned slightly below and above the testing packet in an unobtrusive place. Because participants look down in front of them while completing the materials, their faces would otherwise not be visible on the recording, hindering our post hoc analysis of facial mimicry and action units.

2. Following completion of the protocol replicating Strack et al. (1988), participants will be invited back for a second session approximately one week after the first.
Participants will fill out several scales related to their perception of and preference for humor, as well as empathy. These scales include: the Situational Humor Response Questionnaire (Martin & Lefcourt, 1984), the Sense of Humor Questionnaire (Svebak, 1974), the Coping Humor Scale (Martin, 1996), and the Empathy Quotient Scale (Baron-Cohen & Wheelwright, 2004). We predict that an individual's preference for and outlook toward humor may be a mediating factor for the facial feedback hypothesis. Specifically, we suspect that: 1) those with overall higher sensitivity to humor will respond with greater facial feedback, as measured by action unit intensity, and 2) those high in situational humor style will respond with greater facial feedback to humorous cartoons. With regard to empathy, we suspect that those who score higher on the Empathy Quotient will show higher sensitivity to facial feedback in either condition.

3. After acquisition of the data following the provided protocol, video analysis will be conducted using FaceReader 6. FaceReader will be used to determine Action Units (AUs) while participants were engaged in the study. We predict that participants who produce a genuine Duchenne smile, as determined by the co-occurrence of AUs 6 and 12 (Ekman, Friesen, & Hager, 2002), will rate humorous videos as funnier and more enjoyable than individuals who do not show a genuine smile. These predictions are in line with the results obtained by Soussignan (2002).

4. After acquisition of the data following the provided protocol, we will invite participants back into the lab (~1 week later) to obtain resting heart and respiratory rate. These psychophysiological measures will be used to compute respiratory sinus arrhythmia (RSA). Interestingly, Soussignan (2002) collected both heart and respiratory rate but did not compute RSA. We believe that RSA is integrally tied to the facial feedback hypothesis insomuch as higher baseline RSA has been linked to greater social functioning.
We predict that those with higher baseline RSA will mimic the induced expression to a greater extent and, in turn, show higher reactivity to facial feedback. Mimicry will be assessed via FaceReader's FACS module.

[1]: https://osf.io/6wvj4/
[2]: https://osf.io/hgi2y/
[3]: https://osf.io/pkd65/
[4]: http://i.imgur.com/ljagLfa.jpg
[5]: http://admissions.psu.edu/apply/statistics/
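The Duchenne criterion in deviation 3 (co-occurrence of AU6 and AU12) could be operationalized roughly as below once AU intensities have been exported from the video analysis. The per-frame dictionary keys, the 0-1 intensity scale, and the 0.5 threshold are assumptions made for illustration, not FaceReader 6's documented export format or the lab's actual scoring rule.

```python
def is_duchenne(frame, threshold=0.5):
    """Flag a frame as a Duchenne smile when AU6 (cheek raiser) and
    AU12 (lip corner puller) co-occur above an intensity threshold
    (Ekman, Friesen, & Hager, 2002). Key names, scale, and threshold
    are illustrative assumptions about the exported AU data.
    """
    return (frame.get("AU06", 0.0) >= threshold
            and frame.get("AU12", 0.0) >= threshold)

# Hypothetical exported frames for one participant.
frames = [
    {"AU06": 0.8, "AU12": 0.9},  # both units active: Duchenne smile
    {"AU06": 0.1, "AU12": 0.9},  # AU12 without AU6: non-Duchenne smile
]
duchenne_rate = sum(is_duchenne(f) for f in frames) / len(frames)
```

A participant could then be classified as showing a genuine smile if, for example, some minimum proportion of frames during the cartoon-rating task passes this check; the exact aggregation rule would be fixed before analysis.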