**Pilot**

With the boom of social media and the ability to publicly post and comment on others' posts, cyberbullying has become a ubiquitous evil. Nearly 40% of Americans, and 59% of American teenagers, have experienced online harassment and/or cyberbullying (Pew Research Center, 2017). While there are multiple possible explanations for the pervasiveness of this phenomenon, the present study tests the hypothesis that **cyberbullying is facilitated by the limits of writing as a mode of expression**. It is plausible that speaking, rather than typing, could reduce the tendency to bully. **If people use voice-to-text software and have to hear themselves speak, will they be less likely to bully?** The present political and social climate has shown that online threats can escalate into actions resulting in death and destruction, underscoring the dire need for strategies to reduce the prevalence and/or severity of such comments.

We intend to continue piloting a simple intervention to test this hypothesis. Subjects, recruited through Amazon's Mechanical Turk and tested on FindingFive, will be instructed to read a provocative paragraph posted by an unfamiliar Facebook user and then post a comment. Participants in the control group will simply type their comment; participants in the treatment group will be instructed to speak their comment aloud. Afterwards, participants will complete a short survey to determine their empathy quotient (Lawrence, Shaw, Baker, Baron-Cohen, & David, 2004) and disclose demographic information. We will use the R package tidytext to perform sentiment analysis and explore the hypothesis that hearing your own voice say cruel words could mitigate the cruelty of online comments. **While the methods, the platform through which we conduct the study, and the additional questionnaires are subject to slight modifications, this work would be a continuation of and improvement upon the pilot study that we conducted during the Diverse Intelligences Summer Institute in the summer of 2020.**

In the pilot study, participants recruited from Amazon's Mechanical Turk were shown a Facebook-like post by a woman who wants to send her recently adopted dog back to the shelter. The woman complains about the dog's abilities, blames the shelter, and refuses to take responsibility for the situation; the prompt was designed to elicit a strong response. Participants were instructed to read the post and then leave a comment, with the method of responding varying by condition. A total of 39 participants took part in the study. The 28 participants in the control group were instructed to type a response using a standard computer keyboard, while the 11 participants in the treatment group were asked to leave a comment by recording their voice. For both groups, the study concluded with the collection of demographic information, an attention check, and the empathy quotient assessment (Lawrence, Shaw, Baker, Baron-Cohen, & David, 2004).

The comments from the two groups were analyzed in multiple ways. First, the comments were scored via sentiment analysis using the R package tidytext. Sentiment analysis measures the emotional attitude of a text: a positive value indicates a positive, happier, calmer attitude, while a negative value indicates an aggressive, sad, negative attitude.
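As a concrete illustration of this scoring step, the sketch below tokenizes comments and sums per-word sentiment values with tidytext. The data frame, its columns, and the choice of the AFINN lexicon are assumptions for illustration; the wiki does not specify which lexicon the pilot used.

```r
# Minimal sketch of a tidytext sentiment pipeline, assuming comments live in a
# data frame with hypothetical columns `id` and `comment`. The AFINN lexicon
# (signed per-word values from -5 to +5) is assumed here because it yields the
# kind of signed scores described above; it requires the textdata package on
# first use.
library(dplyr)
library(tidytext)

comments <- tibble::tibble(
  id      = 1:2,
  comment = c("What a sweet, wonderful dog. Please give her time to adjust.",
              "You are a horrible, selfish person and should never own a pet.")
)

sentiment_scores <- comments %>%
  unnest_tokens(word, comment) %>%                      # one row per word
  inner_join(get_sentiments("afinn"), by = "word") %>%  # attach per-word values
  group_by(id) %>%
  summarise(sentiment = sum(value), .groups = "drop")   # one score per comment

sentiment_scores  # positive totals read as nicer, negative as meaner
```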
There was no significant difference between the average sentiment scores of the two groups (Mann-Whitney test, p = 0.95); however, the variance in the treatment group (5.81) was higher than in the control group (4.41), suggesting more variation in reactions when responding via voice than via text. Additionally, the researchers manually scored the attitude of the comments on a 5-point scale from "very mean" to "very nice," where a higher score indicates a more cooperative, nicer comment and a lower score a less helpful, meaner reaction. While the sample size of the pilot was insufficient for formal inferential statistics, a lower proportion of participants left "mean" or "very mean" comments in the treatment group than in the control group. Lastly, the average niceness score in the control group (3.14) was slightly lower than in the treatment group (3.4), indicating that the treatment might produce nicer comments. **While these results are inconclusive, the preliminary data suggest that using voice to leave comments could affect the valence of online comments.** We anticipate that, with a larger sample size, the effect would be statistically significant. Collection of the initial pilot data was funded by the Templeton World Charity Foundation as part of the Diverse Intelligences Summer Institute.
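The group comparison itself can be run with base R's `wilcox.test()`, which implements the Mann-Whitney rank-sum test used in the pilot. The score vectors below are placeholder values for illustration, not the pilot data.

```r
# Hypothetical per-comment sentiment scores for each condition;
# illustrative values only, not the actual pilot results.
control   <- c(-1, 0, 2, 3, -2, 1, 4, 0)  # typed comments
treatment <- c(-4, 6, 0, 3, -3, 5)        # spoken comments

wilcox.test(control, treatment)  # Mann-Whitney test of the location difference
var(control)                     # compare spread across conditions
var(treatment)
```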