Music Emotion Capture: Ethical issues around emotion-based music generation

George Langroudi* - School of Computing, University of Kent
Anna Jordanous* - School of Computing, University of Kent
Ling Li* - School of Computing, University of Kent
*presenting author

During the [BCEM poster session](https://bcem-conference.weebly.com/posters.html), Wed 20 May 2020, 14:45-15:30 BST, we are all available on Zoom to discuss the poster!

Join Zoom Meeting: https://us04web.zoom.us/j/76809657232?pwd=bWdUdTNvQnlEcmI5SFI3dkYydWRPUT09

Meeting ID: 768 0965 7232
Password: musicemoti

Or contact us by email: gral2 / a.k.jordanous / c.li @kent.ac.uk

Abstract

People's emotions are not always detectable, e.g. if a person has difficulty or lacks the skills to express emotions, or if people are geographically separated and communicating online. Brain-computer interfaces (BCI) could enhance non-verbal communication of emotion, particularly in detecting and responding to users' emotions, e.g. in music therapy or interactive software. Our pilot study Music Emotion Capture [1] detects, models and sonifies people's emotions based on their real-time emotional state, measured by mapping EEG feedback onto a valence-arousal emotional model [2] based on [3]. Though many practical applications emerge, the work raises several ethical questions which need careful consideration. This poster discusses these ethical issues. Are the work's benefits (e.g. improved user experiences; music therapy; increased emotion-communication abilities; enjoyable applications) important enough to justify navigating the ethical issues that arise (e.g. privacy issues; control of the representation of, and reaction to, users' emotional state; consequences of detection errors; the feedback loop in which emotion is used to generate music and the music in turn affects the emotion, with the human in the process as an "intruder")?

References

[1] Langroudi, G., Jordanous, A., & Li, L. (2018). Music Emotion Capture: emotion-based generation of music using EEG. Emotion Modelling and Detection in Social Media and Online Interaction symposium @ AISB 2018, Liverpool.
[2] Paltoglou, G., & Thelwall, M. (2012). Seeing stars of valence and arousal in blog posts. IEEE Transactions on Affective Computing, 4(1).
[3] Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39.
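To make the sonification step in the abstract concrete, here is a minimal sketch of how a valence-arousal reading might be turned into musical parameters. This is not the authors' implementation: the mappings (positive valence to major tonality, arousal to tempo and loudness) and all names (`EmotionState`, `sonify`) are illustrative assumptions, standing in for whatever EEG-derived values the real pipeline produces.

```python
# Illustrative sketch (not the Music Emotion Capture code): mapping one
# valence-arousal sample onto simple note parameters. All mapping
# choices below are assumptions for demonstration purposes.

from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float  # -1.0 (negative) .. +1.0 (positive)
    arousal: float  # -1.0 (calm)     .. +1.0 (excited)

# Assumed convention: positive valence -> major scale, negative -> minor.
MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI pitches, C major
MINOR = [60, 62, 63, 65, 67, 68, 70]  # MIDI pitches, C natural minor

def sonify(state: EmotionState) -> dict:
    """Turn one valence-arousal sample into note parameters."""
    scale = MAJOR if state.valence >= 0 else MINOR
    # Higher arousal -> faster tempo and louder notes (assumed mapping).
    tempo_bpm = 60 + 60 * (state.arousal + 1) / 2      # 60..120 BPM
    velocity = int(40 + 60 * (state.arousal + 1) / 2)  # MIDI velocity 40..100
    # Stronger |valence| selects a higher degree of the scale (assumed).
    degree = min(int(abs(state.valence) * len(scale)), len(scale) - 1)
    return {"pitch": scale[degree], "velocity": velocity, "tempo_bpm": tempo_bpm}

if __name__ == "__main__":
    # e.g. a calm, mildly positive EEG-derived state
    print(sonify(EmotionState(valence=0.3, arousal=-0.5)))
    # -> {'pitch': 64, 'velocity': 55, 'tempo_bpm': 75.0}
```

A sketch like this also makes the ethical feedback loop discussed above tangible: the generated music feeds back into the listener's emotional state, which changes the next EEG reading, which changes the music.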