Would you mind? Harming robot vs human protagonists in moral dilemma scenarios
Category: Project
Description: Robots are an increasingly important part of modern human society, so human-robot interactions with moral implications are becoming more likely. Previous research has mainly focused on how (fictional) moral decisions of robots are evaluated by humans and on whether artificial moral agents evoke different responses than human agents. A further key question, however, is whether harming robots (vs. humans) is considered permissible in moral dilemmas. In this study, we investigate to what extent harming artificial vs. human protagonists for the greater good is acceptable in moral conflict scenarios. We also examine variables hypothesized to affect the (non-)acceptance of harm, i.e., the robots' appearance, the robots' perceived mind, and dilemma characteristics (high- vs. low-stakes). Pictures of four different robots were selected based on ratings in the ABOT database (Phillips et al., 2018). Three "mind" conditions will be included, emphasizing either (a) agency or (b) experience of the robot, in contrast with (c) a neutral condition. The dilemmas will comprise two low-stakes and two high-stakes scenarios. We will also investigate effects of individual differences, i.e., the Dark Triad, religiosity, sense of duty/conscientiousness, negative attitudes toward robots, knowledge about and experience with robots, and demographic variables (e.g., sex, age). Participants will also rate the presented robots on selected perceived (social) features (i.e., anthropomorphism, morality/sociability, activity/cooperation, and eeriness).