Artificial Moralities

Category: Uncategorized

Description: The second online study investigates moral decisions in fictional low-stakes dilemmas. Four core scenarios are used, each of which potentially results in a negative but not lethal outcome (decision on early parole, granting of student funding, revocation of library access, unhealthy diet). As in the high-stakes dilemma study, in each scenario either (1) a human or (2) an artificial agent decides to (a) perform a certain action or (b) refrain from it, resulting in four versions of each scenario. We aim to investigate whether the actions and inactions of humans are judged differently from those of artificial agents, and how much blame is attributed to the agent in question. We will also contrast the answers of participants with different cultural backgrounds (Western European vs. Chinese). In addition, the role of personality traits, affinity for technology, and attitudes towards robots will be investigated.

License: CC-By Attribution-NonCommercial-NoDerivatives 4.0 International

