This page contains all the collected study materials and instructions from the authors that we have compiled. Contact the Executive Reviewer, Michelle Hurst (michelle.hurst@rutgers.edu), or Jordan Wagge and the CREP team (CREP.Psych@gmail.com) with questions or problems.

**Original Paper**

Kneer, M., & Machery, E. (2019). No luck for moral luck. Cognition, 182, 331-348. https://doi.org/10.1016/j.cognition.2018.09.003

**Original Abstract**

Moral philosophers and psychologists often assume that people judge morally lucky and morally unlucky agents differently, an assumption that stands at the heart of the Puzzle of Moral Luck. We examine whether the asymmetry is found for reflective intuitions regarding wrongness, blame, permissibility, and punishment judgments, whether people's concrete, case-based judgments align with their explicit, abstract principles regarding moral luck, and what psychological mechanisms might drive the effect. Our experiments produce three findings: First, in within-subjects experiments favorable to reflective deliberation, the vast majority of people judge a lucky and an unlucky agent as equally blameworthy, and their actions as equally wrong and permissible. The philosophical Puzzle of Moral Luck, and the challenge to the very possibility of systematic ethics it is frequently taken to engender, thus simply do not arise. Second, punishment judgments are significantly more outcome-dependent than wrongness, blame, and permissibility judgments. While this constitutes evidence in favor of current Dual Process Theories of moral judgment, the latter need to be qualified: punishment and blame judgments do not seem to be driven by the same process, as is commonly argued in the literature. Third, in between-subjects experiments, outcome has an effect on all four types of moral judgments. This effect is mediated by negligence ascriptions and can ultimately be explained as due to differing probability ascriptions across cases.
**Materials**

The study was run in Qualtrics, but you are not limited to using Qualtrics. A PDF and a Word document of the survey, as well as a QSF file, are provided. [See here for instructions on how to import a QSF file into Qualtrics.](https://www.qualtrics.com/support/survey-platform/survey-module/survey-tools/import-and-export-surveys/#ImportingASurvey)

Studies 1 and 2 were originally collected at the same time, and participants were randomly assigned to one of four conditions (2 from Study 1a, 1 from Study 1b, 1 from Study 2). **CREP is only replicating Study 2**. For easier replication, we have provided a QSF file, Word document, and PDF for a modified survey that administers only Study 2. These modified materials have the prefix "Study2Only" in the filename (they were modified by Michelle Hurst based on the author-provided materials). However, for transparency/completeness, the original materials that randomly assign participants to both Study 2 and Study 1 are provided with the prefix "Original" in the filename (these were provided by the authors).

As described in the [Supplemental Materials](https://www.sciencedirect.com/science/article/pii/S0010027718302403#s0215) of the paper, several measures were administered after the main task: the Rational-Experiential Inventory (Epstein et al., 1996; Pacini & Epstein, 1999), the Belief in a Just World Scale (Rubin & Peplau, 1975), the 12-item Social and Economic Conservatism Scale (Everett, 2013), and the 20-item Moral Foundations Questionnaire (Graham et al., 2009; Graham et al., 2011). These measures are included in the survey materials. We encourage teams to include these measures in their replication (see the author note below), but it is not required. Analysing these measures is also not required.

**Sample Size**

N = 100 is the required minimum number of participants.

**Notes from the author**

About moderators: "we expect culture to moderate the results somewhat. In the original study, we report some moderation effects with the scales we used (see supplementary materials), and we suggest you include these scales to replicate these results and make sure the samples are comparable"

**IRB Templates and Information**

Note that one of the vignettes includes some sensitive content, and you may need to work with your IRB on that.

**CREP Note about Replications**

For this study, we are piloting a new system for organizing CREP replications. You will still request the study in the usual way, but you will be sent different information about how to use the new system. While we are piloting this new system, please feel free to follow up immediately if you have any questions or concerns (to Michelle or the general CREP email listed above). We want to make the process as smooth as possible and very much appreciate your feedback.