This dataset contains data from two tasks, performed by 82 participants (50% Black, 41.4% White & 8.5% other). In the first task, participants were shown 18 mugshots of convicted individuals, each followed by a line-up of four individuals, one of whom was the target. Participants were asked to identify this target.

![enter image description here][1]

The target image was shown for 1000 ms, and participants could take as much time as they wanted to respond, using the number keys to indicate the position in the line-up where they thought the target was. The structure of the data file (AggregatedData/Combined_Witness.csv) is as follows:

![enter image description here][2]

- 'Age' indicates the participant's age group
- 'Ethnicity' indicates the participant's ethnicity
- 'Gender' indicates the participant's gender
- 'Pos 1' to 'Pos 4' indicate the four images shown in the line-up
- 'Race' indicates the suspect's race
- 'Target' indicates the image used as the target
- 'Accuracy' indicates the accuracy up to and including that trial
- 'Avg_rt' indicates the average RT up to and including that trial
- 'Correct' indicates the accuracy of the current response (1 = correct, 0 = incorrect)
- 'Response' indicates the response given
- 'Response_time' indicates the response time for that trial
- 'subject_number' is the participant number

The average data for this task look as follows (reaction times on the left, accuracy on the right):

![enter image description here][3]

While reaction times do not show the own-race bias, accuracy does (although white suspects were identified accurately by both groups).

The second set of data (Aggregated Data/combined_RTs.csv) involves a race decision task. The same 82 participants each saw 60 photos of faces (22 Black faces, 22 White & 4 Asian) from the MR face database (Strohminger et al., 2015). Their task was to indicate, as quickly and accurately as possible, the race of the image shown (black, white, other) by pressing one of three keys.
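As a minimal sketch, the own-race bias in identification accuracy could be checked by crossing observer ethnicity with suspect race. The column names ('Ethnicity', 'Race', 'Correct') follow the description above; the few rows built here are synthetic placeholders standing in for `pd.read_csv("AggregatedData/Combined_Witness.csv")`, not real data.

```python
import pandas as pd

# Synthetic stand-in for the real file:
# df = pd.read_csv("AggregatedData/Combined_Witness.csv")
df = pd.DataFrame({
    "Ethnicity": ["Black", "Black", "White", "White", "Black", "White"],
    "Race":      ["Black", "White", "Black", "White", "Black", "Black"],
    "Correct":   [1, 0, 0, 1, 1, 1],
})

# Mean identification accuracy per observer ethnicity x suspect race
accuracy = (
    df.groupby(["Ethnicity", "Race"])["Correct"]
      .mean()
      .unstack("Race")
)
print(accuracy)
```

With the real file, an own-race bias would show up as higher values on the diagonal of this table (observers more accurate for suspects of their own race) than off it.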
The images are stored in the Opensesame folder (RaceDecision.zip), together with the OpenSesame file ([OpenSesame website][4]). The data for this task look as follows:

![enter image description here][5]

- 'Block' indicates whether the trial was a practice (0) or a main (1) block trial
- 'Image' indicates the image shown
- 'Race' indicates the race of the image shown
- 'Trial' indicates the trial number before randomization
- 'Accuracy' indicates the accuracy up to and including that trial
- 'Avg_rt' indicates the average RT up to and including that trial
- 'Correct' indicates the accuracy of the current response (1 = correct, 0 = incorrect)
- 'Correct_response' indicates the expected response
- Count variables: used to present a message after half of the trials to swap the key mapping
- 'Practice' indicates whether the trial was a practice trial
- 'response' indicates the key pressed by the participant
- 'Response_time' indicates the response time of that trial
- 'subject_number' indicates the participant number

The overall data (response times and error rates) for this task look as follows:

![enter image description here][6]

Decision times were shorter and accuracy higher for black faces than for white faces, but no interaction with the observer's race was found.

To examine whether facial features of the image shown influenced reaction times and accuracy in the race decision task, each face was analysed for:

- Skin tone (average RGB value); higher values mean a lighter skin tone
- The thickness of the lips (as a function of the height of the face)
- The width of the nose (as a function of the width of the face)
- Eye tone (average RGB value); higher values mean lighter eyes

Correlations have been computed between these values and the reaction times and accuracy, pooled across both races of the faces shown, but they should still be analysed for each race separately (as the correlations may go in different directions for the two races).
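The warning about pooled versus per-race correlations can be made concrete with a small sketch. The column names used here ('Race', 'SkinTone', 'MeanRT') are hypothetical, not taken from the released files, and the values are invented to illustrate the pitfall: each race alone can trend one way while the pooled data trend the other (Simpson's paradox).

```python
import pandas as pd

# Hypothetical per-face summary: one row per face, with a feature
# measure (higher SkinTone = lighter, as in the coding above) and a
# mean reaction time. Values are invented for illustration only.
faces = pd.DataFrame({
    "Race":     ["Black"] * 4 + ["White"] * 4,
    "SkinTone": [40, 50, 60, 70, 140, 150, 160, 170],
    "MeanRT":   [520, 530, 540, 550, 470, 480, 490, 500],
})

# Pooled across both races vs. computed within each race
pooled = faces["SkinTone"].corr(faces["MeanRT"])
per_race = faces.groupby("Race")[["SkinTone", "MeanRT"]].apply(
    lambda g: g["SkinTone"].corr(g["MeanRT"])
)
print(pooled)    # negative across the pooled data
print(per_race)  # positive within each race
```

In this constructed sample the between-race difference dominates the pooled correlation and flips its sign, which is exactly why the per-race analysis suggested above is worth running.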
[1]: https://mfr.osf.io/export?url=https://osf.io/gz76s/?action=download&direct&mode=render&initialWidth=848&childId=mfrIframe&format=1200x1200.jpeg
[2]: https://mfr.osf.io/export?url=https://osf.io/wj3c4/?action=download&direct&mode=render&initialWidth=848&childId=mfrIframe&format=1200x1200.jpeg
[3]: https://mfr.osf.io/export?url=https://osf.io/dmxy3/?action=download&direct&mode=render&initialWidth=848&childId=mfrIframe&format=1200x1200.jpeg
[4]: http://osdoc.cogsci.nl/
[5]: https://mfr.osf.io/export?url=https://osf.io/yfq2v/?action=download&direct&mode=render&initialWidth=848&childId=mfrIframe&format=1200x1200.jpeg
[6]: https://mfr.osf.io/export?url=https://osf.io/e76nm/?action=download&direct&mode=render&initialWidth=848&childId=mfrIframe&format=1200x1200.jpeg