Milestone 1: Develop tools for multiple paired comparisons and selection of fillers
---

- Develop software for performing multiple paired comparison experiments.
- Develop software for data analysis: perceptual scaling of faces presented in multiple paired comparison experiments.
- Develop metrics of face similarity that will allow us to add a new face (such as the face of a suspect) to a set of previously scaled faces.
- Prepare software for performing simultaneous and sequential comparisons.

Milestone 2: Evaluate eyewitness performance as a function of mean and variance of filler set
---

Having established methods of face scaling in the studies for Milestone 1, we will be prepared to study how the images selected as fillers affect eyewitness performance. We will create and validate a Scaled Face Library, which will allow us to find the *optimal level of similarity* between the target and fillers and the *optimal level of heterogeneity* within the fillers. Results of this milestone hold promise for making an important self-contained contribution to the criminal justice system, independent of the results of the subsequent Milestone 3.

- Perform face scaling using paired comparisons in one group of subjects (Subject Set 1) and accumulate ranked faces from multiple runs of paired comparisons to form a Scaled Face Library.
- Validate the Scaled Face Library in two other sets of subjects (Subject Sets 2 and 3).
- Use a simultaneous lineup to evaluate eyewitness performance with different filler sets under parametric variation of the mean and variance of the filler set (see the figure just below).

Milestone 3: Compare witness performance in three types of lineup
---

From the studies described above, we will gain a systematic understanding of the conditions most propitious for correct identification of a previously seen face from a set of six images.
We will then test the hypothesis that the method of multiple paired comparisons yields better performance than simultaneous and sequential lineups. We will use both within-subject and between-subject designs. The advantage of the former is that the primary source of variance in the results should be the lineup type (since the lineup faces are constant and each subject serves as their own control). The between-subject design will allow us to validate the results of the lineup comparison and, by presenting each subject with only one lineup type, avoid the potential problems of the within-subject design associated with multiple exposures to the same faces.

- Perform **within-subject** experiments using all three types of lineup: paired, simultaneous, and sequential.
- Perform **between-subject** experiments using all three types of lineup.

Milestone 4: Translate results of preceding milestones for application in the criminal justice system
---

Our ultimate goal is to translate the knowledge gained from these basic studies of identification performance into practical guidelines and procedures for eyewitness identification in the criminal justice system. We expect that the new tools for selecting optimal filler sets, together with the comparative analysis of the three lineup methods, will allow us to make recommendations about the most effective and most efficient method of lineup administration. We will investigate how the procedure for generating a set of facial images with desirable properties can be scripted. We propose to divide the translation of our results into two stages, called Scripting and Blind Testing.

**Scripting**

We will translate the recommended procedures for efficient choice of filler sets into written instructions. We will then test whether closely following the written instructions produces results comparable to the results of the studies for Milestones 2 and 3.
The testing will reveal imperfections in the written instructions (manifested as lower identification performance), which is why we expect the translation to require several iterations.

**Blind Testing**

We will then invite several individuals ("Testers"), uninformed about the logic of our analysis, teach them to use our software, and instruct them to use our written scripts to create filler sets using the same Targets as in the Scripting stage (unbeknownst to the Testers). We will then deploy the filler sets created by the Testers and compare identification performance using these filler sets with performance using the sets selected by us. This way we will ensure that the written scripts faithfully translate the principles of filler-set design derived from our studies.
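For concreteness, the two computational steps at the core of Milestones 1 and 2 — deriving a perceptual scale from paired-comparison judgments, and selecting a filler set with a prescribed mean and variance of similarity to the target — can be sketched as below. This is a minimal illustration, not the project's actual software: the choice of the Bradley-Terry model (Thurstone's Case V scaling would serve a similar role), the brute-force filler search, and all function names are assumptions made for the example.

```python
import math
from itertools import combinations

def bradley_terry(n_faces, wins, iters=200):
    """Fit a Bradley-Terry scale from paired-comparison counts.

    wins[i][j] = number of trials on which face i was chosen over face j.
    Returns one scale value (log-strength) per face.
    """
    p = [1.0] * n_faces
    for _ in range(iters):
        for i in range(n_faces):
            total_wins = sum(wins[i][j] for j in range(n_faces) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_faces) if j != i)
            if denom > 0:
                p[i] = total_wins / denom
        s = sum(p)
        p = [x * n_faces / s for x in p]  # renormalize to fix the scale
    return [math.log(x) for x in p]

def pick_fillers(scale, target_idx, k, mean_gap, var_gap):
    """Choose k fillers whose scale distances to the target have mean and
    variance closest to the requested values (exhaustive search)."""
    candidates = [i for i in range(len(scale)) if i != target_idx]
    best, best_cost = None, float("inf")
    for combo in combinations(candidates, k):
        gaps = [abs(scale[i] - scale[target_idx]) for i in combo]
        m = sum(gaps) / k
        v = sum((g - m) ** 2 for g in gaps) / k
        cost = (m - mean_gap) ** 2 + (v - var_gap) ** 2
        if cost < best_cost:
            best, best_cost = combo, cost
    return sorted(best)
```

A parametric sweep of `mean_gap` and `var_gap` over a scaled library would then yield the family of filler sets needed for the Milestone 2 lineup experiments.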