**References:**

- Kantner, J., & Lindsay, D. S. (2012). Response bias in recognition memory as a cognitive trait. *Memory & Cognition, 40*(8), 1163-1177.
- Andersen, S. M., Carlson, C. A., Carlson, M. A., & Gronlund, S. D. (2014). Individual differences predict eyewitness identification performance. *Personality and Individual Differences, 60*, 36-40.
- Bindemann, M., Brown, C., Koyas, T., & Russ, A. (2012). Individual differences in face identification postdict eyewitness accuracy. *Journal of Applied Research in Memory and Cognition, 1*, 96-103.
- Morgan, C. A., Hazlett, G., Baranoski, M., Doran, A., Southwick, S., & Loftus, E. (2007). Accuracy of eyewitness identification is significantly associated with performance on a standardized test of face recognition. *International Journal of Law and Psychiatry, 30*, 213-223.

**Rationale for the study and design:** We are running the next in a series of studies based on findings by Kantner and Lindsay (2012), who found reliable individual differences in memory response bias across several stimulus sets, testing sessions, testing locations, *etc*. One experiment showed a significant correlation between response bias on a recognition memory test for words and the false positive rate on a set of lineups based on earlier-seen crime videos. We began expanding on these findings in early 2012 and are currently writing a manuscript detailing the studies conducted to date. Most recently presented at the 2015 meeting of SARMAC, our work shows that general face recognition skill and lineup responses may be linked in a different way than has previously been researched. Other authors have found links to accuracy rates for culprit-present (CP) lineups, but we have found a way to postdict an individual's likelihood of having been accurate on a culprit-absent (CA) lineup, which we call their individual Proclivity to Choose (PTC).
Based on an idea from Larry Jacoby, we have been testing face memory with a two-alternative non-forced-choice (2ANC) recognition task in which 50% of the trials contain one studied face and one unstudied face and the other 50% contain two unstudied faces. False positive selection rates on these pairs of unstudied faces reliably predict (across 5 different samples, *r* ≈ 0.4) false positive selection rates on five CA lineups completed before the face recognition study list begins. This relationship has held steady for local samples of university students, samples of MTurk workers, and samples of both after a two-day delay between video viewing and lineup/PTC test completion. This study continues that line of research based on another suggestion from Larry. He is now conducting 2ANC recognition tests in which similar words are paired at test to increase the possibility of confusion and error. This is akin to description matching, the traditional way lineup foils are selected. We theorize that such a change will yield a more sensitive measurement of face ID skill via accuracy on pairs containing one studied and one unstudied face that are at least somewhat similar in appearance.

**Data collection done to date:** None. Data collection will start as soon as possible after this is posted (September 10, 2015).

**Collaborators:** Mario Baldassari, D. Stephen Lindsay, and a slew of helpful undergraduates.

**Timeline:** We aim to collect this dataset through fall of 2015 and begin preparing a next step for January.

**Participants:** Local undergraduates at the University of Victoria. Power analyses indicate that a sample size of 80 will provide a power level of .95 to detect an effect of *r* = .35 at the .05 alpha level. Such a finding would provide a 95% confidence interval for *r* between .14 and .59. We will aim for slightly more than 80 participants, because some participants will fall under the exclusion rules we have previously established for these studies:

1. Those who admit to distractions or who skip portions of the study
2. Those whom RAs catch doing another task instead of watching the videos
3. Those who respond to lineups in less than 1000 ms or more than 15000 ms

**Planned Analyses:** We will measure the correlation between the rejection rate on N/N pairs and the rejection rate on lineups, the correlation between the correct selection rate on O/N pairs and the correct selection rate on lineups (including additional analyses with false rejections removed), and the predictive value of confidence and reaction time on both tasks.

**Most recent presentation of this work:** Baldassari, M. J., Kantner, J. D., & Lindsay, D. S. (2015). Individual proclivity to choose (PTC) for face recognition predicts PTC on lineups. Paper presented at the annual meeting of the Society for Applied Research in Memory and Cognition, Victoria, BC, Canada.

**To be included in this pre-registration:** Procedures and materials, including E-Prime files, face photos, crime videos, lineups, and a list of the pairs as matched. Face stimuli were acquired from the kind folks at the Park Aging Mind Lab at UT-Dallas (http://agingmind.utdallas.edu/facedb): Minear, M., & Park, D. C. (2004). A lifespan database of adult facial stimuli. *Behavior Research Methods, Instruments, & Computers, 36*, 630-633.

**Contact information:** For more information, please contact Mario Baldassari (mjbldssr@uvic.ca) or Steve Lindsay (slindsay@uvic.ca).
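The sample-size figures under **Participants** can be sanity-checked with the standard Fisher z approximation for correlations. This is a sketch only: the software that produced the reported values is not stated in this pre-registration, and different conventions (one- vs. two-tailed tests, exact vs. approximate methods) can shift the numbers slightly.

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a correlation r observed at sample size n."""
    z = math.atanh(r)               # Fisher z transform of r
    se = 1.0 / math.sqrt(n - 3)     # standard error of z
    return (math.tanh(z - z_crit * se), math.tanh(z + z_crit * se))

def n_for_power(r, z_alpha=1.645, z_power=1.645):
    """Approximate n to detect correlation r (defaults: one-tailed alpha .05, power .95)."""
    z = math.atanh(r)
    return math.ceil(((z_alpha + z_power) / z) ** 2 + 3)

lo, hi = fisher_ci(0.35, 80)
print(f"Approximate 95% CI for r = .35 at n = 80: [{lo:.2f}, {hi:.2f}]")
print(f"Approximate n for power .95 at r = .35: {n_for_power(0.35)}")
```

Under these assumptions the required sample size comes out in the neighborhood of the planned 80 participants, and the lower bound of the confidence interval matches the .14 reported above.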