# Recognition memory follow up

## Rationale

When we perceive the visual world, we bring prior knowledge and expectations to bear to help guide our interpretations. But how do our knowledge and expectations affect the amount of visual information we can hold in mind at once? Memory capacity depends heavily on prior knowledge, in both working memory and long-term memory. For example, expert fingerprint examiners have better working memory for fingerprints than novices. Through their vast experience with fingerprints, experts have developed highly sophisticated encoding and retrieval structures such that all relevant prior knowledge can be immediately accessed to guide action in new situations.

Memory capacity also relies on our expectations. Generally speaking, people tend to have better memory for distinctive or surprising items. For instance, Standing (1973) demonstrated a memory advantage for images that depicted oddities (e.g., a dog holding a pipe, a crashed airplane) compared to everyday scenes and events. This is arguably because such items attract more attention at encoding.

## The Current Project

In a previous [experiment][2] on recognition memory, we found that expert fingerprint examiners outperformed novices by a large margin (see graph below). Given that experts do not deliberately train their working memory for fingerprint configurations, this memory advantage is likely an incidental consequence of their training and experience. In the current experiment, we wanted to unpack this expert-novice difference to understand what other variables might be driving the effect. We are particularly interested in exploring the role that print distinctiveness might play in aiding memory for both experts and novices. Most of the research on the effect of distinctiveness on memory retrieval has focussed on learners who have no special expertise with the stimuli they are remembering.
However, research on memory for distinctive or surprising items, and on how distinctiveness interacts with other sources of knowledge, is limited, yet would be highly relevant to many domains (e.g., forensic examination, baggage screening, diagnostic medicine).

![experiment 1 results](https://mfr.au-1.osf.io/export?url=https://osf.io/shr3z/?direct%26mode=render%26action=download%26public_file=False&initialWidth=774&childId=mfrIframe&parentTitle=OSF+%7C+printRec_exp1.png&parentUrl=https://osf.io/shr3z/&format=2400x2400.jpeg)

## Participants

### Expert

Data will be collected from as many fingerprint examiners as possible, subject to their availability. Our goal is to reach 30 expert participants. We define an ‘expert’ as one who is appropriately qualified in their jurisdiction and is court-practicing. We will also collect data from professional fingerprint examiners who are not yet fully qualified, often referred to as ‘trainees’. These data will not be analysed in this experiment, but may be used in a future project.

### Novice

Data will be collected from an equal number of novices as experts. For example, if we collect data from 34 experts, then we will collect data from 34 novices. Novice participants, who have no formal experience in fingerprint examination, will be recruited from The University of Adelaide, The University of Queensland, and/or Murdoch University communities, and the general Australian public. Novice participants may be offered a cash or gift card incentive for their participation, or participate for course credit.

### Sensitivity Analysis

Based on previous expert-novice studies in this domain, we anticipate a large difference in performance between professional fingerprint examiners and novices (d > .80). 36 observations (18 per condition) from 30 participants per group provide sufficient sensitivity (power = 1.0) to detect an effect size of d = 0.45.
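As an illustration of how a sensitivity analysis of this kind can be computed (the sketch below uses `statsmodels` and standard defaults of a two-sided test at alpha = .05; it is not the pre-registered calculation itself, and the numbers are for demonstration only):

```python
# Illustrative power calculation for a two-group comparison of mean
# proportion correct (expert vs. novice, 30 participants per group).
# Assumptions (not from the pre-registration): alpha = .05, two-sided
# test, equal group sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power to detect the anticipated large effect (d = 0.8) with n = 30 per group.
power_large = analysis.power(effect_size=0.8, nobs1=30, alpha=0.05,
                             ratio=1.0, alternative="two-sided")

# Smallest effect detectable with 80% power at this sample size.
d_min = analysis.solve_power(nobs1=30, alpha=0.05, power=0.8,
                             ratio=1.0, alternative="two-sided")

print(f"power for d = 0.8 with 30 per group: {power_large:.2f}")
print(f"detectable effect at 80% power: d = {d_min:.2f}")
```

Note that this sketch treats each participant's mean proportion correct as a single observation per participant; a sensitivity analysis that also exploits the 36 trial-level observations per participant would yield higher power, which is one way the stated per-trial figures could be approached.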
### Exclusion Criteria

Participants who respond in less than 500 ms on more than 20 percent of trials will be excluded. If a participant fails to complete the experiment because of fatigue, illness, or excessive response delays (i.e., longer than the session allows for), their responses will not be recorded. Data from participants who provide the same response (e.g., the first print in the array) consecutively on more than 25 percent of trials will also be excluded. If a participant is excluded on the basis of any of these criteria, another participant will be recruited to take their place. Participants are also required to bring and use any necessary aids, including glasses, contact lenses, and hearing aids.

### Data Collection

No human data have been collected for this experiment yet. We are visiting various police departments around Australia in February 2020, and plan to have all expert and novice data collected by the end of the year.

## Design

This experiment employs a 2 (Expertise: expert, novice; between subjects) x 2 (Print Distinctiveness: very distinctive, very non-distinctive; within subjects) mixed design yoked to expertise. 100 unique participant sequences have been pre-generated, to allow for as many expert participants as possible. Each sequence consists of 36 trials: half are distinctive trials and the other half are non-distinctive trials. On each trial, a plain print is presented on the screen for 10 seconds. After the 10 seconds have elapsed, 10 rolled prints are presented sequentially. One of these 10 prints is the target that belongs to the same finger as the studied plain print. The other 9 rolled prints are distractors sampled from different fingers of the same person (presented in random order). The 100 pre-generated participant sequences can be downloaded in the Sequences component of this project.
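The sequences themselves were pre-generated in LiveCode (see Software, below). Purely as an illustration of the structure just described, one such 36-trial sequence could be sketched as follows (function and field names here are our own, not those used in the actual software):

```python
import random

def make_sequence(n_trials=36, n_prints=10, seed=None):
    """Sketch of one participant sequence: 36 trials, half distinctive and
    half non-distinctive, in shuffled order. On each trial the target's
    position among the 10 rolled prints is randomised; the remaining 9
    positions hold distractors from other fingers of the same person."""
    rng = random.Random(seed)
    conditions = (["distinctive"] * (n_trials // 2)
                  + ["non-distinctive"] * (n_trials // 2))
    rng.shuffle(conditions)
    sequence = []
    for condition in conditions:
        target_position = rng.randrange(n_prints)  # 0..9
        sequence.append({"condition": condition,
                         "target_position": target_position})
    return sequence

seq = make_sequence(seed=1)
```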
## Materials

The fingerprints used in this experiment were plain and fully-rolled (“arrest”) prints sourced from the [NIST Special Database 300](https://www.nist.gov/itl/iad/image-group/nist-special-database-300). Each set contains prints collected in operational policing contexts, preserving natural variation in quality. In this experiment, we used a sample of 200 plain impressions and the corresponding 10 fully rolled prints that belonged to the same individual (2000 prints in total).

### Print Distinctiveness

To gather distinctive and non-distinctive prints, we asked expert examiners and inexperienced novices to rate a large database of fingerprints on a scale of distinctiveness (see [here][1] for more information). For the current experiment, we computed the z-score for each distinctiveness rating and calculated the average rating across experts and novices for each fingerprint impression. From here we sampled the 100 most distinctive prints and the 100 least distinctive prints (ensuring no duplicate sets between or within the two conditions).

## Procedure

Participants first read an information sheet and watch an instructional video about the task with examples (see above). They are then each presented with a randomly sampled set of 36 (18 distinctive, 18 non-distinctive) plain (or slap) fingerprint impressions to memorise. Participants are given 10 seconds to study each impression. After the 10 seconds elapse, the plain print disappears and 10 fully rolled impressions appear sequentially on the screen. Participants are instructed to sort through the rolled impressions and select the target. On each trial, one of the 10 impressions is the target, matching the same finger as the studied print; the other 9 prints are distractors sampled from different fingers of the same individual. The order of the target and distractors is shuffled randomly on every trial, and the array remains on the screen until the participant makes a selection.
If participants take longer than 20 seconds to respond, a text prompt appears during the inter-trial interval with the message: “try to decide in less than 20 seconds.” Once they make their selection, corrective feedback appears on the screen for 500 milliseconds before the next trial begins.

## Software

The video instructions and experimental task are presented to participants on a 13-inch MacBook Pro or MacBook Air laptop screen, with over-ear headphones. The software used to generate the trial sequences, present stimuli to participants, and record their responses was developed in LiveCode (version 9.0.2; the open source ‘community edition’). The LiveCode source files, experiment code, and all the materials needed to run the experiment locally can be downloaded in the Software component of this project.

## Hypotheses and Predictions

1. In recognising old prints, we expect novice proportion correct scores to be better than chance with a medium effect size (d = .2 to .5), and expert proportion correct scores to be better than chance with a large effect size (d > .8).
2. When comparing novices to experts, we expect a main effect of expertise such that experts will outperform novices, with a large effect size (d > .8) for the difference between their proportion correct scores.
3. We also expect a main effect of distinctiveness such that participants perform better on distinctive trials than on non-distinctive trials.
4. We expect expertise to interact with print distinctiveness, such that the difference between experts and novices will be greater for non-distinctive prints.

## Planned Analyses

### Comparison to Chance

To test experts’ and novices’ print recognition, we will compute their mean proportion correct over the 36 trials. One-sample t-tests (or a nonparametric equivalent if distributional assumptions are not met) will be used to determine whether each group performs significantly above chance (10%).
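A comparison-to-chance test of this form can be sketched with `scipy` as follows; the accuracy scores below are simulated for demonstration only, and the one-sided alternative reflects the directional hypothesis of above-chance performance:

```python
# Illustrative one-sample t-test against chance (10%) on simulated
# per-participant accuracy scores for one group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Fabricated mean proportion correct for 30 hypothetical participants
proportion_correct = rng.uniform(0.2, 0.6, size=30)

chance = 0.10  # 1 target among 10 rolled prints
t_stat, p_value = stats.ttest_1samp(proportion_correct, popmean=chance,
                                    alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

A Wilcoxon signed-rank test (`stats.wilcoxon` on `proportion_correct - chance`) would be the nonparametric fallback mentioned above.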
### Main effects and interactions

A mixed ANOVA (or a nonparametric equivalent if distributional assumptions are not met) on mean proportion correct scores will be used to test the extent to which expert fingerprint examiners outperform novices, the extent to which participants perform better on distinctive versus non-distinctive trials, and the interaction between the two variables.

### Exploratory analyses

Participant response times will be recorded on each trial during this experiment. We will explore the relationship between accuracy and response time.

![Predictions](https://mfr.au-1.osf.io/export?url=https://osf.io/7dqek/?direct%26mode=render%26action=download%26public_file=False&initialWidth=774&childId=mfrIframe&parentTitle=OSF+%7C+predictions.png&parentUrl=https://osf.io/7dqek/&format=2400x2400.jpeg)

## Ethics

We have ethics clearance from human research ethics committees at The University of Queensland for the project titled “Identifying perceptual experts in fingerprint identification” (Approval Number: 2018001369), The University of Adelaide (33115), and Murdoch University (2018/149).

## Updated analysis

In response to the feedback provided by the reviewer, we analysed our data using a Generalised Linear Model (GLM) with a binomial distribution. This approach was chosen to more appropriately analyse our binary outcome data, where each trial resulted in either a correct (1) or incorrect (0) response. Originally, we treated the percentage of correct responses as our dependent variable, which led us to use an ANOVA. However, that approach did not adequately address the binary and bounded nature of individual trial outcomes. The GLM with a binomial distribution is well suited to such binary outcomes and allows a more nuanced investigation of the effects of expertise level and stimulus type, as well as their interaction.
The transition to GLM aligns with statistical best practice for handling binary outcome data, offering a more precise understanding of the factors influencing trial-by-trial performance in our recognition memory experiment.

*See the Analysis component for updated analysis scripts.*

[1]: https://osf.io/fzrb3/?view_only=9dc852fcda434ef589435b837b2483cb
[2]: https://osf.io/qy2su/?view_only=917b14dc324a49c7a74c34af65b0888a