Identifying the relative idiosyncratic and shared contributions to judgments is a fundamental challenge in the study of human behavior, yet there is no established method for estimating these contributions. Using edge cases of stimuli varying in intra-rater reliability and inter-rater agreement – faces (high on both), objects (high on the former, low on the latter), and complex patterns (low on both) – we show that variance component analyses (VCAs) accurately captured the psychometric properties of the data (Study 1). Simulations showed that the VCA generalizes to any arbitrary continuous rating and that both sample size and stimulus set size affect estimate precision (Study 2). Generally, a minimum of 60 raters and 30 stimuli provided reasonable estimates within our simulations. Furthermore, VCA estimates stabilized given more than two repeated measures, consistent with the finding that both intra-rater reliability and inter-rater agreement increased nonlinearly with repeated measures (Study 3). The VCA provides a rigorous examination of where variance lies in the data, can be implemented using mixed models with crossed random effects, and is general enough to be useful in any judgment domain where agreement and disagreement are important to quantify and where multiple raters independently rate multiple stimuli.
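As a minimal sketch of the general idea (not the authors' implementation), a fully crossed rater × stimulus design can be decomposed into idiosyncratic (rater), shared (stimulus), and residual variance components using ANOVA-style expected mean squares. All variable names, sample sizes, and variance values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossed design: every rater rates every stimulus once.
# The true variance components (all set to 1.0 here) are illustrative.
n_raters, n_stimuli = 200, 100
sd_rater, sd_stim, sd_resid = 1.0, 1.0, 1.0

rater_eff = rng.normal(0, sd_rater, n_raters)          # idiosyncratic shifts
stim_eff = rng.normal(0, sd_stim, n_stimuli)           # shared stimulus signal
resid = rng.normal(0, sd_resid, (n_raters, n_stimuli))
y = rater_eff[:, None] + stim_eff[None, :] + resid     # ratings matrix

# Method-of-moments variance component estimates for a crossed design
# without replication, via the two-way random-effects ANOVA mean squares.
grand = y.mean()
rater_means = y.mean(axis=1)
stim_means = y.mean(axis=0)

ms_rater = n_stimuli * np.sum((rater_means - grand) ** 2) / (n_raters - 1)
ms_stim = n_raters * np.sum((stim_means - grand) ** 2) / (n_stimuli - 1)
interaction = y - rater_means[:, None] - stim_means[None, :] + grand
ms_resid = np.sum(interaction ** 2) / ((n_raters - 1) * (n_stimuli - 1))

var_rater = (ms_rater - ms_resid) / n_stimuli   # idiosyncratic component
var_stim = (ms_stim - ms_resid) / n_raters      # shared component
var_resid = ms_resid

print(f"rater var ~ {var_rater:.2f}, stimulus var ~ {var_stim:.2f}, "
      f"residual var ~ {var_resid:.2f}")
```

In practice, a mixed model with crossed random effects for raters and stimuli (e.g., `lme4` in R or `statsmodels` `MixedLM` with variance components in Python) handles missing cells and repeated measures more gracefully than this closed-form decomposition.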
CC-By Attribution 4.0 International