A Meta-Analysis of Test Scores in Proctored and Unproctored Ability Assessments
Category: Project
Description: Unproctored, web-based assessments are frequently compromised by a lack of control over participants’ test-taking behavior. Participants are likely to cheat when the personal consequences of the assessment are high. This meta-analysis summarizes findings on context effects in unproctored and proctored ability assessments and examines mean score differences and correlations between the two assessment contexts. As potential moderators, we consider (a) the perceived consequences of the assessment, (b) countermeasures against cheating, (c) the susceptibility to cheating of the measure itself, and (d) the use of different test media. For standardized mean differences, a three-level random-effects meta-analysis based on 108 effect sizes from 49 studies (total N = 100,434) identified a pooled effect of Δ = 0.20, 95% CI [0.10, 0.31], indicating higher scores in unproctored assessments. Moderator analyses revealed significantly smaller effects for measures that are difficult to research on the Internet. Regarding rank-order stability, a small subsample of studies (n = 5) providing 15 effect sizes (total N = 1,280) indicated considerable rank-order changes (ρ = .58, 95% CI [.38, .78]). These results demonstrate that unproctored ability assessments are markedly biased by cheating. Unproctored assessments may be most suitable for measures that are difficult to research on the Internet.
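To illustrate the kind of pooling a random-effects meta-analysis performs, here is a minimal sketch of the classic DerSimonian-Laird estimator in Python. Note the assumptions: this is a simple two-level model with made-up effect sizes and variances, not the three-level model or the data used in this project (which would typically be fit with specialized software such as the R package metafor).

```python
import math

def dersimonian_laird(effects, variances):
    """Pool effect sizes with a DerSimonian-Laird random-effects model.

    effects:   study effect sizes (e.g., standardized mean differences)
    variances: their sampling variances
    Returns (pooled_effect, ci_low, ci_high, tau2).
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2

# Hypothetical study data, chosen only for illustration
d = [0.50, -0.05, 0.45, 0.10, 0.35]
v = [0.01, 0.02, 0.01, 0.02, 0.015]
pooled, lo, hi, tau2 = dersimonian_laird(d, v)
print(f"pooled = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], tau^2 = {tau2:.3f}")
```

When the studies are heterogeneous (Q exceeds its degrees of freedom), the between-study variance tau² widens the confidence interval relative to a fixed-effect model, reflecting that the true effect varies across studies.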