Measurement invariance is a key concept in psychological assessment and a fundamental prerequisite for meaningful comparisons across groups. In the prevalent approach, multi-group confirmatory factor analysis (MGCFA), specific measurement parameters are constrained to equality across groups. The degrees of freedom (df) for these models readily follow from the hypothesized measurement model and the invariance constraints. In light of research questioning the soundness of statistical reporting in psychology, we examined how often reported df match the df recalculated from the information given in the publications. More specifically, we reviewed 128 studies from six leading peer-reviewed journals focusing on psychological assessment and recalculated the df for 302 measurement invariance testing procedures. Overall, about a quarter of all articles included at least one discrepancy, with metric and scalar invariance being more frequently affected. We discuss moderators of these discrepancies and identify typical pitfalls in measurement invariance testing. Moreover, we provide example syntax for different methods of scaling latent variables and introduce a tool which allows for the recalculation of df in common MGCFA models, to improve the statistical soundness of invariance testing in psychological research.
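To illustrate the kind of df recalculation the abstract describes, here is a minimal sketch in Python. It assumes a simple-structure MGCFA with marker-variable scaling, a mean structure, and no cross-loadings or residual covariances; the function name and interface are hypothetical and are not the tool introduced in the article.

```python
def mgcfa_df(p, m, groups, level="configural"):
    """Recalculate df for a simple-structure MGCFA (hypothetical helper).

    Assumptions: marker-variable scaling, mean structure included,
    no cross-loadings, no residual covariances.
    p = number of indicators, m = number of factors.
    """
    # observed sample moments per group: p(p+1)/2 (co)variances + p means
    moments = groups * p * (p + 3) // 2
    # free parameters per group under the configural model:
    # (p - m) loadings (one marker fixed per factor), m factor variances,
    # m(m-1)/2 factor covariances, p residual variances,
    # p intercepts (latent means fixed at 0)
    per_group = (p - m) + m + m * (m - 1) // 2 + p + p
    df = moments - groups * per_group
    if level in ("metric", "scalar"):
        df += (groups - 1) * (p - m)  # loadings constrained equal
    if level == "scalar":
        # intercepts constrained equal, latent means freed in groups 2..G
        df += (groups - 1) * (p - m)
    return df


# e.g. two groups, one factor, six indicators:
# configural df = 18, metric df = 23, scalar df = 28
```

Under these assumptions, each invariance step adds (G − 1)(p − m) df: metric invariance by constraining the non-marker loadings, and scalar invariance by constraining the intercepts while freeing the latent means in all but the reference group.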