Is regularisation uniform across linguistic levels? Comparing learning and production of unconditioned probabilistic variation in morphology and word order.
Category: Project
Description: Languages exhibit variation at all linguistic levels, from phonology, to the lexicon, to syntax. Importantly, that variation tends to be (at least partially) conditioned on some aspect of the social or linguistic context. When variation is unconditioned, language learners regularise it, either by removing some or all variants, or by conditioning variant use on context. Previous studies using artificial language learning experiments have documented regularising behaviour in the learning of lexical, morphological, and syntactic variation. These studies implicitly assume that regularisation reflects uniform mechanisms and processes across linguistic levels. However, studies on natural language learning and pidginisation suggest that morphological and syntactic variation may be treated differently. In particular, there is evidence that morphological variation may be more susceptible to regularisation. In this paper we provide the first systematic comparison of the strength of regularisation across these two linguistic levels. In line with previous studies, we find that the presence of a favoured variant can induce different degrees of regularisation. However, when input languages are carefully matched—with comparable initial complexities, and no variant-specific biases—regularisation is comparable across morphology and word order. This is the case both when the task is not explicitly communicative, and when the language must be used for communication with a partner. Overall, our findings suggest a single regularising mechanism at work, with apparent differences among levels likely due to differences in inherent complexity or variant-specific biases.