Usage-based theories of morphological language acquisition, in contrast to formal rule-based accounts, assume that learners rely on rote storage and retrieval of phonological forms and on phonological analogy. Most previous computational modelling has focused on morphologically simple systems such as English, or has investigated only a small part of a paradigm. We present neural network simulations of how children acquire inflectional morphology across the full paradigm of present-tense person/number marking and case marking in three morphologically complex languages. Three-layer networks with 200 hidden units were trained on verb forms (Finnish, Polish) or noun forms (Finnish, Polish, Estonian) drawn from natural child-directed speech data. The results are compared in detail with large-scale elicited-production studies of children aged around 4;2 in each language. The input to each model consisted of syllable-grouped phoneme sequences representing verb stems (verb models) or nominative-case noun forms (noun models), together with a code for the target person/number or case context, respectively. The models were trained using backpropagation to output the correct phoneme sequence of the target form, with inputs presented probabilistically according to their token frequencies in child-directed speech corpora. The verb models acquired adult-like mastery of the system after about 3 million training trials and could generalise (i.e., produce the correct target for untrained items) to about 85% of the test items used in the elicited-production experiments.
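The training regime described above (a three-layer network trained by backpropagation, with items sampled in proportion to their token frequencies) can be illustrated with a minimal sketch. Everything here is invented for illustration, not the authors' actual coding scheme: the input/output sizes, the toy binary "stem + context" and "inflected form" vectors, and the frequency values are placeholders; only the 200-hidden-unit layer matches the reported models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sizes; only N_HIDDEN = 200 matches the reported models.
N_IN, N_HIDDEN, N_OUT = 20, 200, 16

# Toy training set: random binary "stem + context" inputs mapped to random
# binary "inflected form" targets, with made-up token frequencies.
n_items = 8
X = rng.integers(0, 2, size=(n_items, N_IN)).astype(float)
Y = rng.integers(0, 2, size=(n_items, N_OUT)).astype(float)
freq = np.array([50, 20, 10, 8, 5, 3, 2, 1], dtype=float)
p = freq / freq.sum()  # probabilistic presentation by token frequency

# Three-layer network (input -> hidden -> output) with logistic units.
W1 = rng.normal(0, 0.1, (N_IN, N_HIDDEN))
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_OUT))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def mean_abs_error():
    # Continuous error: mean |output - target| activation difference.
    return float(np.mean([np.abs(forward(x)[1] - y).mean()
                          for x, y in zip(X, Y)]))

err_before = mean_abs_error()

lr = 0.5
for step in range(5000):
    i = rng.choice(n_items, p=p)  # frequency-weighted item sampling
    x, y = X[i], Y[i]
    h, out = forward(x)
    # Backpropagation of squared error through the logistic units.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, d_out)
    W1 -= lr * np.outer(x, d_h)

err_after = mean_abs_error()
```

After training, frequently presented items are learned most accurately, which is the mechanism behind the token-frequency effects discussed below.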
In addition, the models yielded all of the key phenomena observed in the elicited-production studies: effects of the token frequency and phonological neighbourhood density (inflectional class size) of the target form, and an error pattern in which low-frequency targets were generally replaced either by higher-frequency forms of the same verb or by forms with the correct person/number but a suffix from an inappropriate conjugation class. Only when the difference between output and target activation was used as a continuous measure of error, instead of the binary correct/incorrect distinction used in the experiments, did the models additionally show a smaller effect of phonological neighbourhood for items of higher token frequency (as predicted by usage-based accounts), suggesting that a binary measure may be insufficiently sensitive to detect some effects. Hierarchical clustering of the models' internal representations revealed that verbs were grouped on the basis of phonological similarities that included so-called enemies from a different inflectional class. Errors were therefore better predicted when phonological neighbourhood was defined in terms of friends (neighbours with the same inflection) and enemies (neighbours with a different inflection). Simulations of noun case-marking learning are currently in progress. Our findings for verb learning demonstrate that the acquisition of even highly complex systems of inflectional morphology can be accounted for by a theoretical model that assumes rote storage and phonological analogy rather than formal symbolic rules. The simulations further suggest that future behavioural and computational studies should explore a more sensitive, non-binary dependent variable, and should use more fine-grained measures of phonological neighbourhood density when predicting the pattern of errors in children's speech.
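The friends-and-enemies decomposition of neighbourhood density can be sketched as follows. The mini-lexicon, the inflection-class labels, and the edit-distance-1 neighbourhood criterion are all invented for illustration; they are not the forms, classes, or similarity metric used in the study.

```python
def edit_distance(a, b):
    # Standard Levenshtein distance by dynamic programming.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[m][n]

def friends_enemies(target, lexicon, max_dist=1):
    # Friends: phonological neighbours in the SAME inflection class.
    # Enemies: phonological neighbours in a DIFFERENT inflection class.
    form, cls = target
    friends = enemies = 0
    for other_form, other_cls in lexicon:
        if other_form == form:
            continue
        if edit_distance(form, other_form) <= max_dist:
            if other_cls == cls:
                friends += 1
            else:
                enemies += 1
    return friends, enemies

# Hypothetical mini-lexicon of (phonological form, inflection class) pairs.
lexicon = [
    ("sata", "A"), ("kata", "A"), ("mata", "A"),
    ("satu", "B"), ("lasu", "B"),
]

counts = friends_enemies(("sata", "A"), lexicon)  # -> (2, 1)
```

Here "sata" has two friends ("kata", "mata") and one enemy ("satu"), so a model predicting class-A behaviour for it would weigh the enemy against the friends, rather than lumping all three neighbours into a single density count.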