Video overview [here][1]. Feel free to ask questions in the comments, or in [this Zoom call][2] Thursday 12-2!
Short Abstract:
In English, a subject and its verb must agree in number: "Dogs run," but "A dog runs." However, when an intervening NP differs in number from the subject, speakers sometimes erroneously make the verb agree with this intervening "attractor" rather than with the subject. Studies have identified a range of factors that modulate this agreement attraction effect, but no existing model of agreement accounts for all of the data. Recent work in Natural Language Processing has shown that simple neural language models (NLMs) learn subject-verb agreement and make some human-like errors. In this project, we test a set of simple NLMs on their ability to replicate seven experimental results from the agreement attraction literature, and find that these general-purpose sequence learners replicate three of them. We take this to indicate (1) that some results from the literature can be explained by domain-general sequence-processing mechanisms and (2) that learning the others from the input may require more specific inductive biases. We suggest investigating these inductive biases as a complementary approach to modeling agreement attraction.
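For readers curious about the basic probe, here is a minimal sketch (not the project's actual code) of how attraction is typically measured in an NLM: feed the model a preamble whose subject and attractor mismatch in number, then compare the probability it assigns to the singular versus the plural verb form. The model choice (GPT-2 via the Hugging Face `transformers` library) and the "key to the cabinets" item are illustrative assumptions; the project itself tests simpler NLMs.

```python
# Hedged sketch of the verb-probability probe, assuming GPT-2 as a
# stand-in language model. The project's own NLMs and stimuli differ.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def verb_logprobs(preamble: str, sg_verb: str, pl_verb: str):
    """Log-probability of each verb form as the next word after the preamble."""
    ids = tokenizer.encode(preamble, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits[0, -1]       # next-token logits
    logp = torch.log_softmax(logits, dim=-1)
    sg_id = tokenizer.encode(" " + sg_verb)[0]  # first subword of each form
    pl_id = tokenizer.encode(" " + pl_verb)[0]
    return logp[sg_id].item(), logp[pl_id].item()

# Attraction configuration: singular subject, plural attractor.
sg, pl = verb_logprobs("The key to the cabinets", "is", "are")
print(f"log P(is) = {sg:.2f}, log P(are) = {pl:.2f}")
```

An attraction-like error shows up when the probability of the plural verb rises relative to a number-matched baseline preamble ("The key to the cabinet"), which is the contrast the experimental literature measures.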
[1]: https://mfr.osf.io/render?url=https://osf.io/jhxsp/?direct&mode=render&action=download&mode=render
[2]: https://JHUBlueJays.zoom.us/j/986571876