Many current models of sentence comprehension employ a content-addressable
memory architecture that does not privilege structural information (Lewis
& Vasishth 2005; Van Dyke, Lewis & Vasishth 2013, a.o.). A seminal finding
comes from Van Dyke & McElree (2006), who suggest that even nouns outside
the current sentential context generate interference with intra-sentential
dependencies. In a preregistered study, we show that these results do not
reflect similarity-based interference but instead facilitated thematic
integration under increased memory load: integration is easier when the
verb and the filler are semantically related than when they are not.
Meeting ID
meet.google.com/jhc-oezk-ohn
Available for questions on Google Hangouts on Friday, March 20, 2020, 12:10 - 4pm (Eastern Time Zone)