To recognize vowels in speech, listeners use acoustic information such as the spectral quality and duration of the vowel. Typically, the contribution of these cues (i.e., cue weighting) is inferred from categorization data. However, the two cues are processed differently, making it difficult to interpret categorization data alone. Although eye tracking is a promising method in speech perception research, it requires an explicit linking hypothesis and is based on aggregated data. We advocate instead the use of quantitative cognitive modeling of reaction time data. In our study, we analyze data from an experiment in which listeners categorized the Dutch vowels /A/ and /a:/. Using the linear ballistic accumulator model, we found that the differences in the contributions of spectral quality and duration were driven by how the cues affect the speed of information processing. The influence of spectral cues on processing speed was not only larger than that of duration, but also larger for /A/ than for /a:/. Listeners were biased to respond /a:/, and this bias was driven by their processing /a:/ vowels faster than /A/ vowels. The duration cue was likewise processed faster for /a:/ than for /A/. Finally, participants did not wait until vowel offset to begin processing duration information.
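The linear ballistic accumulator (LBA) model mentioned above treats each response option (here, /A/ and /a:/) as an evidence accumulator racing linearly toward a threshold; the winner determines the response and its finishing time the reaction time. As a minimal illustration of the model's mechanics (not the authors' implementation; all parameter values below are hypothetical, and negative drifts are simply clipped for brevity, whereas the full LBA handles them explicitly):

```python
import numpy as np

def simulate_lba(drifts, b=1.0, A=0.5, s=0.3, t0=0.2, n_trials=1000, rng=None):
    """Simulate LBA trials: each accumulator races linearly to threshold b.

    drifts : mean drift rates, one per response option (e.g. /A/ vs /a:/);
             a larger drift means faster evidence accumulation for that option.
    b      : response threshold; A : upper bound of the uniform start point;
    s      : trial-to-trial drift-rate SD; t0 : non-decision time (seconds).
    """
    rng = np.random.default_rng(rng)
    n_acc = len(drifts)
    # Start points drawn uniformly from [0, A]; drifts vary across trials.
    starts = rng.uniform(0, A, size=(n_trials, n_acc))
    v = rng.normal(drifts, s, size=(n_trials, n_acc))
    v = np.clip(v, 1e-6, None)  # simplification: force races to finish
    # Ballistic (noise-free within a trial): time = distance / rate.
    finish = (b - starts) / v + t0
    choice = finish.argmin(axis=1)   # index of the winning accumulator
    rt = finish.min(axis=1)          # reaction time of the winner
    return choice, rt

# A higher drift rate for option 1 makes it both more frequent and faster,
# mirroring how a processing-speed advantage produces a response bias.
choice, rt = simulate_lba([0.8, 1.2], rng=1)
```

In this sketch, a bias toward one response emerges purely from a drift-rate (processing speed) difference, which is the kind of account the abstract gives for the /a:/ bias.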