<p><strong>Abstract</strong></p> <p>The goal of this study is to develop a novel online hearing screening system by integrating an automated speech-in-noise test, administrable to individuals regardless of their language, with artificial intelligence (AI) algorithms. The speech-in-noise test uses vowel-consonant-vowel stimuli in speech-shaped noise in a three-alternative forced-choice task. In addition to the speech reception threshold (SRT), estimated using a novel staircase procedure, the system extracts features such as average reaction time, test duration, percentage of correct responses, and number of trials. These features are fed into AI algorithms to train a multivariate classifier and identify ears with mild hearing loss. A range of AI algorithms was tested on a dataset of 156 tested ears (55 with mild or moderate hearing loss). We used both explainable AI (XAI) methods (e.g., Decision Tree, Logic Learning Machine) and conventional methods (e.g., Logistic Regression, Support Vector Machines, K-Nearest Neighbors). Compared to a conventional univariate classifier based on the SRT (cut-off SRT: -8 dB SNR; accuracy = 0.82; sensitivity = 0.70; specificity = 0.90), the AI-based multivariate classifiers achieved improved performance, particularly in terms of sensitivity (e.g., Logistic Regression: accuracy = 0.79; sensitivity = 0.79; specificity = 0.79; Support Vector Machines: accuracy = 0.78; sensitivity = 0.78; specificity = 0.79). The XAI methods revealed that specific features, i.e., the number of correct responses, age, and SRT, were the most important for identifying hearing loss. Further research will be needed to investigate the potential of AI to identify hearing loss and to monitor individual risk, for example by addressing the risk factors for hearing loss.
An additional module with an icon-based interface to assess specific risk factors (e.g., smoking, noise exposure, diabetes, cardiovascular disease) is under development and will be tested on a population of &gt;150 individuals.</p> <p><strong>Acknowledgement</strong> This study was partially supported by the Capita Foundation (project WHISPER, Widespread Hearing Impairment Screening and PrEvention of Risk, 2020 Auditory Research Grant).</p> <p>Contact: marta.lenatti@ieiit.cnr.it</p> <p>Available dates: May 3, May 4, May 5</p>
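The abstract does not specify the novel staircase used to estimate the SRT. As a rough illustration only, a conventional 2-down/1-up adaptive staircase for a three-alternative forced-choice (3-AFC) speech-in-noise task can be sketched as follows; the simulated listener, the step size, the stopping rule, and all parameter values are assumptions for demonstration, not the authors' procedure:

```python
# Hypothetical sketch: a conventional 2-down/1-up adaptive staircase
# converging near the 70.7%-correct point of a 3-AFC task.
# The listener is simulated with a logistic psychometric function;
# true_srt, slope, step size, and stopping rule are invented values.
import math
import random

def simulated_response(snr_db, true_srt=-8.0, slope=1.0, chance=1 / 3):
    """Return True (correct) with probability given by a 3-AFC
    logistic psychometric function at the presented SNR."""
    p = chance + (1 - chance) / (1 + math.exp(-slope * (snr_db - true_srt)))
    return random.random() < p

def run_staircase(start_snr=0.0, step_db=2.0, max_trials=60, n_reversals_stop=8):
    """Run a 2-down/1-up staircase and return (SRT estimate, trial history)."""
    snr, correct_streak, direction = start_snr, 0, 0
    reversals, history = [], []
    for _ in range(max_trials):
        correct = simulated_response(snr)
        history.append((snr, correct))
        if correct:
            correct_streak += 1
            if correct_streak == 2:      # 2-down: make the task harder
                correct_streak = 0
                if direction == +1:      # movement changed sign -> reversal
                    reversals.append(snr)
                direction = -1
                snr -= step_db
        else:
            correct_streak = 0           # 1-up: make the task easier
            if direction == -1:
                reversals.append(snr)
            direction = +1
            snr += step_db
        if len(reversals) >= n_reversals_stop:
            break
    # Estimate the SRT as the mean SNR over the last reversals.
    last = reversals[-6:]
    srt = sum(last) / len(last) if last else snr
    return srt, history

random.seed(1)
srt_est, trials = run_staircase()
```

Features such as those listed in the abstract (reaction time, test duration, percentage of correct responses, number of trials) would then be computed from the trial history alongside the SRT estimate.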