Dyspnea Severity Assessment Based on Vocalization Behavior with Deep Learning on the Telephone
Abstract
In this paper, a deep-learning system to assess dyspnea over the telephone using the mMRC scale is proposed. The method models the spontaneous behavior of subjects while they pronounce controlled phonetizations. These vocalizations were designed, or chosen, to cope with the stationary noise suppression of cellular handsets, to provoke different rates of exhaled air, and to stimulate different levels of fluency. Time-independent and time-dependent engineered features were proposed and selected, and a k-fold scheme with double validation was adopted to select the models with the greatest potential for generalization. Moreover, score fusion methods were also investigated to optimize the complementarity of the controlled phonetizations and of the engineered and selected features. The results reported here were obtained from 104 participants, of whom 34 were healthy individuals and 70 were patients with respiratory conditions. The subjects' vocalizations were recorded over a telephone call handled by an IVR server. The system provided an accuracy of 59% (i.e., estimating the correct mMRC), a root mean square error equal to 0.98, a false positive rate of 6%, a false negative rate of 11%, and an area under the ROC curve equal to 0.97. Finally, a prototype with an ASR-based automatic segmentation scheme was developed and implemented to estimate dyspnea online.
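The abstract reports two headline metrics for mMRC estimation: exact-match accuracy and root mean square error over the ordinal 0–4 scale. As a minimal sketch of how such metrics are computed (the labels below are hypothetical, not the paper's data):

```python
import math

def evaluate_mmrc(y_true, y_pred):
    """Exact-match accuracy and RMSE for ordinal mMRC estimates (0-4).

    RMSE is meaningful here because mMRC grades are ordered: predicting
    grade 3 for a true grade 2 is penalized less than predicting grade 4.
    """
    n = len(y_true)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    return accuracy, rmse

# Hypothetical ground truth and system output on the mMRC scale
truth = [0, 1, 2, 3, 4, 2, 1, 0]
pred  = [0, 1, 3, 3, 4, 1, 1, 1]
acc, rmse = evaluate_mmrc(truth, pred)
```

Because the scale is ordinal, RMSE complements exact-match accuracy by crediting near-miss grades, which is why the paper reports both.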
Additional information
Title according to WOS: Dyspnea Severity Assessment Based on Vocalization Behavior with Deep Learning on the Telephone
Title according to SCOPUS: SCOPUS_ID:85149737607
Journal: SENSORS
Volume: 23
Publisher: MDPI
Publication date: 2023
DOI: 10.3390/S23052441
Notes: ISI, SCOPUS