In:
The Journal of the Acoustical Society of America, Acoustical Society of America (ASA), Vol. 155, No. 3_Supplement (2024-03-01), p. A292
Abstract:
The potential of using a speaker as a sensor to detect ear-canal conditions was demonstrated previously. This research presents our continuing effort to utilize a single speaker as a sensor by measuring its electrical impedance under varying acoustic loads. Electrical impedance data (magnitude and phase) from six different acoustic load conditions were collected as features for machine learning (ML) model training. To enhance learning performance, the data were pre-processed with normalization and augmented with level-shifting techniques. The raw data were converted to images to optimize the learning performance in classifying acoustic loads from the impedance measurements. Several image formats were tested, such as magnitude only, overlapped magnitude and phase, and rectangular form. A total of 2,100 samples (350 per condition) were used with CNN-based state-of-the-art (SOTA) models such as AlexNet, ResNet, and DenseNet. Both binary and multiclass classification were performed, achieving average accuracies of 0.9716 and 0.907, respectively. This single-speaker approach using impedance as ML features is poised to advance traditional acoustic sensing research by harnessing the power of AI.
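The abstract's pipeline (normalize impedance magnitude/phase, augment by level-shifting, stack the curves as a multi-channel image) can be sketched as follows. This is a minimal illustration only: the function name, the min-max normalization choice, and the shift magnitudes are assumptions, as the abstract does not specify the exact preprocessing parameters.

```python
import numpy as np

def impedance_to_image(magnitude, phase, n_shifts=3, shift=0.01):
    """Hypothetical sketch of the preprocessing described in the abstract.

    magnitude, phase: 1-D arrays over the frequency sweep.
    Returns augmented samples of shape (n_shifts, 2, len(magnitude)),
    i.e. n_shifts level-shifted copies of a 2-channel "image".
    """
    def norm(x):
        # Min-max normalization to [0, 1] (one plausible choice).
        x = np.asarray(x, dtype=float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    mag_n, ph_n = norm(magnitude), norm(phase)

    # Level-shifting augmentation: add small constant offsets to the
    # normalized magnitude curve to mimic measurement-level variation.
    offsets = np.linspace(-shift, shift, n_shifts)
    samples = [np.stack([np.clip(mag_n + o, 0.0, 1.0), ph_n]) for o in offsets]
    return np.stack(samples)
```

The resulting 2-channel arrays correspond to the "overlapped magnitude and phase" image form mentioned in the abstract and could be fed to a standard CNN classifier such as ResNet or DenseNet.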
Type of Medium:
Online Resource
ISSN:
0001-4966, 1520-8524
Language:
English
Publisher:
Acoustical Society of America (ASA)
Publication Date:
2024
ZDB ID:
1461063-2, 219231-7