In:
The Journal of the Acoustical Society of America, Acoustical Society of America (ASA), Vol. 136, No. 4, Supplement (2014-10-01), p. 2104
Abstract:
A critical step toward a neurological understanding of speech generation is to relate neural activity to the movement of articulators. Here, we describe a noninvasive system for simultaneously tracking the movement of the lips, jaw, tongue, and larynx for human neuroscience research carried out at the bedside. We combined three methods previously used separately: videography to track the lips and jaw, electroglottography to monitor the larynx, and ultrasonography to track the tongue. To characterize this system, we recorded articulator positions and acoustics from six speakers during production of nine American English vowels. We describe processing methods for the extraction of kinematic parameters from the raw signals and methods to account for artifacts across recording conditions. To understand the relationship between kinematics and acoustics, we used regularized linear regression between the vocal tract kinematics and speech acoustics to identify which, and how many, kinematic features are required to explain both across-vowel and within-vowel acoustics. Furthermore, we used unsupervised matrix factorization to derive "prototypical" articulator shapes, and used them as a basis for articulator analysis. These results demonstrate a multimodal system to noninvasively monitor speech articulators for clinical human neuroscience applications and introduce novel analytic methods for understanding articulator kinematics.
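The two analyses named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' pipeline: the data here are synthetic, the feature counts and the ridge penalty are arbitrary assumptions, and the matrix factorization is plain NMF via multiplicative updates, one common choice of unsupervised factorization.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Regularized (ridge) regression: kinematics -> acoustics ---
# Hypothetical data: 60 vowel tokens x 12 kinematic features
# (e.g., lip aperture, jaw height, tongue-contour points, larynx signal).
n_tokens, n_feats = 60, 12
X = rng.normal(size=(n_tokens, n_feats))
# Acoustic targets (say, formants F1 and F2), driven by only 3 features.
true_w = np.zeros((n_feats, 2))
true_w[:3] = rng.normal(size=(3, 2))
Y = X @ true_w + 0.1 * rng.normal(size=(n_tokens, 2))

# Closed-form ridge solution: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feats), X.T @ Y)

# Rank kinematic features by how strongly they predict the acoustics;
# sweeping this ranking answers "which, and how many, features" matter.
importance = np.linalg.norm(W, axis=1)
top = np.argsort(importance)[::-1]

# --- Unsupervised matrix factorization: prototypical shapes ---
# V holds 40 nonnegative articulator measurements of dimension 20;
# factor V ~ B @ H, where the k rows of H act as "prototypical" shapes.
V = np.abs(rng.normal(size=(40, 20)))
k = 3
B = np.abs(rng.normal(size=(40, k)))
H = np.abs(rng.normal(size=(k, 20)))
err0 = np.linalg.norm(V - B @ H)
for _ in range(200):  # standard multiplicative NMF updates
    H *= (B.T @ V) / (B.T @ B @ H + 1e-9)
    B *= (V @ H.T) / (B @ H @ H.T + 1e-9)
err = np.linalg.norm(V - B @ H)
```

After fitting, each token's row of `B` gives its coordinates in the basis of prototypical shapes, which is the sense in which the factorization supplies "a basis for articulator analysis."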
Type of Medium:
Online Resource
ISSN:
0001-4966, 1520-8524
Language:
English
Publisher:
Acoustical Society of America (ASA)
Publication Date:
2014
ZDB ID:
1461063-2