Abstract
Speech articulation for producing a given speech sound varies across speakers due to differences in their vocal tract morphologies, even though the underlying speech motor actions are executed as relatively invariant gestures [1]. While the invariant articulatory gestures are driven by the linguistic content of the spoken utterance, the component of speech articulation that varies across speakers reflects speaker-specific and other paralinguistic information. In this work, we present a formulation to decompose the speech articulation of multiple speakers uttering the same sentence into variant and invariant components. The variant component is found to be a better representation for discriminating speakers than the full speech articulation, which includes the invariant part. Experiments with real-time magnetic resonance imaging (rtMRI) videos of speech production from multiple speakers show that the variant component of speech articulation yields higher frame-level speaker identification accuracy than the full articulation and than acoustic features, by 29.9% and 9.4% (absolute), respectively.
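The paper's exact decomposition is not reproduced in this listing. As a rough illustrative sketch only (not the authors' formulation), one simple way to separate an invariant component from speaker-specific variation in time-aligned articulation trajectories is to treat the cross-speaker mean as the invariant part and each speaker's residual as the variant part; the array shapes and alignment assumption below are hypothetical:

```python
import numpy as np

def decompose_articulation(trajectories):
    """Split time-aligned articulation trajectories into an invariant
    (speaker-independent) component and per-speaker variant residuals.

    trajectories: array of shape (n_speakers, n_frames, n_features),
    assumed time-aligned across speakers for the same sentence.
    This mean/residual split is an illustrative assumption, not the
    decomposition proposed in the paper.
    """
    invariant = trajectories.mean(axis=0)           # shared-gesture estimate
    variant = trajectories - invariant[None, :, :]  # speaker-specific residual
    return invariant, variant

# Toy data: 3 "speakers", 4 frames, 2 articulatory features
rng = np.random.default_rng(0)
shared = rng.normal(size=(4, 2))                    # common underlying gesture
traj = shared[None] + 0.1 * rng.normal(size=(3, 4, 2))
inv, var = decompose_articulation(traj)
```

By construction the variant residuals average to zero across speakers, and invariant plus variant reconstructs each speaker's trajectory exactly; in the paper, it is the variant part that is then used as the feature for frame-level speaker identification.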
Original language | English |
---|---|
Title of host publication | 2015 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2015 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 4265-4269 |
Number of pages | 5 |
Volume | 2015-August |
ISBN (Electronic) | 9781467369978 |
DOIs | |
Publication status | Published - 01-01-2015 |
Externally published | Yes |
Event | 40th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2015 - Brisbane, Australia Duration: 19-04-2015 → 24-04-2015 |
Conference
Conference | 40th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2015 |
---|---|
Country/Territory | Australia |
City | Brisbane |
Period | 19-04-15 → 24-04-15 |
All Science Journal Classification (ASJC) codes
- Software
- Signal Processing
- Electrical and Electronic Engineering