Two step convolutional neural network for automatic glottis localization and segmentation in stroboscopic videos

Varun Belagali, M. V. Achuth Rao, Pebbili Gopikishore, Rahul Krishnamurthy, Prasanta Kumar Ghosh

Research output: Contribution to journal › Article › peer-review


Abstract

Precise analysis of the vocal fold vibratory pattern in a stroboscopic video plays a key role in the evaluation of voice disorders. Automatic glottis segmentation is one of the preliminary steps in such analysis. In this work, it is divided into two subproblems, namely glottis localization and glottis segmentation. A two step convolutional neural network (CNN) approach is proposed for automatic glottis segmentation. Data augmentation is carried out using two techniques: 1) blind rotation (WB) and 2) rotation with respect to the glottis orientation (WO). The dataset used in this study contains stroboscopic videos of 18 subjects with sulcus vocalis, in which the glottis region is annotated by three speech-language pathologists (SLPs). The proposed two step CNN approach achieves an average localization accuracy of 90.08% and a mean Dice score of 0.65.
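As a rough illustration of the two-step idea described in the abstract, the sketch below chains a localization CNN and a segmentation CNN and computes the Dice overlap used for evaluation. The layer configuration, the 64×64 crop size, and the tensor shapes are illustrative assumptions made for this sketch, not the authors' architecture or training setup.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's exact
# architecture): step 1 localizes the glottis on the full frame, step 2
# segments it inside a crop around the localizer's peak response.
import torch
import torch.nn as nn


class LocalizerCNN(nn.Module):
    """Step 1: coarse glottis probability map over the whole frame."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))


class SegmenterCNN(nn.Module):
    """Step 2: fine glottis mask on the cropped region."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))


def crop_around_peak(frame, prob_map, size=64):
    """Crop a size x size window centred on the localizer's strongest response."""
    h, w = prob_map.shape[-2:]
    cy, cx = divmod(int(torch.argmax(prob_map[0])), w)
    y0 = max(0, min(cy - size // 2, h - size))
    x0 = max(0, min(cx - size // 2, w - size))
    return frame[:, y0:y0 + size, x0:x0 + size]


def dice_score(pred_mask, true_mask, eps=1e-6):
    """Dice = 2|A n B| / (|A| + |B|), the segmentation metric reported above."""
    inter = (pred_mask * true_mask).sum()
    return float((2 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps))


if __name__ == "__main__":
    frame = torch.rand(1, 3, 256, 256)        # one stroboscopic video frame
    prob = LocalizerCNN()(frame)[0]           # (1, 256, 256) localization map
    crop = crop_around_peak(frame[0], prob)   # (3, 64, 64) glottis region
    pred = (SegmenterCNN()(crop.unsqueeze(0))[0, 0] > 0.5).float()
    ref = torch.zeros_like(pred)              # placeholder for an SLP annotation
    print(pred.shape, dice_score(pred, ref))
```

In this reading of the abstract, the localization step only has to find the glottis coarsely (reported as 90.08% localization accuracy), so the segmentation network can work on a small, centred crop rather than the full frame.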

Original language: English
Pages (from-to): 4695-4713
Number of pages: 19
Journal: Biomedical Optics Express
Volume: 11
Issue number: 8
Publication status: Published - 2020

All Science Journal Classification (ASJC) codes

  • Biotechnology
  • Atomic and Molecular Physics, and Optics
