Fovea Segmentation Using Semi-Supervised Learning

Ankita Ghosh*, Sahil Khose, Yogish S. Kamath, Neetha I.R. Kuzhuppilly, J. R. Harish Kumar

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite the increased accessibility of retinal fundus images in recent years, fovea segmentation remains an exacting task due to the insufficiency of labelled data. In this paper, we propose a deep learning pipeline that utilizes unlabelled data alongside labelled data for segmentation of the fovea. We train the DeepLabv3+ architecture, with EfficientNet-B3 deployed as the encoder, on 484 labelled images. Additionally, we introduce semi-supervised learning into our pipeline and train on 1200 unlabelled images by generating pseudo labels for them. We evaluate our results on the Jaccard index, Dice score, sensitivity, specificity and accuracy. Our Dice score of 82.43% and Jaccard index of 70.52% surpass the existing methods. We obtain 91.74% sensitivity, 99.75% specificity and 99.57% accuracy.
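The paper does not publish code, but the pipeline described in the abstract can be illustrated with a minimal sketch, assuming the segmentation_models_pytorch library (whose DeepLabV3Plus class supports an EfficientNet-B3 encoder). The loss function, learning rate, pseudo-label threshold and data loaders below are hypothetical placeholders, not the authors' published configuration.

```python
import torch
import segmentation_models_pytorch as smp

# Sketch only: the library choice (segmentation_models_pytorch), loss,
# learning rate and threshold are assumptions, not the paper's settings.

# DeepLabv3+ with an EfficientNet-B3 encoder, one-channel fovea mask.
model = smp.DeepLabV3Plus(
    encoder_name="efficientnet-b3",
    encoder_weights="imagenet",   # assumed initialisation
    in_channels=3,
    classes=1,
)

loss_fn = smp.losses.DiceLoss(mode="binary")               # assumed loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr

def train_epoch(loader):
    """One supervised pass over (image, mask) batches."""
    model.train()
    for images, masks in loader:
        optimizer.zero_grad()
        loss_fn(model(images), masks).backward()
        optimizer.step()

@torch.no_grad()
def pseudo_label(loader, threshold=0.5):
    """Binarise the model's predictions on unlabelled images to obtain
    pseudo masks (the threshold is an assumed value)."""
    model.eval()
    pairs = []
    for images in loader:
        probs = torch.sigmoid(model(images))
        pairs.append((images, (probs > threshold).float()))
    return pairs

def binary_metrics(pred, target, eps=1e-7):
    """Pixel-wise Dice, Jaccard, sensitivity, specificity and accuracy
    for a binarised prediction against a ground-truth mask."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    tn = ((1 - pred) * (1 - target)).sum()
    return {
        "dice":        2 * tp / (2 * tp + fp + fn + eps),
        "jaccard":     tp / (tp + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
    }

# Stage 1: train on the 484 labelled images; Stage 2: generate pseudo
# labels for the 1200 unlabelled images; Stage 3: retrain on the union.
# for _ in range(epochs): train_epoch(labelled_loader)
# extra = pseudo_label(unlabelled_loader)
# ... retrain on the labelled data plus `extra` ...
```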

Original language: English
Title of host publication: 2023 IEEE 20th India Council International Conference, INDICON 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 590-595
Number of pages: 6
ISBN (Electronic): 9798350305593
DOIs
Publication status: Published - 2023
Event: 20th IEEE India Council International Conference, INDICON 2023 - Hyderabad, India
Duration: 14-12-2023 to 17-12-2023

Publication series

Name: 2023 IEEE 20th India Council International Conference, INDICON 2023

Conference

Conference: 20th IEEE India Council International Conference, INDICON 2023
Country/Territory: India
City: Hyderabad
Period: 14-12-23 to 17-12-23

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture
  • Signal Processing
  • Information Systems and Management
  • Energy Engineering and Power Technology
  • Instrumentation
