Towards a device-independent deep learning approach for the automated segmentation of sonographic fetal brain structures: A multi-center and multi-device validation

Abhi Lad, Adithya Narayan, Hari Shankar, Shefali Jain, Pooja Punjani Vyas, Divya Singh, Nivedita Hegde, Jagruthi Atada, Jens Thang, Saw Shier Nee, Arunkumar Govindarajan, Roopa Ps, Muralidhar V. Pai, Akhila Vasudeva, Prathima Radhakrishnan, Sripad Krishna Devalla

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Quality assessment of prenatal ultrasonography is essential for the screening of fetal central nervous system (CNS) anomalies. The interpretation of fetal brain structures is highly subjective, expertise-driven, and requires years of training, limiting access to quality prenatal care for all pregnant women. With recent advances in artificial intelligence (AI), computer-assisted diagnosis has shown promising results, providing expert-level diagnoses in a matter of seconds, and therefore has the potential to improve access to quality, standardized care for all. Specifically, with the advent of deep learning (DL), semantic segmentation methods have been proposed to assist in the precise identification of anatomy, which is essential for the reliable assessment of growth and neurodevelopment and for the detection of structural abnormalities. However, existing works identify only certain structures (e.g., cavum septum pellucidum [CSP], lateral ventricles [LV], cerebellum) from one of the axial views (transventricular [TV] or transcerebellar [TC]), limiting the scope for the thorough anatomical assessment that practice guidelines require for the screening of CNS anomalies. Further, existing works do not analyze the generalizability of these DL algorithms across images from multiple ultrasound devices and centers, limiting their real-world clinical impact. In this study, we propose a DL-based framework for the automated segmentation of 10 key fetal brain structures across the 2 axial planes in 2D fetal brain ultrasonography (USG) images. We developed a custom U-Net variant that uses InceptionV4 blocks as the feature extractor and leverages custom domain-specific data augmentation. Quantitatively, the mean (10 structures; test sets 1/2/3/4) Dice coefficients were 0.827, 0.802, 0.731, and 0.783. Irrespective of the USG device/center, the DL segmentations were qualitatively comparable to the corresponding manual segmentations.
The proposed DL system offered promising and generalizable performance (multi-center, multi-device) and, through UMAP analysis, also presented evidence of device-induced variation in image quality (a challenge to generalizability). Its clinical translation can assist a wide range of users across settings in delivering standardized, quality prenatal examinations.
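The Dice coefficients reported above measure pixel-wise overlap between a predicted and a manual segmentation mask. A minimal sketch of the per-structure metric in NumPy (function name and toy masks are illustrative, not the authors' code) is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 indicates perfect overlap.
    A small eps keeps the ratio defined when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # top two rows (8 px)
b = np.zeros((4, 4), dtype=bool); b[:3, :] = True   # top three rows (12 px)
print(round(dice_coefficient(a, b), 3))  # 2*8 / (8 + 12) = 0.8
```

A mean Dice of ~0.8 across 10 structures, as reported for test sets 1 and 2, corresponds to this level of overlap averaged over all structures and images.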

Original language: English
Title of host publication: Medical Imaging 2022
Subtitle of host publication: Computer-Aided Diagnosis
Editors: Karen Drukker, Khan M. Iftekharuddin
ISBN (Electronic): 9781510649415
Publication status: Published - 2022
Event: Medical Imaging 2022: Computer-Aided Diagnosis - Virtual, Online
Duration: 21-03-2022 to 27-03-2022

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
ISSN (Print): 1605-7422


Conference: Medical Imaging 2022: Computer-Aided Diagnosis
City: Virtual, Online

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Atomic and Molecular Physics, and Optics
  • Biomaterials
  • Radiology, Nuclear Medicine and Imaging


