Transfer learning techniques for medical image analysis: A review

Padmavathi Kora, Chui Ping Ooi, Oliver Faust, U. Raghavendra, Anjan Gudigar, Wai Yee Chan, K. Meenakshi, K. Swaraja, Pawel Plawiak, U. Rajendra Acharya

Research output: Contribution to journal › Review article › peer-review

37 Citations (Scopus)

Abstract

Medical imaging is a useful tool for disease detection, and diagnostic imaging technology has enabled early diagnosis of medical conditions. Manual image analysis methods are labor-intensive and susceptible to intra- as well as inter-observer variability. Automated medical image analysis techniques can overcome these limitations. In this review, we investigated Transfer Learning (TL) architectures for automated medical image analysis. We discovered that TL has been applied to a wide range of medical imaging tasks, such as segmentation, object identification, disease categorization, and severity grading. We established that TL provides high-quality decision support and requires less training data than traditional deep learning methods. These advantageous properties arise from the fact that TL models have already been trained on large generic datasets, and a task-specific dataset is only used to customize the model. This eliminates the need to train the models from scratch. Our review shows that AlexNet, ResNet, VGGNet, and GoogLeNet are the most widely used TL models for medical image analysis. We found that these models can understand medical images, and that task-specific customization refines this ability, making TL models useful tools for medical image analysis.
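The abstract describes the core TL workflow: reuse a model pretrained on a large generic dataset and customize only a small part of it on task-specific medical images. The sketch below illustrates this idea under stated assumptions; it is not taken from the reviewed article. It assumes PyTorch and torchvision are available, uses ResNet-18 as a representative pretrained backbone, and the class count and training step are illustrative placeholders.

```python
# Minimal transfer-learning sketch (assumes PyTorch/torchvision;
# NUM_CLASSES and the training step are illustrative placeholders).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g., normal vs. abnormal -- placeholder

# 1. Start from a backbone pretrained on a large generic dataset (ImageNet).
model = models.resnet18(weights="IMAGENET1K_V1")

# 2. Freeze the generic feature extractor so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# 3. Replace the final fully connected layer with a task-specific classifier.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# 4. Optimize only the new head, so customization needs far less data
#    and compute than training the whole network from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One update on a batch from a task-specific medical image dataset."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, the same pattern applies to the other backbones named in the review (AlexNet, VGGNet, GoogLeNet): load pretrained weights, swap the classification head, and fine-tune either the head alone or additional layers depending on how much task-specific data is available.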

Original language: English
Pages (from-to): 79-107
Number of pages: 29
Journal: Biocybernetics and Biomedical Engineering
Volume: 42
Issue number: 1
DOIs
Publication status: Published - 01-01-2022

All Science Journal Classification (ASJC) codes

  • Biomedical Engineering
