TY - JOUR
T1 - WeakSegNet: Combining Unsupervised, Few-Shot and Weakly Supervised Methods for the Semantic Segmentation of Low-Magnification Effusion Cytology Images
T2 - IEEE Access
AU - Aboobacker, Shajahan
AU - Vijayasenan, Deepu
AU - Sumam David, S.
AU - Suresh, Pooja K.
AU - Sreeram, Saraswathy
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2025
Y1 - 2025
N2 - Effusion cytology analysis can be time-consuming for cytopathologists, but the burden can be reduced through automatic malignancy detection. The main challenge in automating this process is the cost of pixel-wise labeling. We propose WeakSegNet, a new model that addresses semantic segmentation of low-magnification images using only four images with pixel-wise labels. WeakSegNet combines unsupervised, few-shot, and weakly supervised learning methods. In the first stage, an unsupervised model, DeepClusterSeg, learns homogeneous structures from different images. A few-shot stage then uses only four images with pixel-wise labels to map these homogeneous structures to the required classes. The final stage uses image-level labels to predict precise classes through weakly supervised learning. We conducted our experiments on a dataset of 345 images from KMC Hospital, MAHE, and evaluated the results with 5-fold cross-validation. The proposed model achieved promising results, with an F-score of 0.85 and an IoU of 0.81 for the malignant class, surpassing standard k-means clustering with weakly supervised learning (an F-score of 0.65 and an IoU of 0.61). Semantic segmentation of low-magnification images with our approach eliminated 47% of the sub-regions that would otherwise need to be scanned at high magnification. This approach reduces the workload of cytopathologists while maintaining high accuracy in effusion cytology malignancy detection.
AB - Effusion cytology analysis can be time-consuming for cytopathologists, but the burden can be reduced through automatic malignancy detection. The main challenge in automating this process is the cost of pixel-wise labeling. We propose WeakSegNet, a new model that addresses semantic segmentation of low-magnification images using only four images with pixel-wise labels. WeakSegNet combines unsupervised, few-shot, and weakly supervised learning methods. In the first stage, an unsupervised model, DeepClusterSeg, learns homogeneous structures from different images. A few-shot stage then uses only four images with pixel-wise labels to map these homogeneous structures to the required classes. The final stage uses image-level labels to predict precise classes through weakly supervised learning. We conducted our experiments on a dataset of 345 images from KMC Hospital, MAHE, and evaluated the results with 5-fold cross-validation. The proposed model achieved promising results, with an F-score of 0.85 and an IoU of 0.81 for the malignant class, surpassing standard k-means clustering with weakly supervised learning (an F-score of 0.65 and an IoU of 0.61). Semantic segmentation of low-magnification images with our approach eliminated 47% of the sub-regions that would otherwise need to be scanned at high magnification. This approach reduces the workload of cytopathologists while maintaining high accuracy in effusion cytology malignancy detection.
UR - https://www.scopus.com/pages/publications/105013397400
UR - https://www.scopus.com/inward/citedby.url?scp=105013397400&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2025.3598953
DO - 10.1109/ACCESS.2025.3598953
M3 - Article
AN - SCOPUS:105013397400
SN - 2169-3536
VL - 13
SP - 144467
EP - 144478
JO - IEEE Access
JF - IEEE Access
ER -