Abstract
Accurate segmentation of skin lesions from dermatological images is essential for the early detection of melanoma and other skin cancers. Conventional methods based on convolutional neural networks (CNNs) and transformer architectures often struggle to capture both local and global contextual features, delineate irregular lesion boundaries, and remain robust against artifacts such as hair, shadows, and illumination variations. To overcome these challenges, we introduce LeSegGAN, a hybrid attention-enhanced generative adversarial network (GAN) framework for robust skin lesion segmentation. The generator combines convolutional and inception modules with residual connections and channel attention to extract multi-scale features, while a vision transformer (ViT)-based discriminator improves segmentation accuracy through adversarial learning. A composite loss function integrating weighted binary cross-entropy, Dice, and focal losses further addresses class imbalance and enhances performance. LeSegGAN is evaluated on four benchmark datasets, namely Waterloo Skin Cancer, MED-NODE, SD-260, and ISIC-2016. The proposed LeSegGAN consistently outperforms five state-of-the-art deep learning models (UNet, UNet++, SegNet, FCN, and DTP-Net), achieving accuracies of 0.9943, 0.9759, 0.9873, and 0.9724 on these datasets, with corresponding IoU scores of 0.9451, 0.9664, 0.8709, and 0.7717. These results highlight LeSegGAN’s strong generalization ability and robustness, demonstrating its potential for integration into computer-aided diagnostic systems for automated skin cancer detection.
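The composite loss described in the abstract can be sketched as a weighted sum of binary cross-entropy, Dice, and focal terms. The sketch below uses NumPy; the per-term weights (`pos_weight`, `alpha`, `beta`, `w_focal`) and the focal exponent `gamma` are illustrative assumptions, not the values reported in the paper.

```python
import numpy as np

def composite_loss(pred, target, pos_weight=2.0, alpha=1.0, beta=1.0,
                   w_focal=1.0, gamma=2.0, eps=1e-7):
    """Sketch of a weighted-BCE + Dice + focal composite segmentation loss.

    pred:   predicted lesion probabilities in (0, 1)
    target: binary ground-truth mask (same shape as pred)
    All weighting hyperparameters here are illustrative assumptions.
    """
    pred = np.clip(pred, eps, 1 - eps)

    # Weighted binary cross-entropy: up-weights lesion (positive) pixels.
    bce = -np.mean(pos_weight * target * np.log(pred)
                   + (1 - target) * np.log(1 - pred))

    # Dice loss: 1 - Dice coefficient, directly counters class imbalance.
    inter = np.sum(pred * target)
    dice = 1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

    # Focal loss: down-weights easy pixels via the (1 - p_t)^gamma factor.
    p_t = target * pred + (1 - target) * (1 - pred)
    focal = -np.mean((1 - p_t) ** gamma * np.log(p_t))

    return alpha * bce + beta * dice + w_focal * focal
```

A prediction close to the ground truth should yield a lower composite loss than a poorly calibrated one, which is the behavior this combination is designed to enforce under heavy foreground/background imbalance.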
| Original language | English |
|---|---|
| Pages (from-to) | 177019-177035 |
| Number of pages | 17 |
| Journal | IEEE Access |
| Volume | 13 |
| DOIs | |
| Publication status | Published - 2025 |
All Science Journal Classification (ASJC) codes
- General Computer Science
- General Materials Science
- General Engineering