
A Custom Deep Learning Model With Explainable Artificial Intelligence for Interpretable Brain Tumor Classification

  • Duppala Rohan
  • Boddepalli Yaswanth
  • V. S. Sai Vardhan
  • G. Pradeep Reddy
  • K. Purna Prakash
  • Y. V. Pavan Kumar*
  • *Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Brain tumors are critical neurological disorders affecting mankind. Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans play an important role in diagnosing brain tumors but require expert interpretation. Although deep learning methods can automate tumor detection, their lack of interpretability and clinical trust remains a major limitation. To address this concern, this paper presents a BrainTumorClassificationNetwork-Convolutional Neural Network (BTCNet-CNN) model with several layers for detecting and classifying MRI images into four categories: glioma, meningioma, no tumor, and pituitary. The model was trained and evaluated on a publicly available dataset comprising 5824 MRI images. Augmentation techniques such as horizontal flipping, random rotations, zooming, shifting, and shearing were applied to the training set, increasing data diversity and enhancing BTCNet-CNN's robustness to variations in tumor orientation, scale, and position. To evaluate the effectiveness of the proposed model, popular pre-trained networks such as ResNet50 and InceptionV3 were also implemented on the dataset and compared using metrics such as accuracy, loss, precision, recall, F1 score, and AUROC. Among these three models, BTCNet-CNN achieved superior performance on the unseen data, with an accuracy of 99.31%, a loss of 0.118, and an average precision, recall, and F1 score of 99.25%, 99.25%, and 99.5%, respectively. Statistical significance was evaluated using McNemar's test, confirming that BTCNet-CNN's predictions are significantly better than those of ResNet50 and InceptionV3 (p < 0.05). To enhance trust in and interpretability of the model's decisions, various Explainable AI (XAI) techniques, including Occlusion Sensitivity, LIME, Smooth Gradients, and Saliency Maps, were integrated. Finally, a Streamlit-based web application was developed to facilitate real-time prediction and visualization, ensuring practical applicability in clinical diagnosis.
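The abstract reports using McNemar's test to compare classifiers on the same test set. As a minimal sketch of how such a comparison works, the test examines only the disagreement cells of a 2×2 contingency table: cases where one model is correct and the other is wrong. The function name and the example disagreement counts below are hypothetical illustrations, not figures from the paper; the implementation uses the standard chi-square approximation with continuity correction.

```python
import math

def mcnemar_test(b, c):
    """McNemar's test with continuity correction (chi-square, 1 df).

    b -- count of test cases where model A is correct and model B is wrong
    c -- count of test cases where model A is wrong and model B is correct
    Returns (chi2_statistic, p_value). The concordant cells (both right or
    both wrong) do not enter the statistic.
    """
    if b + c == 0:
        # Models never disagree: no evidence of a difference.
        return 0.0, 1.0
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > chi2) = erfc(sqrt(chi2/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: model A correct where B errs 10 times, reverse 2 times.
chi2, p = mcnemar_test(10, 2)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")  # p < 0.05 -> significant difference
```

For small disagreement counts (b + c below roughly 25), an exact binomial version of the test is usually preferred over this chi-square approximation.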

Original language: English
Article number: e70515
Journal: Engineering Reports
Volume: 7
Issue number: 12
DOIs
Publication status: Published - 12-2025

All Science Journal Classification (ASJC) codes

  • General Computer Science
  • General Engineering
