TY - GEN
T1 - Advancements in Federated Learning and Differential Privacy for Medical Data Analysis
AU - Anusuya, R.
AU - Karthika Renuka, D.
AU - Ashok Kumar, L.
AU - Abirami, B.
AU - Naveen Raj, R.
AU - Ckv, Dharaneesh
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Handling sensitive medical data is critical in healthcare, particularly with the rise of artificial intelligence applications. This study addresses these challenges by proposing a teacher-student framework employing differential privacy (DP) to protect data while maintaining model performance. Using Laplacian noise, predictions from teacher models are anonymized before aggregation to create student labels. The approach diverges from traditional Federated Learning (FL) by employing the Private Aggregation of Teacher Ensembles (PATE) methodology, specifically tailored for medical imaging datasets, such as COVID-19 CT scans. Experiments demonstrate a privacy-performance trade-off, with accuracies ranging from 72% to 85%, depending on the noise level. These findings underscore the framework's efficacy in balancing robust privacy preservation with utility. The study offers a scalable and secure method for privacy-critical healthcare applications, paving the way for reliable AI systems that adhere to stringent data protection standards. This work advances the practical integration of privacy in sensitive medical data analysis.
AB - Handling sensitive medical data is critical in healthcare, particularly with the rise of artificial intelligence applications. This study addresses these challenges by proposing a teacher-student framework employing differential privacy (DP) to protect data while maintaining model performance. Using Laplacian noise, predictions from teacher models are anonymized before aggregation to create student labels. The approach diverges from traditional Federated Learning (FL) by employing the Private Aggregation of Teacher Ensembles (PATE) methodology, specifically tailored for medical imaging datasets, such as COVID-19 CT scans. Experiments demonstrate a privacy-performance trade-off, with accuracies ranging from 72% to 85%, depending on the noise level. These findings underscore the framework's efficacy in balancing robust privacy preservation with utility. The study offers a scalable and secure method for privacy-critical healthcare applications, paving the way for reliable AI systems that adhere to stringent data protection standards. This work advances the practical integration of privacy in sensitive medical data analysis.
UR - https://www.scopus.com/pages/publications/105011824544
UR - https://www.scopus.com/pages/publications/105011824544#tab=citedBy
U2 - 10.1109/ICDSIS65355.2025.11071217
DO - 10.1109/ICDSIS65355.2025.11071217
M3 - Conference contribution
AN - SCOPUS:105011824544
T3 - 3rd International Conference on Data Science and Information System, ICDSIS 2025
BT - 3rd International Conference on Data Science and Information System, ICDSIS 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd International Conference on Data Science and Information System, ICDSIS 2025
Y2 - 16 May 2025 through 17 May 2025
ER -