Multi-task deep neural network models for learning COVID-19 disease representations from multimodal data

  • Veena Mayya*
  • K. Karthik
  • Krishnananda Prabhu Karadka
  • S. Sowmya Kamath

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Over the continued course of the COVID-19 pandemic, a significant volume of expert-written diagnosis reports has accumulated, capturing a multitude of symptoms and observations on diagnosed COVID-19 cases, along with expert-validated chest X-ray scans. The rich, latent information embedded in such unstructured expert-written diagnosis reports, and its value as a source of disease-specific knowledge, has been explored to only a limited extent. In this work, a convolutional attention-based dense (CAD) neural model for COVID-19 prediction is proposed. The model is trained on rich disease-specific parameters extracted from chest X-ray images and expert-written diagnostic text reports to support an evidence-based diagnosis. Scalability is ensured by incorporating content-based learning models that automatically generate diagnosis reports for identified COVID-19 cases, reducing radiologists' cognitive burden. Experimental evaluation showed that multimodal patient data plays a vital role in diagnosing early-stage cases, thereby helping to hasten the diagnosis process.
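The abstract describes a convolutional attention-based dense (CAD) model that fuses chest X-ray features with features from diagnostic text reports. The paper's actual architecture, layer sizes, and framework are not given here; the following is a minimal numpy sketch of that general pattern (a small convolutional image branch, an attention-pooled text branch, and a dense sigmoid head), with all shapes, parameter names, and the random stand-in inputs being illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv_gap(img, kernels):
    """Image branch: valid 2-D convolution per kernel, ReLU,
    then global average pooling -> one scalar feature per kernel."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape
    feats = []
    for k in kernels:
        fmap = np.array([[np.sum(img[i:i + kh, j:j + kw] * k)
                          for j in range(w - kw + 1)]
                         for i in range(h - kh + 1)])
        feats.append(relu(fmap).mean())
    return np.array(feats)

def attention_pool(tokens, query):
    """Text branch: scaled dot-product attention over token
    embeddings, returning a single pooled report vector."""
    scores = tokens @ query / np.sqrt(tokens.shape[1])
    w = np.exp(scores - scores.max())
    w /= w.sum()                      # softmax attention weights
    return w @ tokens

def cad_forward(img, tokens, p):
    """Fuse image and text features, then a dense sigmoid head
    producing a COVID-19 probability."""
    fused = np.concatenate([conv_gap(img, p["kernels"]),
                            attention_pool(tokens, p["query"])])
    logit = fused @ p["w"] + p["b"]
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(0)
params = {
    "kernels": rng.standard_normal((4, 3, 3)) * 0.1,  # 4 conv filters
    "query":   rng.standard_normal(8) * 0.1,          # attention query
    "w":       rng.standard_normal(12) * 0.1,         # dense head (4 + 8 inputs)
    "b":       0.0,
}
xray   = rng.standard_normal((16, 16))   # stand-in for an X-ray patch
report = rng.standard_normal((10, 8))    # stand-in for 10 token embeddings
prob = cad_forward(xray, report, params)
print(f"P(COVID-19) = {prob:.3f}")
```

In a real system the conv branch would be a deep CNN over full-resolution scans and the text branch a learned embedding of the report, but the fusion-then-dense-head structure sketched here is the core multimodal idea the abstract refers to.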

Original language: English
Pages (from-to): 501-515
Number of pages: 15
Journal: International Journal of Medical Engineering and Informatics
Volume: 15
Issue number: 6
DOIs
Publication status: Published - 2023

All Science Journal Classification (ASJC) codes

  • Medicine (miscellaneous)
  • Biomaterials
  • Biomedical Engineering
  • Health Informatics
