Multimodal Embedding Framework with Few-Shot Learning for Disease Prediction From Multimodal EHRs

Hemamalini, U. and Dhamayanthi, M. K. and Santhosh, Geeta and Rajeshkumar, C. and Rajakumari, K. and Sakthivanitha, M. (2025) Multimodal Embedding Framework with Few-Shot Learning for Disease Prediction From Multimodal EHRs. In: 2025 6th International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India.

Full text not available from this repository.

Abstract

Predictive modeling in healthcare typically depends on large labeled datasets, which are hard to obtain because of privacy restrictions, annotation cost, and data heterogeneity. To overcome these limitations, this study proposes a framework for few-shot disease prediction from multimodal Electronic Health Records (EHRs) that combines structured data (e.g., lab tests, vitals, diagnoses) with unstructured clinical text (e.g., progress notes, discharge summaries). The approach uses Large Language Models (LLMs) and contrastive representation learning to align the heterogeneous data modalities in a shared latent space. The study further proposes an in-context learning strategy in which a prompt conditions the model on a few labeled instances, enabling rapid adaptation to new diseases and different clinical environments. In low-resource settings, the proposed approach outperforms traditional models trained with sparse supervision. The findings demonstrate the potential of multimodal few-shot learning to advance personalized treatment, especially where labeled data are scarce.
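The contrastive alignment described in the abstract can be sketched as a symmetric InfoNCE-style objective over paired patient embeddings: the structured-data embedding and the clinical-text embedding of the same patient are pulled together, while cross-patient pairs are pushed apart. This is a minimal illustration under assumptions; the encoder outputs are simulated as random vectors, and every function name here is hypothetical, not the authors' implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def contrastive_alignment_loss(struct_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss: row i of struct_emb and row i of text_emb
    belong to the same patient (positives on the diagonal); all other
    cross-patient pairs act as negatives."""
    z_s = l2_normalize(struct_emb)
    z_t = l2_normalize(text_emb)
    logits = z_s @ z_t.T / temperature            # (N, N) scaled cosine similarities
    diag = np.arange(len(logits))                 # positive pairs on the diagonal
    loss_st = -log_softmax(logits, axis=1)[diag, diag].mean()  # struct -> text
    loss_ts = -log_softmax(logits, axis=0)[diag, diag].mean()  # text -> struct
    return (loss_st + loss_ts) / 2.0

# Toy demo: 4 patients embedded in an 8-dim shared latent space.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
aligned = contrastive_alignment_loss(z, z)                  # correct pairing
shuffled = contrastive_alignment_loss(z, z[[1, 2, 3, 0]])   # mismatched pairing
print(f"aligned={aligned:.3f}  shuffled={shuffled:.3f}")
```

Correct pairings yield a lower loss than mismatched ones; in training, the gradient of this objective would update the structured-data and text encoders so that the two modalities share one latent space.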

Item Type: Conference or Workshop Item (Paper)
Subjects: Computer Science Engineering > Automated Machine Learning
Domains: Computer Applications
Depositing User: Mr IR Admin
Date Deposited: 29 Aug 2025 10:33
Last Modified: 29 Aug 2025 10:33
URI: https://ir.vistas.ac.in/id/eprint/10784
