Score: 2

Robust Multimodal Learning for Ophthalmic Disease Grading via Disentangled Representation

Published: March 7, 2025 | arXiv ID: 2503.05319v2

By: Xinkun Wang, Yifang Wang, Senwei Liang, and more

Potential Business Impact:

Could help ophthalmologists grade eye diseases more accurately, even when some imaging modalities are missing.

Business Areas:
Image Recognition, Data and Analytics, Software

This paper discusses how ophthalmologists often rely on multimodal data to improve diagnostic accuracy. However, complete multimodal data is rare in real-world applications due to a lack of medical equipment and concerns about data privacy. Traditional deep learning methods typically address these issues by learning representations in latent space. However, the paper highlights two key limitations of these approaches: (i) task-irrelevant redundant information (e.g., numerous slices) in complex modalities leads to significant redundancy in latent space representations; (ii) overlapping multimodal representations make it difficult to extract unique features for each modality.

To overcome these challenges, the authors propose the Essence-Point and Disentangle Representation Learning (EDRL) strategy, which integrates a self-distillation mechanism into an end-to-end framework to enhance feature selection and disentanglement for more robust multimodal learning. Specifically, the Essence-Point Representation Learning module selects discriminative features that improve disease grading performance. The Disentangled Representation Learning module separates multimodal data into modality-common and modality-unique representations, reducing feature entanglement and enhancing both robustness and interpretability in ophthalmic disease diagnosis. Experiments on multimodal ophthalmology datasets show that the proposed EDRL strategy significantly outperforms current state-of-the-art methods.
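To make the disentangling idea concrete, here is a minimal sketch of splitting two modality feature vectors into a modality-common component and per-modality unique residuals. This is an illustration of the general concept only, not the paper's actual EDRL implementation; all function names and the mean-plus-projection scheme are illustrative assumptions.

```python
# Illustrative sketch of common/unique disentanglement for two modalities.
# Not the paper's EDRL method: the mean-based "common" component and the
# projection-removal step are simplifying assumptions for demonstration.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def disentangle(feat_a, feat_b):
    """Split two modality feature vectors into a shared component and
    per-modality residuals (the 'unique' parts)."""
    # Shared component: element-wise mean of the two modality features.
    common = [(x + y) / 2 for x, y in zip(feat_a, feat_b)]
    # Unique parts: remove each feature's projection onto the common
    # component, leaving a residual orthogonal to it.
    denom = dot(common, common) or 1.0
    coef_a = dot(feat_a, common) / denom
    coef_b = dot(feat_b, common) / denom
    unique_a = [x - coef_a * c for x, c in zip(feat_a, common)]
    unique_b = [x - coef_b * c for x, c in zip(feat_b, common)]
    return common, unique_a, unique_b

common, ua, ub = disentangle([1.0, 2.0], [3.0, 0.0])
# The residuals are orthogonal to the common component by construction,
# mirroring the goal of reducing feature entanglement between modalities.
```

In a real model, the common and unique parts would be produced by learned encoders and enforced with losses (e.g., similarity and orthogonality terms) rather than by a fixed projection.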

Country of Origin
🇦🇪 United Arab Emirates, 🇦🇺 Australia

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition