FedRecon: Missing Modality Reconstruction in Heterogeneous Distributed Environments
By: Junming Liu, Yanting Gao, Yifei Sun, and more
Potential Business Impact:
Lets AI models keep learning even when some types of input data are missing.
Multimodal data are often incomplete and exhibit Non-Independent and Identically Distributed (Non-IID) characteristics in real-world scenarios. These inherent limitations lead to both modality heterogeneity through partial modality absence and data heterogeneity from distribution divergence, creating fundamental challenges for effective federated learning (FL). To address these coupled challenges, we propose FedRecon, the first method targeting simultaneous missing modality reconstruction and Non-IID adaptation in multimodal FL. Our approach first employs a lightweight Multimodal Variational Autoencoder (MVAE) to reconstruct missing modalities while preserving cross-modal consistency. Distinct from conventional imputation methods, we achieve sample-level alignment through a novel distribution mapping mechanism that guarantees both data consistency and completeness. Additionally, we introduce a strategy employing global generator freezing to prevent catastrophic forgetting, which in turn mitigates Non-IID fluctuations. Extensive evaluations on multimodal datasets demonstrate FedRecon's superior performance in modality reconstruction under Non-IID conditions, surpassing state-of-the-art methods. The code will be released upon paper acceptance.
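The abstract does not spell out how the lightweight MVAE reconstructs a missing modality, but a common mechanism for handling partial modality absence in multimodal VAEs is product-of-experts (PoE) fusion: each observed modality contributes a Gaussian posterior over the shared latent, absent modalities are simply skipped, and the fused latent can then be decoded into the missing modality. The sketch below illustrates only that fusion step; it is not FedRecon's actual implementation, and all names are illustrative.

```python
import numpy as np

def poe_fuse(mus, logvars, present):
    """Product-of-experts fusion of per-modality Gaussian posteriors.

    mus, logvars : lists of arrays of shape (latent_dim,), one per modality.
    present      : list of bools; missing modalities are skipped, so the
                   joint posterior is built only from observed modalities.
    Returns the fused (mu, logvar), including a standard-normal prior expert
    so the result is well-defined even if every modality is missing.
    """
    # The N(0, I) prior expert has unit precision and zero mean.
    precisions = [np.ones_like(mus[0])]
    weighted_mus = [np.zeros_like(mus[0])]
    for mu, logvar, is_present in zip(mus, logvars, present):
        if not is_present:
            continue                      # skip absent modality
        prec = np.exp(-logvar)            # precision = 1 / sigma^2
        precisions.append(prec)
        weighted_mus.append(prec * mu)
    total_prec = np.sum(precisions, axis=0)
    fused_mu = np.sum(weighted_mus, axis=0) / total_prec
    fused_logvar = -np.log(total_prec)    # fused variance = 1 / total precision
    return fused_mu, fused_logvar

# Two modalities with unit-variance posteriors over a 2-D latent.
mus = [np.array([1.0, 2.0]), np.array([3.0, -1.0])]
logvars = [np.zeros(2), np.zeros(2)]

mu_full, _ = poe_fuse(mus, logvars, [True, True])    # both observed
mu_miss, _ = poe_fuse(mus, logvars, [True, False])   # second modality missing
```

With both modalities observed, `mu_full` averages the expert means against the prior; with the second modality missing, `mu_miss` is still a valid posterior mean formed from the remaining modality alone — the fused latent can then be passed to the missing modality's decoder to produce a reconstruction.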
Similar Papers
Learning Reconfigurable Representations for Multimodal Federated Learning with Missing Data
Machine Learning (CS)
Helps computers learn from messy, incomplete data.
A Multi-Modal Federated Learning Framework for Remote Sensing Image Classification
CV and Pattern Recognition
Lets computers learn from different kinds of satellite pictures.
How Far Are We from Generating Missing Modalities with Foundation Models?
Multimedia
Helps computers fill in missing picture or text parts.