Modality-Aware SAM: Sharpness-Aware-Minimization Driven Gradient Modulation for Harmonized Multimodal Learning
By: Hossein R. Nowdeh, Jie Ji, Xiaolong Ma, and more
Potential Business Impact:
Helps computers learn better from different kinds of information.
In multimodal learning, dominant modalities often overshadow others, limiting generalization. We propose Modality-Aware Sharpness-Aware Minimization (M-SAM), a model-agnostic framework that applies to many modalities and supports both early- and late-fusion scenarios. In every iteration, M-SAM optimizes learning in three steps. First, it identifies the dominant modality based on each modality's contribution to accuracy, measured with Shapley values. Second, it decomposes the loss landscape; in other words, it modulates the loss to prioritize the robustness of the model with respect to the dominant modality. Third, M-SAM updates the weights by backpropagating the modulated gradients. This ensures robust learning for the dominant modality while enhancing the contributions of the other modalities, allowing the model to explore and exploit complementary features that strengthen overall performance. Extensive experiments on four diverse datasets show that M-SAM outperforms the latest state-of-the-art optimization and gradient-manipulation methods, significantly balancing and improving multimodal learning.
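Since the abstract walks through a concrete three-step loop, here is a minimal sketch in PyTorch of how such a loop could look for two modalities with late fusion. Everything in it is an illustrative assumption rather than the paper's implementation: the toy model TwoModalityNet, the exact two-player Shapley computation over masked-modality accuracies, the perturbation radius rho, and the choice to restrict the SAM ascent step to the dominant branch are all stand-ins for the method the abstract only names.

```python
# Hedged sketch of the three-step M-SAM loop described in the abstract.
# All names and design choices below are illustrative assumptions.
import torch
import torch.nn as nn

class TwoModalityNet(nn.Module):
    """Toy late-fusion model: one encoder per modality, summed logits."""
    def __init__(self, d_a, d_b, n_classes):
        super().__init__()
        self.enc_a = nn.Linear(d_a, n_classes)  # modality-A branch
        self.enc_b = nn.Linear(d_b, n_classes)  # modality-B branch

    def forward(self, x_a, x_b, mask=(1.0, 1.0)):
        # Masking a modality to zero lets us score modality subsets.
        return mask[0] * self.enc_a(x_a) + mask[1] * self.enc_b(x_b)

def shapley_contributions(model, x_a, x_b, y):
    """Exact two-player Shapley values of per-batch accuracy."""
    def acc(mask):
        with torch.no_grad():
            return (model(x_a, x_b, mask).argmax(1) == y).float().mean().item()
    v_ab, v_a, v_b, v_0 = acc((1, 1)), acc((1, 0)), acc((0, 1)), acc((0, 0))
    phi_a = 0.5 * (v_a - v_0) + 0.5 * (v_ab - v_b)
    phi_b = 0.5 * (v_b - v_0) + 0.5 * (v_ab - v_a)
    return phi_a, phi_b

def msam_step(model, opt, loss_fn, x_a, x_b, y, rho=0.05):
    # Step 1: identify the dominant modality from Shapley contributions.
    opt.zero_grad()
    phi_a, phi_b = shapley_contributions(model, x_a, x_b, y)
    dominant = model.enc_a if phi_a >= phi_b else model.enc_b

    # Step 2: SAM ascent restricted to the dominant branch: perturb its
    # weights toward higher loss so the update favors flatness there.
    loss_fn(model(x_a, x_b), y).backward()
    grads = [p.grad for p in dominant.parameters()]
    norm = torch.stack([g.norm() for g in grads]).norm() + 1e-12
    eps = []
    with torch.no_grad():
        for p, g in zip(dominant.parameters(), grads):
            e = rho * g / norm
            p.add_(e)
            eps.append(e)
    opt.zero_grad()

    # Step 3: backpropagate the loss at the perturbed point (the
    # "modulated" gradients), undo the perturbation, then update.
    loss = loss_fn(model(x_a, x_b), y)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(dominant.parameters(), eps):
            p.sub_(e)
    opt.step()
    opt.zero_grad()
    return loss.item()

# Quick smoke test on random data.
model = TwoModalityNet(8, 8, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x_a, x_b, y = torch.randn(32, 8), torch.randn(32, 8), torch.randint(0, 3, (32,))
msam_step(model, opt, nn.CrossEntropyLoss(), x_a, x_b, y)
```

Restricting the ascent perturbation to the dominant branch is one plausible reading of "modulating the loss in favor of the dominant modality"; the paper may instead reweight per-modality loss terms or perturb all parameters with modality-dependent radii.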
Similar Papers
Asynchronous Sharpness-Aware Minimization For Fast and Accurate Deep Learning
Machine Learning (CS)
Makes smart computer programs learn faster and better.
Sharpness-Aware Minimization: General Analysis and Improved Rates
Optimization and Control
Makes computer learning models work better.
Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise
Machine Learning (CS)
Makes computer learning models work better.