DUAL: Dynamic Uncertainty-Aware Learning

Published: May 21, 2025 | arXiv ID: 2506.03158v1

By: Jiahao Qin, Bei Peng, Feng Liu, and more

Potential Business Impact:

Helps computers learn more reliably from messy, uncertain information.

Business Areas:
Image Recognition, Data and Analytics, Software

Deep learning models frequently encounter feature uncertainty in diverse learning scenarios, significantly impacting their performance and reliability. This challenge is particularly acute in multi-modal settings, where models must integrate information from different sources, each with its own inherent uncertainties. We propose Dynamic Uncertainty-Aware Learning (DUAL), a unified framework that effectively handles feature uncertainty in both single-modal and multi-modal scenarios. DUAL introduces three key innovations: Dynamic Feature Uncertainty Modeling, which continuously refines uncertainty estimates by jointly considering feature characteristics and learning dynamics; Adaptive Distribution-Aware Modulation, which maintains balanced feature distributions by dynamically adjusting each sample's influence; and Uncertainty-aware Cross-Modal Relationship Learning, which explicitly models uncertainties in cross-modal interactions. Through extensive experiments, we demonstrate DUAL's effectiveness across multiple domains: in computer vision tasks, it achieves substantial accuracy improvements of 7.1% on CIFAR-10, 6.5% on CIFAR-100, and 2.3% on Tiny-ImageNet; in multi-modal learning, it demonstrates consistent accuracy gains of 4.1% on CMU-MOSEI and 2.8% on CMU-MOSI for sentiment analysis, while achieving a 1.4% accuracy improvement on MISR. The code will be available on GitHub soon.
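Since the paper's code is not yet released, the abstract's "Dynamic Feature Uncertainty Modeling" component can only be illustrated in spirit. The sketch below shows one common pattern that down-weighting by feature uncertainty could build on: a head that predicts a per-sample log-variance alongside the logits, with a heteroscedastic loss that reduces the influence of high-uncertainty samples. All class and function names here are hypothetical; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyAwareHead(nn.Module):
    """Hypothetical sketch: predicts per-sample uncertainty alongside logits.

    Not the DUAL implementation (unreleased); illustrates the general
    heteroscedastic-uncertainty pattern the abstract appears to describe.
    """

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Predict log-variance so the uncertainty estimate stays positive.
        self.log_var = nn.Linear(feat_dim, 1)

    def forward(self, features: torch.Tensor):
        return self.classifier(features), self.log_var(features)


def uncertainty_weighted_loss(logits, log_var, targets):
    """Down-weights high-uncertainty samples; the log-variance penalty
    prevents the model from trivially claiming infinite uncertainty."""
    per_sample_ce = F.cross_entropy(logits, targets, reduction="none")
    log_var = log_var.squeeze(-1)
    precision = torch.exp(-log_var)  # low variance -> high sample weight
    return (precision * per_sample_ce + log_var).mean()


if __name__ == "__main__":
    head = UncertaintyAwareHead(feat_dim=128, num_classes=10)
    feats = torch.randn(16, 128)           # stand-in backbone features
    labels = torch.randint(0, 10, (16,))
    logits, log_var = head(feats)
    loss = uncertainty_weighted_loss(logits, log_var, labels)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

The key design choice in this family of methods is that the uncertainty estimate is learned jointly with the task loss, so sample influence adapts over training rather than being fixed up front; how DUAL refines this with its distribution-aware modulation and cross-modal terms is specified in the paper itself.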

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)