Global Variational Inference Enhanced Robust Domain Adaptation
By: Lingkun Luo, Shiqiang Hu, Liming Chen
Potential Business Impact:
Helps computers learn from different data better.
Deep learning-based domain adaptation (DA) methods have shown strong performance by learning transferable representations. However, their reliance on mini-batch training limits global distribution modeling, leading to unstable alignment and suboptimal generalization. We propose Global Variational Inference Enhanced Domain Adaptation (GVI-DA), a framework that learns continuous, class-conditional global priors via variational inference to enable structure-aware cross-domain alignment. GVI-DA minimizes domain gaps through latent feature reconstruction and mitigates posterior collapse using global codebook learning with randomized sampling. It further improves robustness by discarding low-confidence pseudo-labels and generating reliable target-domain samples. Extensive experiments on four benchmarks and thirty-eight DA tasks demonstrate consistent state-of-the-art performance. We also derive the model's evidence lower bound (ELBO) and analyze the effects of prior continuity, codebook size, and pseudo-label noise tolerance. In addition, we compare GVI-DA with diffusion-based generative frameworks in terms of optimization principles and efficiency, highlighting both its theoretical soundness and practical advantages.
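The abstract's step of "discarding low-confidence pseudo-labels" can be sketched as confidence-thresholded filtering of target-domain predictions. This is a minimal illustrative sketch, not the paper's actual procedure: the 0.9 threshold and the toy logits are assumptions introduced here for demonstration.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def filter_pseudo_labels(logits, threshold=0.9):
    """Keep only target samples whose top predicted class probability
    meets `threshold`; return (kept indices, their pseudo-labels).
    The threshold value is an illustrative assumption."""
    probs = softmax(logits)
    confidence = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    keep = confidence >= threshold
    return np.nonzero(keep)[0], labels[keep]

# Toy target-domain logits: 3 samples, 2 classes.
logits = np.array([[4.0, 0.0],   # confident  -> kept, pseudo-label 0
                   [0.1, 0.0],   # ambiguous  -> discarded
                   [0.0, 3.0]])  # confident  -> kept, pseudo-label 1
idx, labels = filter_pseudo_labels(logits, threshold=0.9)
```

In a full pipeline, only the retained (sample, pseudo-label) pairs would feed into the class-conditional alignment step; the discarded ambiguous samples are what the paper's robustness claim is about.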
Similar Papers
Beyond Batch Learning: Global Awareness Enhanced Domain Adaptation
Machine Learning (CS)
Teaches computers to understand different kinds of pictures.
Gradual Domain Adaptation for Graph Learning
Machine Learning (CS)
Helps computers learn from different data better.
Variational Bayesian Adaptive Learning of Deep Latent Variables for Acoustic Knowledge Transfer
Audio and Speech Processing
Makes computers understand speech better in noisy places.