From Entanglement to Alignment: Representation Space Decomposition for Unsupervised Time Series Domain Adaptation
By: Rongyao Cai, Ming Jin, Qingsong Wen, and more
Potential Business Impact:
Helps models trained on one source of time series data (e.g., one set of sensors) keep working on related data from a different source, without requiring new labels.
Domain shift poses a fundamental challenge in time series analysis, where models trained on a source domain often fail dramatically when applied to a target domain with a similar yet distinct distribution. While current unsupervised domain adaptation (UDA) methods attempt to align cross-domain feature distributions, they typically treat features as indivisible entities, ignoring the intrinsic compositions that govern domain adaptation. We introduce DARSD, a novel UDA framework with theoretical explainability that explicitly tackles UDA from the perspective of representation space decomposition. Our core insight is that effective domain adaptation requires not just alignment, but principled disentanglement of transferable knowledge from mixed representations. DARSD consists of three synergistic components: (I) an adversarially learnable common invariant basis that projects original features into a domain-invariant subspace while preserving semantic content; (II) a prototypical pseudo-labeling mechanism that dynamically separates target features based on confidence, curbing error accumulation; and (III) a hybrid contrastive optimization strategy that simultaneously enforces feature clustering and consistency while mitigating emerging distribution gaps. Comprehensive experiments on four benchmarks (WISDM, HAR, HHAR, and MFD) demonstrate DARSD's superiority over 12 UDA algorithms, achieving the best performance in 35 out of 53 scenarios and ranking first across all benchmarks.
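To make components (I) and (II) more concrete, below is a minimal, hypothetical PyTorch sketch: a learnable projection basis with an orthogonality penalty standing in for the common invariant basis, and confidence-gated prototypical pseudo-labels for target samples. The dimensions, temperature, threshold `tau`, and all function names are illustrative assumptions; the adversarial training of the basis and the hybrid contrastive loss (III) are omitted. This is not the authors' implementation.

```python
# Minimal sketch of ideas (I) and (II) from the abstract -- NOT the DARSD code.
# Assumed (hypothetical) settings: feature_dim=128, basis_dim=64, tau=0.9,
# cosine-similarity prototypes with a softmax temperature of 0.1.
import torch
import torch.nn.functional as F

feature_dim, basis_dim, num_classes, tau = 128, 64, 6, 0.9

# (I) Learnable basis projecting encoder features into a shared subspace.
#     Orthonormality is encouraged via a simple penalty; the paper's
#     adversarial objective for this basis is omitted for brevity.
basis = torch.nn.Parameter(torch.randn(feature_dim, basis_dim) * 0.02)

def project(features: torch.Tensor) -> torch.Tensor:
    """Project features of shape (B, feature_dim) onto the basis -> (B, basis_dim)."""
    return features @ basis

def orthogonality_penalty() -> torch.Tensor:
    """Penalize deviation from orthonormal columns: ||B^T B - I||_F^2."""
    gram = basis.t() @ basis
    return ((gram - torch.eye(basis_dim)) ** 2).sum()

# (II) Prototypical pseudo-labeling: class prototypes come from labeled source
#      features; target samples above a confidence threshold receive pseudo-labels.
def class_prototypes(src_feats: torch.Tensor, src_labels: torch.Tensor) -> torch.Tensor:
    protos = torch.stack([src_feats[src_labels == c].mean(0) for c in range(num_classes)])
    return F.normalize(protos, dim=1)

def confident_pseudo_labels(tgt_feats: torch.Tensor, protos: torch.Tensor):
    sims = F.normalize(tgt_feats, dim=1) @ protos.t()        # cosine similarity to prototypes
    conf, labels = F.softmax(sims / 0.1, dim=1).max(dim=1)   # temperature 0.1 (assumed)
    mask = conf >= tau                                        # confident vs. uncertain split
    return labels, mask

# Toy usage with random tensors standing in for encoder outputs.
src = torch.randn(32, feature_dim)
src_y = torch.arange(32) % num_classes   # ensures every class has source samples
tgt = torch.randn(32, feature_dim)

protos = class_prototypes(project(src), src_y)
pseudo_y, confident = confident_pseudo_labels(project(tgt), protos)
loss = orthogonality_penalty()  # would be combined with contrastive terms in practice
print(confident.float().mean().item(), loss.item())
```

In a full pipeline, the confident subset would feed supervised contrastive terms while the uncertain subset is handled separately, matching the confidence-based separation described in the abstract.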
Similar Papers
Simulations of Common Unsupervised Domain Adaptation Algorithms for Image Classification
Machine Learning (CS)
Helps computers learn from different data.
Domain Adaptation and Entanglement: an Optimal Transport Perspective
Machine Learning (CS)
Makes machine learning models work better on new data.