Test-time Correlation Alignment
By: Linjing You, Jiabao Lu, Xiayuan Huang
Potential Business Impact:
Helps AI learn from new data without seeing old data.
Deep neural networks often degrade under distribution shifts. Although domain adaptation offers a solution, privacy constraints often prevent access to source data, making Test-Time Adaptation (TTA, which adapts using only unlabeled test data) increasingly attractive. However, current TTA methods still face practical challenges: (1) a primary focus on instance-wise alignment, overlooking CORrelation ALignment (CORAL) because source correlations are unavailable; (2) complex backpropagation for model updates, incurring computational overhead; and (3) domain forgetting. To address these challenges, we provide a theoretical analysis of the feasibility of Test-time Correlation Alignment (TCA), demonstrating that correlation alignment between high-certainty instances and test instances can improve test performance with a theoretical guarantee. Based on this, we propose two simple yet effective algorithms: LinearTCA and LinearTCA+. LinearTCA applies a simple linear transformation to achieve both instance and correlation alignment without additional model updates, while LinearTCA+ serves as a plug-and-play module that can easily boost existing TTA methods. Extensive experiments validate our theoretical insights and show that TCA methods significantly outperform baselines across various tasks, benchmarks, and backbones. Notably, LinearTCA achieves higher accuracy using only 4% of the GPU memory and 0.6% of the computation time of the best TTA baseline, and it outperforms existing methods on CLIP by over 1.86%.
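To make the core idea concrete, below is a minimal sketch of CORAL-style correlation alignment applied at test time: test features are whitened and then re-colored with the covariance of high-certainty "anchor" features, which stand in for the unavailable source statistics, and means are shifted for instance-level alignment. The function name, the eigendecomposition route, and the ridge term are illustrative assumptions, not the paper's exact LinearTCA recipe.

```python
import torch

def coral_align(test_feats: torch.Tensor,
                anchor_feats: torch.Tensor,
                eps: float = 1e-5) -> torch.Tensor:
    """Align test features (n, d) to anchor features (m, d) in second-order
    statistics. Assumed sketch: whiten with the test covariance, re-color
    with the anchor covariance, then match means."""
    # Center both feature sets
    t = test_feats - test_feats.mean(dim=0, keepdim=True)
    a = anchor_feats - anchor_feats.mean(dim=0, keepdim=True)

    # Covariances with a small ridge for numerical stability
    d = t.shape[1]
    cov_t = t.T @ t / (t.shape[0] - 1) + eps * torch.eye(d)
    cov_a = a.T @ a / (a.shape[0] - 1) + eps * torch.eye(d)

    def sqrt_and_inv_sqrt(cov: torch.Tensor):
        # Matrix square root / inverse square root of a symmetric PSD matrix
        vals, vecs = torch.linalg.eigh(cov)
        vals = vals.clamp_min(eps)
        return (vecs @ torch.diag(vals.sqrt()) @ vecs.T,
                vecs @ torch.diag(vals.rsqrt()) @ vecs.T)

    sqrt_a, _ = sqrt_and_inv_sqrt(cov_a)
    _, inv_sqrt_t = sqrt_and_inv_sqrt(cov_t)

    # Whiten test features, re-color toward the anchor covariance,
    # and shift to the anchor mean (instance-level alignment)
    return t @ inv_sqrt_t @ sqrt_a + anchor_feats.mean(dim=0, keepdim=True)
```

Because the whole operation is a single linear transformation of extracted features, it requires no backpropagation or model updates, which is consistent with the efficiency claims in the abstract.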
Similar Papers
CTA: Cross-Task Alignment for Better Test Time Training
CV and Pattern Recognition
Makes computer vision work better with new data.
Neural Collapse in Test-Time Adaptation
CV and Pattern Recognition
Fixes AI mistakes when data changes.
Structural Alignment Improves Graph Test-Time Adaptation
Machine Learning (CS)
Helps AI learn from changing data without retraining.