Synchronizing Task Behavior: Aligning Multiple Tasks during Test-Time Training
By: Wooseong Jeong, Jegyeong Cho, Youngho Yoon and more
Potential Business Impact:
Helps AI adapt to new situations while keeping all of its skills working together.
Generalizing neural networks to unseen target domains is a significant challenge in real-world deployments. Test-time training (TTT) addresses this by using an auxiliary self-supervised task to reduce the domain gap caused by distribution shifts between the source and target. However, we find that when models are required to perform multiple tasks under domain shifts, conventional TTT methods suffer from unsynchronized task behavior, where the adaptation steps needed for optimal performance in one task may not align with the requirements of other tasks. To address this, we propose a novel TTT approach called Synchronizing Tasks for Test-time Training (S4T), which enables the concurrent handling of multiple tasks. The core idea behind S4T is that predicting task relations across domain shifts is key to synchronizing tasks during test time. To validate our approach, we apply S4T to conventional multi-task benchmarks, integrating it with traditional TTT protocols. Our empirical results show that S4T outperforms state-of-the-art TTT methods across various benchmarks.
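To make the test-time training setup described above concrete, here is a minimal sketch of a generic TTT adaptation loop: a shared encoder is updated on an auxiliary self-supervised loss computed from the unlabeled test batch, and the multiple task heads are then run on the adapted features. All names (encoder, task_heads, ssl_head, ssl_loss_fn) are assumed placeholders, and this is a generic TTT illustration rather than the authors' S4T implementation, which additionally predicts task relations to keep the heads synchronized.

```python
import torch

def test_time_adapt(encoder, task_heads, ssl_head, ssl_loss_fn, x, steps=1, lr=1e-4):
    """Generic test-time training loop (hypothetical sketch, not S4T itself).

    encoder     -- shared feature extractor (nn.Module), updated at test time
    task_heads  -- dict of task name -> prediction head (nn.Module), kept frozen
    ssl_head    -- head for the auxiliary self-supervised task (nn.Module)
    ssl_loss_fn -- self-supervised objective computed on the unlabeled batch x
    """
    optimizer = torch.optim.SGD(encoder.parameters(), lr=lr)
    encoder.train()
    for _ in range(steps):
        optimizer.zero_grad()
        feats = encoder(x)                        # shared features for all tasks
        loss = ssl_loss_fn(ssl_head(feats), x)    # auxiliary self-supervised loss on test data
        loss.backward()
        optimizer.step()

    # After adaptation, run every task head on the adapted representation.
    encoder.eval()
    with torch.no_grad():
        feats = encoder(x)
        return {name: head(feats) for name, head in task_heads.items()}
```

The unsynchronized behavior the paper targets shows up in exactly this kind of loop: the number of adaptation steps that helps one head can hurt another, which is the gap S4T addresses by modeling task relations across the domain shift.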
Similar Papers
CTA: Cross-Task Alignment for Better Test Time Training
CV and Pattern Recognition
Makes computer vision work better with new data.
Test-Time Training for Speech Enhancement
Audio and Speech Processing
Cleans up noisy speech on the fly.
Test-Time Alignment for Tracking User Interest Shifts in Sequential Recommendation
Information Retrieval
Helps movie apps guess what you'll watch next.