Similarity-Based Domain Adaptation with LLMs

Published: March 7, 2025 | arXiv ID: 2503.05281v1

By: Jie He, Wendi Zhou, Xiang Lorraine Li, and more

Potential Business Impact:

Lets companies adapt text classifiers to new domains by having a large language model label the new data, with no need to retrain on the original source examples.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Unsupervised domain adaptation leverages abundant labeled data from various source domains to generalize to unlabeled target data. Prior research has primarily focused on learning domain-invariant features across the source and target domains. However, these methods often require training a model on source-domain data, which is time-consuming and can limit model use for applications with different source data. This paper introduces a simple framework that exploits the impressive generalization capabilities of Large Language Models (LLMs) to annotate target data without any source-model training, followed by a novel similarity-based knowledge distillation loss. Our extensive experiments on cross-domain text classification show that our framework achieves strong performance, specifically a 2.44% accuracy improvement over the SOTA method.

Country of Origin
🇺🇸 🇬🇧 United States, United Kingdom

Page Count
9 pages

Category
Computer Science: Computation and Language