ExpeTrans: LLMs Are Experiential Transfer Learners
By: Jinglong Gao, Xiao Ding, Lingxiao Zou, and more
Potential Business Impact:
Computers learn new tasks without human help.
Recent studies provide large language models (LLMs) with textual task-solving experiences via prompts to improve their performance. However, previous methods rely on substantial human labor or time to gather such experiences for each task, which is impractical given the growing variety of task types in user queries to LLMs. To address this issue, we design an autonomous experience-transfer framework to explore whether LLMs can mimic human cognitive intelligence and autonomously transfer experience from existing source tasks to newly encountered target tasks. This not only allows experience to be acquired without the extensive costs of previous methods, but also offers a novel path toward the generalization of LLMs. Experimental results on 13 datasets demonstrate that our framework effectively improves the performance of LLMs. Furthermore, we provide a detailed analysis of each module in the framework.
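To make the core idea concrete, here is a minimal sketch of experience transfer via prompting. It is not the paper's actual framework: the function names, prompt wording, and two-step structure (select a similar source task, then adapt its experience) are assumptions for illustration, and `llm` stands for any text-completion callable you supply.

```python
# Hypothetical sketch of autonomous experience transfer, assuming a generic
# `llm(prompt) -> str` completion function. Names and prompts below are
# illustrative, not the paper's actual modules.

from typing import Callable, Dict


def transfer_experience(
    llm: Callable[[str], str],
    source_experiences: Dict[str, str],  # source task name -> textual experience
    target_task: str,
) -> str:
    """Ask the LLM to pick the most relevant source task, then adapt
    its experience to the new target task."""
    catalog = "\n".join(
        f"- {name}: {exp}" for name, exp in source_experiences.items()
    )

    # Step 1: let the LLM select the most similar source task by name.
    chosen = llm(
        "Source tasks and their experiences:\n"
        f"{catalog}\n\n"
        "Which source task is most similar to this target task?\n"
        f"Target task: {target_task}\n"
        "Answer with the task name only."
    ).strip()

    # Step 2: adapt the chosen experience to the target task.
    return llm(
        f"Source experience for '{chosen}':\n"
        f"{source_experiences.get(chosen, '')}\n\n"
        f"Rewrite this experience so it helps with: {target_task}"
    )


def solve_with_experience(llm: Callable[[str], str], experience: str, query: str) -> str:
    # Prepend the transferred experience to the task prompt, as in
    # prior prompt-based experience methods.
    return llm(f"Experience:\n{experience}\n\nTask:\n{query}")
```

In use, `llm` could wrap any chat-completion API; the key point the abstract makes is that no human gathers the target-task experience, since the model derives it from source-task experiences on its own.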
Similar Papers
LLMs Are Globally Multilingual Yet Locally Monolingual: Exploring Knowledge Transfer via Language and Thought Theory
Computation and Language
Helps computers understand facts in any language.
Explicit Learning and the LLM in Machine Translation
Computation and Language
Computers learn new languages from books.
How Far Can LLMs Improve from Experience? Measuring Test-Time Learning Ability in LLMs with Human Comparison
Computation and Language
Computers learn and get smarter as they play.