From Cross-Task Examples to In-Task Prompts: A Graph-Based Pseudo-Labeling Framework for In-context Learning
By: Zihan Chen, Song Wang, Xingbo Fu, and more
Potential Business Impact:
Teaches computers new tasks with fewer human-labeled examples.
The capability of in-context learning (ICL) enables large language models (LLMs) to perform novel tasks without parameter updates by conditioning on a few input-output examples. However, collecting high-quality examples for new or challenging tasks can be costly and labor-intensive. In this work, we propose a cost-efficient two-stage pipeline that reduces reliance on LLMs for data labeling. Our approach first leverages readily available cross-task examples to prompt an LLM and pseudo-label a small set of target task instances. We then introduce a graph-based label propagation method that spreads label information to the remaining target examples without additional LLM queries. The resulting fully pseudo-labeled dataset is used to construct in-task demonstrations for ICL. This pipeline combines the flexibility of cross-task supervision with the scalability of LLM-free propagation. Experiments across five tasks demonstrate that our method achieves strong performance while lowering labeling costs.
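The propagation stage lends itself to a short sketch. The code below is a minimal illustration of graph-based label propagation, not the paper's implementation, and it rests on assumptions the abstract does not specify: examples are represented by embedding vectors, the graph is a cosine-similarity kNN graph, and labels spread via the classic iterative update F ← αSF + (1−α)Y of Zhou et al. (2004). The function and parameter names (`propagate_labels`, `k`, `alpha`, `iters`) are illustrative only.

```python
import numpy as np

def propagate_labels(features, seed_labels, num_classes, k=10, alpha=0.9, iters=50):
    """Spread pseudo-labels from a few LLM-labeled seeds to all target examples.

    features:    (n, d) array of example embeddings (an assumption; the paper's
                 graph construction may differ)
    seed_labels: length-n int array; class index for seeded rows, -1 for unlabeled
    Returns a length-n array of propagated class indices.
    """
    n = features.shape[0]
    # Cosine-similarity kNN graph over example embeddings.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, 0.0)          # no self-loops
    W = np.zeros_like(sim)
    for i in range(n):
        nbrs = np.argsort(sim[i])[-k:]  # k most similar neighbors
        W[i, nbrs] = sim[i, nbrs]
    W = np.maximum(W, W.T)              # symmetrize the graph

    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d[d == 0] = 1.0                     # guard against isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # One-hot seed matrix; all-zero rows for unlabeled examples.
    Y = np.zeros((n, num_classes))
    seeded = seed_labels >= 0
    Y[np.arange(n)[seeded], seed_labels[seeded]] = 1.0

    # Iterative propagation: blend graph-smoothed scores with the seeds.
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)
```

In the pipeline described above, `seed_labels` would hold the small set of pseudo-labels obtained by prompting an LLM with cross-task examples; the propagated labels then supply the fully pseudo-labeled pool from which in-task ICL demonstrations are drawn, with no further LLM queries.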
Similar Papers
Rethinking Label Consistency of In-Context Learning: An Implicit Transductive Label Propagation Perspective
Artificial Intelligence
Helps AI learn faster with better examples.
Leveraging In-Context Learning for Language Model Agents
Computation and Language
Helps AI agents learn by watching examples.
A Framework for Quantifying How Pre-Training and Context Benefit In-Context Learning
Artificial Intelligence
Teaches computers to learn new things from examples.