Federated In-Context Learning: Iterative Refinement for Improved Answer Quality

Published: June 9, 2025 | arXiv ID: 2506.07440v1

By: Ruhan Wang, Zhiyong Wang, Chengkai Huang, and more

Potential Business Impact:

Lets AI learn from examples spread across many devices without sharing the underlying data or model parameters.

Business Areas:
Semantic Search, Internet Services

For question-answering (QA) tasks, in-context learning (ICL) enables language models to generate responses without modifying their parameters by leveraging examples provided in the input. However, the effectiveness of ICL heavily depends on the availability of high-quality examples, which are often scarce due to data privacy constraints, annotation costs, and distribution disparities. A natural solution is to utilize examples stored on client devices, but existing approaches either require transmitting model parameters, incurring significant communication overhead, or fail to fully exploit local datasets, limiting their effectiveness. To address these challenges, we propose Federated In-Context Learning (Fed-ICL), a general framework that enhances ICL through an iterative, collaborative process. Fed-ICL progressively refines responses by leveraging multi-round interactions between clients and a central server, improving answer quality without the need to transmit model parameters. We establish theoretical guarantees for the convergence of Fed-ICL and conduct extensive experiments on standard QA benchmarks, demonstrating that our proposed approach achieves strong performance while maintaining low communication costs.
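
The abstract describes a multi-round client-server protocol in which only candidate answers are exchanged, never model parameters or raw local examples. The sketch below illustrates one way such a loop could look; it is based only on the abstract, and the prompt format, the `generate` callable, and the majority-vote aggregation are illustrative assumptions rather than the paper's actual algorithm.

```python
# Minimal sketch of a Fed-ICL-style refinement loop (assumptions noted above).
from collections import Counter
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (question, answer) pairs held locally by a client


def build_prompt(examples: List[Example], question: str, draft: str) -> str:
    """Assemble an in-context prompt from local examples plus the current draft answer."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nCurrent draft answer: {draft}\nImproved answer:"


def client_refine(generate: Callable[[str], str],
                  local_examples: List[Example],
                  question: str,
                  draft: str) -> str:
    """A client refines the server's draft using only its local examples via ICL.
    No parameters are updated and no raw examples leave the device."""
    return generate(build_prompt(local_examples, question, draft))


def server_aggregate(candidates: List[str]) -> str:
    """Illustrative aggregation rule: majority vote over the clients' answers."""
    return Counter(candidates).most_common(1)[0][0]


def fed_icl(generate: Callable[[str], str],
            clients: List[List[Example]],
            question: str,
            rounds: int = 3) -> str:
    """Iteratively refine an answer over multiple client-server rounds."""
    draft = ""
    for _ in range(rounds):
        candidates = [client_refine(generate, ex, question, draft) for ex in clients]
        draft = server_aggregate(candidates)  # only text is communicated each round
    return draft
```

In this sketch, communication cost per round scales with the length of the exchanged answer strings rather than with model size, which is the property the abstract highlights.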

Country of Origin
🇺🇸 United States

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)