Federated In-Context Learning: Iterative Refinement for Improved Answer Quality
By: Ruhan Wang, Zhiyong Wang, Chengkai Huang, and more
Potential Business Impact:
Lets AI learn from many computers without sharing data.
For question-answering (QA) tasks, in-context learning (ICL) enables language models to generate responses without modifying their parameters by leveraging examples provided in the input. However, the effectiveness of ICL depends heavily on the availability of high-quality examples, which are often scarce due to data privacy constraints, annotation costs, and distribution disparities. A natural solution is to utilize examples stored on client devices, but existing approaches either require transmitting model parameters, which incurs significant communication overhead, or fail to fully exploit local datasets, limiting their effectiveness. To address these challenges, we propose Federated In-Context Learning (Fed-ICL), a general framework that enhances ICL through an iterative, collaborative process. Fed-ICL progressively refines responses by leveraging multi-round interactions between clients and a central server, improving answer quality without the need to transmit model parameters. We establish theoretical guarantees for the convergence of Fed-ICL and conduct extensive experiments on standard QA benchmarks, demonstrating that our proposed approach achieves strong performance while maintaining low communication costs.
Similar Papers
Implicit Federated In-context Learning For Task-Specific LLM Fine-Tuning
Machine Learning (CS)
Lets AI learn from private data without sharing it.
Leveraging In-Context Learning for Language Model Agents
Computation and Language
Helps AI agents learn by watching examples.
Improving Examples in Web API Specifications using Iterated-Calls In-Context Learning
Software Engineering
Creates useful examples for computer programs.