Parallel Context-of-Experts Decoding for Retrieval Augmented Generation
By: Giulio Corallo, Paolo Papotti
Retrieval Augmented Generation faces a trade-off: concatenating documents into a long prompt enables multi-document reasoning but creates prefill bottlenecks, while encoding document KV caches separately offers speed but breaks cross-document interaction. We propose Parallel Context-of-Experts Decoding (Pced), a training-free framework that shifts evidence aggregation from the attention mechanism to the decoding stage. Pced treats retrieved documents as isolated "experts" and synchronizes their predictions via a novel retrieval-aware contrastive decoding rule that weighs expert logits against the model prior. This approach recovers cross-document reasoning capabilities without constructing shared attention across documents.
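To make the idea concrete, the following is a minimal sketch of one decoding step under this kind of scheme. It is an illustration, not the paper's exact rule: the function name `pced_step`, the contrast strength `alpha`, and the uniform expert weighting are all assumptions made for the example. Each retrieved document acts as an isolated expert producing its own next-token logits, and the pooled expert logits are contrasted against the prior logits of the model run without retrieved context.

```python
import numpy as np

def pced_step(expert_logits, prior_logits, alpha=0.5, weights=None):
    """One illustrative contrastive decoding step (hypothetical, not the paper's exact rule).

    expert_logits: (n_experts, vocab) next-token logits, one row per
        retrieved document treated as an isolated expert.
    prior_logits: (vocab,) logits of the model without retrieved context.
    alpha: assumed contrast strength penalizing prior-favored tokens.
    weights: assumed per-expert weights (e.g., from retrieval scores);
        defaults to uniform.
    """
    expert_logits = np.asarray(expert_logits, dtype=float)
    prior_logits = np.asarray(prior_logits, dtype=float)
    n_experts = expert_logits.shape[0]
    if weights is None:
        weights = np.full(n_experts, 1.0 / n_experts)
    pooled = weights @ expert_logits          # aggregate expert evidence
    scores = pooled - alpha * prior_logits    # contrast against the model prior
    return int(np.argmax(scores))             # greedy next-token choice
```

Under this sketch, a token strongly supported by the document experts but not by the context-free prior gets boosted, which is the intuition behind moving evidence aggregation into decoding rather than attention.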