Score: 1

qa-FLoRA: Data-free query-adaptive Fusion of LoRAs for LLMs

Published: December 12, 2025 | arXiv ID: 2512.11366v1

By: Shreya Shukla, Aditya Sriram, Milinda Kuppur Narayanaswamy, and more

BigTech Affiliations: Mercedes-Benz

Potential Business Impact:

Lets a single AI model combine multiple specialized skills to answer complex, multi-domain queries.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The deployment of large language models for specialized tasks often requires domain-specific parameter-efficient finetuning through Low-Rank Adaptation (LoRA) modules. However, effectively fusing these adapters to handle complex, multi-domain composite queries remains a critical challenge. Existing LoRA fusion approaches either use static weights, which assign equal relevance to each participating LoRA, or require data-intensive supervised training for every possible LoRA combination to obtain the respective optimal fusion weights. We propose qa-FLoRA, a novel query-adaptive, data- and training-free method for LoRA fusion that dynamically computes layer-level fusion weights by measuring the distributional divergence between the base model and the respective adapters. Our approach eliminates the need for composite training data or domain-representative samples, making it readily applicable to existing adapter collections. Extensive experiments across nine multilingual composite tasks, spanning mathematics, coding, and medical domains, show that qa-FLoRA outperforms static fusion by ~5% with LLaMA-2 and ~6% with LLaMA-3, and training-free baselines by ~7% with LLaMA-2 and ~10% with LLaMA-3, while significantly closing the gap with supervised baselines. Further, layer-level analysis of our fusion weights reveals interpretable fusion patterns, demonstrating the effectiveness of our approach for robust multi-domain adaptation.
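
A minimal sketch of the core idea described in the abstract: per-layer fusion weights derived from how much each adapter's activations diverge from the base model's activations on the current query, with no training data involved. The divergence measure (KL over softmax-normalized activations), the proportional weighting, and the function names (`layer_fusion_weights`, `fuse_lora_deltas`) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def layer_fusion_weights(base_hidden, adapter_hiddens, temperature=1.0):
    """Per-adapter fusion weights for one layer, computed from the divergence
    between base-model activations and each adapter's activations on the query.
    Hypothetical choice: KL divergence over softmax-normalized activations."""
    log_p_base = F.log_softmax(base_hidden / temperature, dim=-1)
    divergences = []
    for h in adapter_hiddens:
        q_adapter = F.softmax(h / temperature, dim=-1)
        # KL(adapter || base): larger value = adapter shifts the distribution more.
        divergences.append(F.kl_div(log_p_base, q_adapter, reduction="batchmean"))
    w = torch.stack(divergences)
    # Normalize to a convex combination per layer; the paper's mapping from
    # divergence to weight may differ.
    return w / w.sum()

def fuse_lora_deltas(lora_deltas, weights):
    """Merge per-adapter LoRA weight updates (delta_i = B_i @ A_i) for one layer
    using the query-adaptive weights computed above."""
    return sum(w * delta for w, delta in zip(weights, lora_deltas))

# Toy usage: one layer, hidden size 16, two adapters (e.g. "math" and "code").
base_h = torch.randn(4, 16)                           # base activations for the query
adapter_hs = [base_h + 0.1 * torch.randn(4, 16),      # adapter 1 activations
              base_h + 0.5 * torch.randn(4, 16)]      # adapter 2 activations
deltas = [torch.randn(16, 16), torch.randn(16, 16)]   # B @ A for each LoRA

weights = layer_fusion_weights(base_h, adapter_hs)
fused_delta = fuse_lora_deltas(deltas, weights)       # added to the layer's base weight
```

Because the weights depend only on activations produced by the query itself, the fusion can adapt per query and per layer without composite training data, which is the data-free property the abstract emphasizes.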

Country of Origin
🇩🇪 Germany

Page Count
12 pages

Category
Computer Science:
Computation and Language