Balancing Information Accuracy and Response Timeliness in Networked LLMs
By: Yigit Turkmen, Baturalp Buyukates, Melih Bastopcu
Potential Business Impact:
Smart AI groups work better than one big AI.
Recent advancements in Large Language Models (LLMs) have transformed many fields including scientific discovery, content generation, biomedical text mining, and educational technology. However, the substantial requirements for training data, computational resources, and energy consumption pose significant challenges for their practical deployment. A promising alternative is to leverage smaller, specialized language models and aggregate their outputs to improve overall response quality. In this work, we investigate a networked LLM system composed of multiple users, a central task processor, and clusters of topic-specialized LLMs. Each user submits categorical binary (true/false) queries, which are routed by the task processor to a selected cluster of $m$ LLMs. After gathering individual responses, the processor returns a final aggregated answer to the user. We characterize both the information accuracy and response timeliness in this setting, and formulate a joint optimization problem to balance these two competing objectives. Our extensive simulations demonstrate that the aggregated responses consistently achieve higher accuracy than those of individual LLMs. Notably, this improvement is more significant when the participating LLMs exhibit similar standalone performance.
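The abstract does not state which aggregation rule the task processor uses; for binary (true/false) queries a natural candidate is majority voting over the $m$ cluster responses. The sketch below is a hypothetical illustration under that assumption: it computes the exact accuracy of majority voting over independent models with given per-model accuracies, and shows why the gain is larger when the participating models have similar standalone performance.

```python
import itertools

def majority_vote_accuracy(probs):
    """Exact accuracy of majority voting over independent binary responders.

    probs: each model's probability of answering a true/false query correctly.
    Assumes an odd number of models so ties cannot occur. (Majority voting
    is an assumed aggregation rule here, not necessarily the paper's.)
    """
    m = len(probs)
    assert m % 2 == 1, "use an odd number of models to avoid ties"
    total = 0.0
    # Enumerate every correct (1) / incorrect (0) outcome pattern.
    for outcome in itertools.product([0, 1], repeat=m):
        if sum(outcome) > m // 2:  # a strict majority answered correctly
            prob = 1.0
            for p, correct in zip(probs, outcome):
                prob *= p if correct else (1.0 - p)
            total += prob
    return total

# Three models with similar accuracy: aggregation beats every individual.
similar = [0.70, 0.72, 0.74]
print(majority_vote_accuracy(similar))  # exceeds max(similar) = 0.74

# One strong model among weak ones: aggregation falls below the best model.
mixed = [0.95, 0.55, 0.55]
print(majority_vote_accuracy(mixed))    # below max(mixed) = 0.95
```

With the similar models, the aggregated accuracy (about 0.809) exceeds the best individual model, while in the mixed case the two weak models drag the vote below the strong model's standalone accuracy, mirroring the abstract's observation that the improvement is most significant when the LLMs exhibit similar performance.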
Similar Papers
Distributed LLMs and Multimodal Large Language Models: A Survey on Advances, Challenges, and Future Directions
Computation and Language
Lets computers understand text, pictures, and sounds together.
Evaluating Large Language Models for Workload Mapping and Scheduling in Heterogeneous HPC Systems
Distributed, Parallel, and Cluster Computing
Lets computers solve hard scheduling puzzles from words.
Harnessing Collective Intelligence of LLMs for Robust Biomedical QA: A Multi-Model Approach
Computation and Language
Helps doctors find answers in medical books faster.