Reliable LLM-Based Edge-Cloud-Expert Cascades for Telecom Knowledge Systems
By: Qiushuo Hou, Sangwoo Park, Matteo Zecchin, and more
Large language models (LLMs) are emerging as key enablers of automation in domains such as telecommunications, assisting with tasks including troubleshooting, standards interpretation, and network optimization. However, their deployment in practice must balance inference cost, latency, and reliability. In this work, we study an edge-cloud-expert cascaded LLM-based knowledge system that supports decision-making through a question-and-answer pipeline. In this pipeline, an efficient edge model handles routine queries, a more capable cloud model addresses complex cases, and human experts are involved only when necessary. We formulate a misalignment-constrained optimization problem that aims to minimize the average processing cost while guaranteeing that automated answers align with expert judgments. We propose a statistically rigorous threshold selection method based on multiple hypothesis testing (MHT) for a query processing mechanism built on knowledge and confidence tests. The approach provides finite-sample guarantees on misalignment risk. Experiments on the telecom-specific TeleQnA benchmark demonstrate that the proposed method achieves superior cost-efficiency compared to conventional cascaded baselines, while ensuring reliability at prescribed confidence levels.
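To make the cascade and the threshold calibration concrete, the sketch below shows one possible implementation, not the authors' actual method. It assumes a held-out calibration set labeled with expert agreement, confidence scores from the edge and cloud models, and an MHT procedure instantiated with Hoeffding p-values and a Bonferroni correction; all function names, the cost model, and these specific statistical choices are illustrative assumptions, and the paper's knowledge/confidence tests may differ.

```python
# Minimal sketch of a two-threshold edge-cloud-expert cascade with
# MHT-calibrated thresholds. Illustrative only; names and the specific
# Hoeffding/Bonferroni instantiation are assumptions, not the paper's method.
import math
from typing import Callable, List, Tuple

def hoeffding_p_value(emp_risk: float, n: int, alpha: float) -> float:
    """One-sided Hoeffding p-value for H0: true misalignment risk exceeds alpha."""
    gap = alpha - emp_risk
    if gap <= 0:
        return 1.0
    return math.exp(-2.0 * n * gap * gap)

def calibrate_thresholds(
    cal_set: List[Tuple[float, float, bool, bool]],  # (edge_conf, cloud_conf, edge_ok, cloud_ok)
    alpha: float,                                    # target misalignment risk
    delta: float,                                    # tolerated failure probability of the guarantee
    grid: List[float],                               # candidate confidence thresholds
    cost: Callable[[float, float], float],           # stand-in for average processing cost of a pair
) -> Tuple[float, float]:
    """Return the cheapest (edge, cloud) threshold pair whose misalignment
    risk is certified below alpha via Bonferroni-corrected Hoeffding tests."""
    n = len(cal_set)
    candidates = [(te, tc) for te in grid for tc in grid]
    delta_per_test = delta / len(candidates)  # Bonferroni correction over all candidates
    certified = []
    for te, tc in candidates:
        errors = 0
        for edge_conf, cloud_conf, edge_ok, cloud_ok in cal_set:
            if edge_conf >= te:            # edge model answers
                errors += 0 if edge_ok else 1
            elif cloud_conf >= tc:         # cloud model answers
                errors += 0 if cloud_ok else 1
            # else: deferred to the expert, treated as aligned by definition
        emp_risk = errors / n
        if hoeffding_p_value(emp_risk, n, alpha) <= delta_per_test:
            certified.append((te, tc))
    if not certified:
        # Nothing can be certified: defer every query to the expert.
        return (float("inf"), float("inf"))
    return min(certified, key=lambda pair: cost(*pair))

def route(query, thresholds, edge_model, cloud_model, expert):
    """Run one query through the calibrated cascade."""
    te, tc = thresholds
    edge_answer, edge_conf = edge_model(query)
    if edge_conf >= te:
        return edge_answer                 # routine query handled at the edge
    cloud_answer, cloud_conf = cloud_model(query)
    if cloud_conf >= tc:
        return cloud_answer                # harder query escalated to the cloud
    return expert(query)                   # residual uncertainty goes to a human
```

In this sketch, any threshold pair that passes its corrected test is certified to keep misalignment risk below the target with probability at least 1 - delta, and the cheapest certified pair is deployed; queries failing both confidence tests are escalated to the expert, which is what preserves the reliability guarantee at the prescribed confidence level.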