Pay for Hints, Not Answers: LLM Shepherding for Cost-Efficient Inference
By: Ziming Dong, Hardik Sharma, Evan O'Toole, and more
Potential Business Impact:
Makes small AI smarter with big AI hints.
Large Language Models (LLMs) deliver state-of-the-art performance on complex reasoning tasks, but their inference costs limit deployment at scale. Small Language Models (SLMs) offer dramatic cost savings yet lag substantially in accuracy. Existing approaches (routing and cascading) treat the LLM as an all-or-nothing resource: either the query bypasses the LLM entirely, or the LLM generates a complete response at full cost. We introduce LLM Shepherding, a framework that requests only a short prefix (a hint) from the LLM and provides it to the SLM. This simple mechanism is surprisingly effective for math and coding tasks: even hints comprising 10-30% of the full LLM response improve SLM accuracy significantly. Shepherding generalizes both routing and cascading, and it achieves lower cost under oracle decision-making. We develop a two-stage predictor that jointly determines whether a hint is needed and how many tokens to request. On the widely used mathematical reasoning (GSM8K, CNK12) and code generation (HumanEval, MBPP) benchmarks, Shepherding reduces costs by 42-94% relative to LLM-only inference. Compared to state-of-the-art routing and cascading baselines, Shepherding delivers up to 2.8x cost reduction while matching accuracy. To our knowledge, this is the first work to exploit token-level budget control for SLM-LLM collaboration.
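The mechanism described in the abstract is simple enough to sketch. The Python below is a minimal illustration, not the paper's implementation: it assumes hypothetical `llm_generate`/`slm_generate` callables, a `decide` predictor standing in for the two-stage hint predictor, and an invented prompt format for passing the hint to the SLM.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical interface: each model is a callable taking (prompt, token budget)
# and returning generated text. All names here are illustrative.
GenerateFn = Callable[[str, int], str]


@dataclass
class HintDecision:
    needs_hint: bool      # stage 1: is an LLM hint worth paying for?
    hint_tokens: int = 0  # stage 2: how many prefix tokens to request


def shepherd(
    query: str,
    slm_generate: GenerateFn,
    llm_generate: GenerateFn,
    decide: Callable[[str], HintDecision],
    slm_budget: int = 512,
) -> str:
    """Answer `query` with the SLM, optionally conditioned on a short LLM prefix."""
    decision = decide(query)
    if decision.needs_hint and decision.hint_tokens > 0:
        # Pay only for a truncated LLM response (the "hint"),
        # e.g. 10-30% of what a full LLM answer would cost.
        hint = llm_generate(query, decision.hint_tokens)
        prompt = f"{query}\n\nPartial expert solution (continue from here):\n{hint}"
    else:
        prompt = query
    return slm_generate(prompt, slm_budget)


# Minimal usage with stub models; a real setup would wrap API or local inference calls.
if __name__ == "__main__":
    slm = lambda prompt, budget: f"[SLM answer to: {prompt[:40]}...]"
    llm = lambda prompt, budget: f"[first {budget} LLM tokens of a worked solution]"
    always_hint = lambda q: HintDecision(needs_hint=True, hint_tokens=64)
    print(shepherd("Solve: 17 * 24 = ?", slm, llm, always_hint))
```

Setting `needs_hint=False` for every query recovers SLM-only routing, while requesting the LLM's full budget recovers a cascade, which is the sense in which shepherding generalizes both.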
Similar Papers
Towards Efficient Multi-LLM Inference: Characterization and Analysis of LLM Routing and Hierarchical Techniques
Machine Learning (CS)
Lets smart computers use less power.
RelayLLM: Efficient Reasoning via Collaborative Decoding
Computation and Language
Smart AI asks for help only when it's stuck.
Reliable LLM-Based Edge-Cloud-Expert Cascades for Telecom Knowledge Systems
Signal Processing
Makes AI assistants smarter and cheaper for companies.