Can abstract concepts from LLMs improve SLM performance?
By: Siddharth Tandon
Large language models (LLMs) excel at diverse tasks, but their deployment on resource-constrained devices remains challenging. Existing methods like quantization, pruning, and distillation can reduce the memory footprint but often demand extensive experimentation and careful infrastructure design. Leveraging existing techniques for extracting high-level concepts (represented as steering vectors) from larger models, we investigate their transferability to smaller language models (SLMs) during inference. We demonstrate through extensive experimentation that these concepts can be effectively transferred to smaller models, irrespective of model family (e.g., Phi, Llama, Qwen), leading to performance improvements across a wide range of tasks. Furthermore, we introduce inference-time scaling, which enhances performance by dynamically adjusting the steering intensity and yields a 7-15% accuracy improvement for Qwen3-0.6B.
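As a rough illustration of the mechanism the abstract describes, the sketch below applies a steering vector to an SLM's hidden states at inference time via a forward hook, with a scalar intensity that could be adjusted dynamically. This is a minimal sketch assuming a PyTorch/Transformers setup; the layer index, the intensity value, and the random placeholder vector are illustrative, not the paper's actual extraction or transfer pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative setup: in the paper's framing, the steering vector would be
# a concept direction extracted from a larger model; here it is a random
# placeholder matched to the SLM's hidden size.
model_name = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

hidden_size = model.config.hidden_size
steering_vector = torch.randn(hidden_size)  # placeholder concept vector
alpha = 1.5  # steering intensity; the paper adjusts this dynamically

def steering_hook(module, inputs, output):
    # Add the scaled concept direction to every token's hidden state.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * steering_vector.to(hidden.dtype).to(hidden.device)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# Attach the hook to one mid-network decoder layer (layer choice is a guess).
layer_idx = len(model.model.layers) // 2
handle = model.model.layers[layer_idx].register_forward_hook(steering_hook)

prompt = "Explain why the sky is blue."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook when steering is no longer wanted
```

In practice, sweeping or scheduling alpha per query is one plausible reading of "dynamically adjusting the steering intensity"; the paper's exact adjustment rule is not specified in the abstract.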