Scaling Up Efficient Small Language Models Serving and Deployment for Semantic Job Search

Published: October 25, 2025 | arXiv ID: 2510.22101v1

By: Kayhan Behdin, Qingquan Song, Sriram Vasudevan, and more

BigTech Affiliations: LinkedIn

Potential Business Impact:

Makes smart search engines faster and cheaper.

Business Areas:
Semantic Search, Internet Services

Large Language Models (LLMs) have demonstrated impressive quality when applied to predictive tasks such as relevance ranking and semantic search. However, deploying such LLMs remains prohibitively expensive for industry applications with strict latency and throughput requirements. In this work, we present lessons and efficiency insights from developing a purely text-based decoder-only Small Language Model (SLM) for a semantic search application at LinkedIn. In particular, we discuss model compression techniques such as pruning that allow us to reduce the model size by up to $40\%$ while maintaining accuracy. Additionally, we present context compression techniques that allow us to reduce the input context length by up to $10$x with minimal loss of accuracy. Finally, we present practical lessons from optimizing the serving infrastructure for deploying such a system on GPUs at scale, serving millions of requests per second. Taken together, these optimizations increase our system's throughput by $10$x in a real-world deployment while meeting our quality bar.
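The abstract does not specify which pruning method the authors use to shrink the model by up to 40%. As a purely illustrative sketch of the general idea, the snippet below implements unstructured magnitude pruning with NumPy: weights whose absolute value falls below a sparsity-determined threshold are zeroed. The function name and the 50% sparsity setting are assumptions for demonstration, not details from the paper.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.

    Hypothetical illustration of unstructured magnitude pruning; the paper's
    actual compression technique is not described in the abstract.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Toy weight matrix: half the entries are small-magnitude and get zeroed.
W = np.array([[0.9, -0.05, 0.3],
              [-0.02, 0.7, -0.1]])
W_pruned = magnitude_prune(W, sparsity=0.5)
```

In practice, production systems typically prefer structured pruning (removing whole heads, layers, or channels) because unstructured sparsity alone rarely yields GPU speedups without specialized sparse kernels.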

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Information Retrieval