MixLM: High-Throughput and Effective LLM Ranking via Text-Embedding Mix-Interaction
By: Guoyao Li, Ran He, Shusen Jing, and more
Potential Business Impact:
Makes search engines find things faster and better.
Large language models (LLMs) excel at capturing semantic nuances and therefore show impressive relevance ranking performance in modern recommendation and search systems. However, they suffer from high computational overhead under industrial latency and throughput requirements. In particular, cross-encoder ranking systems often create long-context, prefill-heavy workloads, as the model must be presented with the user, query, and item information. To this end, we propose MixLM, a novel LLM-based ranking framework that significantly improves system throughput by reducing the input context length while preserving the semantic strength of cross-encoder rankers. In contrast to a standard ranking system, where the context is presented to the model as pure text, we propose mix-interaction: a mixture of text and embedding tokens to represent the input. Specifically, MixLM encodes every item in the catalog into a few embedding tokens and stores them in a nearline cache. The encoded item descriptions are used during online inference, effectively reducing the item representation from a few thousand text tokens to a few embedding tokens. We share insights from deploying the MixLM framework in a real-world search application at LinkedIn, including a detailed discussion of our training pipelines and a thorough analysis of our online serving infrastructure optimization. Compared with strong baselines, MixLM increased throughput by 10.0x under the same latency budget while maintaining relevance metrics. The efficiency gains delivered by MixLM enabled full-traffic deployment of LLM-powered search, which resulted in a significant 0.47% increase in Daily Active Users (DAU) in online A/B tests.
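The core mechanism can be illustrated with a minimal sketch in Python/PyTorch. This is hypothetical, not the authors' code: ItemEncoder, ITEM_TOKENS, HIDDEN, and the mean-pool-plus-projection compression scheme are all assumptions standing in for MixLM's actual item encoder. Only the overall idea comes from the abstract: compress each catalog item into a few embedding tokens nearline, cache them, and at serving time concatenate the cached item embeddings with the ordinary text-token embeddings of the user and query.

# Hypothetical sketch of mix-interaction input assembly (not the authors' implementation).
import torch
import torch.nn as nn

VOCAB_SIZE = 32000   # assumed tokenizer vocabulary size
HIDDEN = 512         # assumed ranker-LLM hidden size
ITEM_TOKENS = 4      # "a few" embedding tokens per item (assumed)

class ItemEncoder(nn.Module):
    """Compresses a long item description into ITEM_TOKENS embedding tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.proj = nn.Linear(HIDDEN, ITEM_TOKENS * HIDDEN)

    def forward(self, item_token_ids: torch.Tensor) -> torch.Tensor:
        # Mean-pool the item's text tokens, then expand into a few "soft" tokens.
        pooled = self.embed(item_token_ids).mean(dim=0)
        return self.proj(pooled).view(ITEM_TOKENS, HIDDEN)

# Nearline step: encode every catalog item once and cache the result.
encoder = ItemEncoder()
catalog = {"item_42": torch.randint(0, VOCAB_SIZE, (1500,))}  # ~1.5k text tokens
cache = {iid: encoder(ids).detach() for iid, ids in catalog.items()}

# Online step: query/user context stays as ordinary token embeddings; the item
# enters the ranker's input sequence as its cached embedding tokens.
llm_embed = nn.Embedding(VOCAB_SIZE, HIDDEN)      # stand-in for the ranker's embedding table
query_ids = torch.randint(0, VOCAB_SIZE, (20,))   # tokenized query + user context
mixed_input = torch.cat([llm_embed(query_ids), cache["item_42"]], dim=0)
print(mixed_input.shape)  # (24, HIDDEN): 20 text positions + ITEM_TOKENS item positions

The point of the sketch is the shape arithmetic: the item contributes ITEM_TOKENS input positions instead of roughly 1,500, which is where the prefill reduction, and hence the throughput gain, would come from under these assumptions.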
Similar Papers
Do LLMs Benefit from User and Item Embeddings in Recommendation Tasks?
Machine Learning (CS)
Helps online stores pick better stuff for you.
Using LLMs to Capture Users' Temporal Context for Recommendation
Information Retrieval
Helps apps learn what you like, now and later.
Scaling Up Efficient Small Language Models Serving and Deployment for Semantic Job Search
Information Retrieval
Makes smart search engines faster and cheaper.