TaoSR1: The Thinking Model for E-commerce Relevance Search
By: Chenhe Dong, Shaowei Yao, Pengkun Jiao, and more
Potential Business Impact:
Helps online stores show you better products.
Query-product relevance prediction is a core task in e-commerce search. BERT-based models excel at semantic matching but lack complex reasoning capabilities. While Large Language Models (LLMs) have been explored for this task, most approaches still rely on discriminative fine-tuning or distill LLMs into smaller models for deployment. We propose a framework to directly deploy LLMs for this task, addressing key challenges: Chain-of-Thought (CoT) error accumulation, discriminative hallucination, and deployment feasibility. Our framework, TaoSR1, involves three stages: (1) Supervised Fine-Tuning (SFT) with CoT to instill reasoning; (2) offline sampling with a pass@N strategy and Direct Preference Optimization (DPO) to improve generation quality; and (3) difficulty-based dynamic sampling with Group Relative Policy Optimization (GRPO) to mitigate discriminative hallucination. Additionally, post-CoT processing and a cumulative probability-based partitioning method enable efficient online deployment. TaoSR1 significantly outperforms baselines on offline datasets and achieves substantial gains in online side-by-side human evaluations, introducing a novel paradigm for applying CoT reasoning to relevance classification.
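The second stage pairs pass@N offline sampling with DPO. A minimal sketch of how such preference pairs might be constructed, assuming a generic `sample_fn` that decodes one CoT response and a final relevance label from the SFT model (the function name, data fields, and pairing rule here are illustrative, not from the paper):

```python
def build_dpo_pairs(examples, sample_fn, n=8):
    """For each labeled query-product example, sample n candidate CoT
    responses and pair one correct response (chosen) with one incorrect
    response (rejected) as a DPO preference pair.

    sample_fn(query, product) -> (cot_text, predicted_label) is a
    stand-in for decoding from the SFT model; n plays the role of the
    pass@N budget.
    """
    pairs = []
    for ex in examples:
        candidates = [sample_fn(ex["query"], ex["product"]) for _ in range(n)]
        correct = [c for c in candidates if c[1] == ex["label"]]
        wrong = [c for c in candidates if c[1] != ex["label"]]
        # An example yields a pair only when the n samples contain both a
        # correct and an incorrect response; otherwise there is no
        # contrastive signal for preference optimization.
        if correct and wrong:
            pairs.append({
                "prompt": (ex["query"], ex["product"]),
                "chosen": correct[0][0],
                "rejected": wrong[0][0],
            })
    return pairs
```

The resulting `chosen`/`rejected` pairs would then feed a standard DPO objective; examples where every sample agrees contribute nothing and are skipped.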
Similar Papers
Thinking Broad, Acting Fast: Latent Reasoning Distillation from Multi-Perspective Chain-of-Thought for E-Commerce Relevance
Information Retrieval
Helps online shoppers find products faster.
LREF: A Novel LLM-based Relevance Framework for E-commerce
Information Retrieval
Helps online stores show you better stuff.
From Reasoning to Super-Intelligence: A Search-Theoretic Perspective
Artificial Intelligence
Teaches computers to solve hard problems step-by-step.