Agentic Conversational Search with Contextualized Reasoning via Reinforcement Learning
By: Fengran Mo, Yifan Gao, Sha Li, and more
Potential Business Impact:
Helps chatbots understand and adapt to changing conversations.
Large Language Models (LLMs) have become a popular interface for human-AI interaction, supporting information seeking and task assistance through natural, multi-turn dialogue. In multi-turn dialogues, the context-dependent user intent evolves across interactions, requiring contextual interpretation, query reformulation, and dynamic coordination between retrieval and generation. Existing studies usually follow static rewrite-retrieve-generate pipelines that optimize each procedure separately and overlook the joint optimization of mixed-initiative actions. Although recent deep search agents demonstrate the effectiveness of jointly optimizing retrieval and generation via reasoning, they focus on single-turn scenarios and may lack the ability to handle multi-turn interactions. We introduce a conversational agent that interleaves search and reasoning across turns, enabling exploratory and adaptive behaviors learned through reinforcement learning (RL) with rewards tailored to evolving user goals. Experimental results on four widely used conversational benchmarks demonstrate the effectiveness of our method, which surpasses several strong existing baselines.
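To make the interleaved loop concrete, here is a minimal Python sketch of one dialogue turn, assuming stub stand-ins for the policy, retriever, and reward. All names here (policy_step, search, turn_reward) are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of interleaving search and reasoning within a dialogue turn,
# plus a scalar reward of the kind an RL objective could maximize.
# Everything below is a hypothetical stand-in, not the paper's code.

from dataclasses import dataclass, field

@dataclass
class Turn:
    user_utterance: str
    agent_response: str = ""
    retrieved: list = field(default_factory=list)

def policy_step(history, scratchpad):
    """Stub policy: choose the next action ('search' or 'answer') and its
    argument. A real system would query an LLM conditioned on the full
    conversation history here."""
    if not scratchpad:                       # nothing retrieved yet this turn
        query = history[-1].user_utterance   # naive contextual reformulation
        return "search", query
    return "answer", " ".join(scratchpad)[:200]

def search(query):
    """Stub retriever; a real system would call a search index."""
    return [f"passage about: {query}"]

def run_turn(history, user_utterance, max_steps=4):
    """Interleave reasoning and retrieval until the policy decides to answer."""
    turn = Turn(user_utterance)
    history.append(turn)
    scratchpad = []
    for _ in range(max_steps):
        action, arg = policy_step(history, scratchpad)
        if action == "search":
            docs = search(arg)
            turn.retrieved.extend(docs)
            scratchpad.extend(docs)
        else:
            turn.agent_response = arg
            break
    return turn

def turn_reward(turn, gold_answer):
    """Hypothetical tailored reward: token overlap with a gold answer plus a
    small bonus when retrieval was actually used. During training, this scalar
    would feed a policy-gradient update of the agent's policy."""
    pred = set(turn.agent_response.lower().split())
    gold = set(gold_answer.lower().split())
    overlap = len(pred & gold) / max(len(gold), 1)
    return overlap + (0.1 if turn.retrieved else 0.0)

if __name__ == "__main__":
    history = []
    turn = run_turn(history, "Who introduced reinforcement learning from human feedback?")
    print(turn.agent_response)
    print(turn_reward(turn, "passage about reinforcement learning"))
```

The key design point the sketch illustrates is that the search/answer decision is itself a policy action taken per step within a turn, so the same RL signal can shape both when the agent retrieves and what it finally generates.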
Similar Papers
SpeakRL: Synergizing Reasoning, Speaking, and Acting in Language Models with Reinforcement Learning
Artificial Intelligence
Teaches computers to ask questions to help users.
Benchmarking Contextual Understanding for In-Car Conversational Systems
Computation and Language
Tests car voice assistants for better answers.
Adaptive Multi-Agent Response Refinement in Conversational Systems
Computation and Language
Makes chatbots smarter by checking facts and tailoring replies to you.