Towards Open-Ended Emotional Support Conversations in LLMs via Reinforcement Learning with Future-Oriented Rewards
By: Ting Yang, Li Chen, Huimin Wang
Potential Business Impact:
Helps chat systems give better, longer-lasting emotional support.
Emotional Support Conversation (ESC) systems aim to alleviate users' emotional difficulties and provide long-term, systematic support for emotional well-being. However, most large language model (LLM)-based ESC systems rely on predefined strategies, which limits their effectiveness in complex, real-life scenarios. To enable flexible responses to diverse emotional problem scenarios, this paper introduces RLFF-ESC, a novel end-to-end framework that directly learns enduring emotional-support skills via reinforcement learning. To target sustained emotional support, we first employ an LLM-based multi-agent mechanism to simulate future dialogue trajectories and collect future-oriented rewards. We then train a future-oriented reward model, which is subsequently used to train the emotional support policy model. Additionally, we incorporate an explicit reasoning process during response generation to further enhance the quality, relevance, and contextual appropriateness of the system's responses. We evaluate the proposed RLFF-ESC framework with Qwen2.5-7B-Instruct-1M and LLaMA3.1-8B-Instruct as backbone policy models across two public ESC datasets. Experimental results demonstrate that RLFF-ESC consistently outperforms existing baselines in terms of goal completion and response quality.
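The core idea of future-oriented rewards can be sketched in miniature: roll the dialogue forward with a policy and a simulated user, then credit the current response with a discounted sum of support scores over the simulated future. This is a minimal illustrative sketch, not the paper's implementation; the lambda stand-ins for the policy, user simulator, and per-turn scorer are hypothetical placeholders for the LLM agents and the learned reward model.

```python
def simulate_future_trajectory(dialogue, policy, user_sim, depth=3):
    """Roll the conversation forward `depth` turns using the policy
    and a simulated user, returning the extended trajectory."""
    traj = list(dialogue)
    for _ in range(depth):
        traj.append(("system", policy(traj)))
        traj.append(("user", user_sim(traj)))
    return traj

def future_oriented_reward(trajectory, score_turn, gamma=0.9):
    """Discounted sum of per-turn support scores over the simulated
    future, so earlier responses are credited for later relief."""
    system_turns = [t for role, t in trajectory if role == "system"]
    return sum((gamma ** i) * score_turn(t) for i, t in enumerate(system_turns))

# Toy stand-ins (assumptions): the real framework uses LLM agents
# for the policy and user, and a trained reward model as the scorer.
policy = lambda traj: f"supportive reply after {len(traj)} turns"
user_sim = lambda traj: f"user update after {len(traj)} turns"
score_turn = lambda turn: 1.0

dialogue = [("user", "I feel overwhelmed at work.")]
traj = simulate_future_trajectory(dialogue, policy, user_sim, depth=3)
reward = future_oriented_reward(traj, score_turn)
print(round(reward, 3))  # 1 + 0.9 + 0.81 = 2.71
```

In the full framework, rewards like this would supervise a reward model, which in turn scores candidate responses during reinforcement-learning updates of the policy.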
Similar Papers
Emotional Support with LLM-based Empathetic Dialogue Generation
Artificial Intelligence
Helps computers give comforting and helpful advice.
Convert Language Model into a Value-based Strategic Planner
Computation and Language
Helps computers give better emotional support talks.
Mitigating Strategy Preference Bias in Emotional Support Conversation via Uncertainty Estimations
Computation and Language
Helps computers give better emotional support talks.