Chain-of-Conceptual-Thought: Eliciting the Agent to Deeply Think within the Response
By: Qingqing Gu, Dan Wang, Yue Zhao, and more
Potential Business Impact:
Helps AI understand feelings and give better advice.
Chain-of-Thought (CoT) is widely applied to improve LLM capability on math, coding and reasoning tasks. However, its performance is limited on open-domain tasks, where there are no clearly defined reasoning steps or logical transitions. To mitigate these challenges, we propose another prompt-based paradigm called Chain of Conceptual Thought (CoCT), in which the LLM first tags a concept, then generates the detailed content. A chain of concepts is allowed within a single utterance, encouraging the LLM's deep and strategic thinking. We experiment with this paradigm in daily and emotional support conversations, where the concepts comprise emotions, strategies and topics. Automatic, human and model evaluations show that CoCT surpasses baselines such as Self-Refine, ECoT, ToT, SoT and RAG, suggesting it may be an effective prompt-based LLM paradigm for a wider scope of tasks.
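The tag-then-generate pattern described above can be sketched with a small parser. This is a minimal illustration, not the authors' implementation: the instruction wording, the square-bracket tag syntax, and the example concept labels (`empathy`, `suggestion`) are all assumptions chosen to match the paper's setting of emotions, strategies and topics.

```python
import re

# Hypothetical prompt addendum asking the model to tag concepts before
# each span of content, allowing a chain of concepts in one utterance.
COCT_INSTRUCTION = (
    "Before each span of your reply, emit a concept tag in square brackets, "
    "e.g. [empathy], then write the content for that concept. "
    "You may chain several concepts within one utterance."
)

def parse_coct(utterance: str) -> list[tuple[str, str]]:
    """Split a CoCT-style utterance into (concept, content) pairs."""
    pairs = []
    # Each segment is "[concept]" followed by content up to the next tag.
    for m in re.finditer(r"\[([^\]]+)\]\s*([^\[]*)", utterance):
        pairs.append((m.group(1).strip(), m.group(2).strip()))
    return pairs

reply = "[empathy] That sounds exhausting. [suggestion] A short walk might help."
print(parse_coct(reply))
# → [('empathy', 'That sounds exhausting.'), ('suggestion', 'A short walk might help.')]
```

Parsing the concept sequence separately from the content makes it easy to evaluate the strategy chain (e.g. which emotions or support strategies the model chose) independently of the surface text.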
Similar Papers
Effectiveness of Chain-of-Thought in Distilling Reasoning Capability from Large Language Models
Computation and Language
Teaches small computers to think like big ones.
Non-Iterative Symbolic-Aided Chain-of-Thought for Logical Reasoning
Artificial Intelligence
Helps computers think through problems better.
Eliciting Chain-of-Thought in Base LLMs via Gradient-Based Representation Optimization
Computation and Language
Teaches computers to think step-by-step to solve problems.