InstructRAG: Leveraging Retrieval-Augmented Generation on Instruction Graphs for LLM-Based Task Planning
By: Zheng Wang, Shu Xian Teo, Jun Jie Chew, and more
Potential Business Impact:
Helps AI agents plan complex tasks and adapt to new ones more quickly and accurately.
Recent advancements in large language models (LLMs) have enabled their use as agents for planning complex tasks. Existing methods typically rely on a thought-action-observation (TAO) process to enhance LLM performance, but these approaches are often constrained by the LLMs' limited knowledge of complex tasks. Retrieval-augmented generation (RAG) offers new opportunities by leveraging external databases to ground generation in retrieved information. In this paper, we identify two key challenges (enlargeability and transferability) in applying RAG to task planning. We propose InstructRAG, a novel solution within a multi-agent meta-reinforcement learning framework, to address these challenges. InstructRAG includes a graph to organize past instruction paths (sequences of correct actions), an RL-Agent (Reinforcement Learning) to expand graph coverage for enlargeability, and an ML-Agent (Meta-Learning) to improve task generalization for transferability. The two agents are trained end-to-end to optimize overall planning performance. Our experiments on four widely used task planning datasets demonstrate that InstructRAG significantly enhances performance and adapts efficiently to new tasks, achieving up to a 19.2% improvement over the best existing approach.
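To make the core idea concrete, below is a minimal, hypothetical sketch of an instruction graph that stores past instruction paths and retrieves the most relevant ones to ground an LLM planner's prompt. The class and method names (InstructionGraph, add_path, retrieve_paths) are illustrative only, and the toy word-overlap similarity stands in for the learned components; the paper's actual system additionally trains an RL-Agent and an ML-Agent end-to-end on top of the graph.

```python
# Hypothetical sketch of an instruction graph for RAG-based task planning.
# All names here are illustrative, not the paper's actual API; similarity is a
# toy bag-of-words overlap rather than a learned embedding.
from collections import defaultdict


def _tokens(text: str) -> set[str]:
    """Toy tokenizer used as a stand-in for a learned task representation."""
    return set(text.lower().split())


class InstructionGraph:
    def __init__(self) -> None:
        # Directed edges between consecutive actions of stored instruction paths.
        self.edges: defaultdict[str, set[str]] = defaultdict(set)
        # Each stored path: (task description, sequence of correct actions).
        self.paths: list[tuple[str, list[str]]] = []

    def add_path(self, task: str, actions: list[str]) -> None:
        """Store a past instruction path (a sequence of correct actions)."""
        self.paths.append((task, actions))
        for a, b in zip(actions, actions[1:]):
            self.edges[a].add(b)

    def retrieve_paths(self, task: str, k: int = 2) -> list[list[str]]:
        """Return the k stored paths whose task descriptions best match the query."""
        query = _tokens(task)
        scored = sorted(
            self.paths,
            key=lambda p: len(query & _tokens(p[0])),
            reverse=True,
        )
        return [actions for _, actions in scored[:k]]


if __name__ == "__main__":
    graph = InstructionGraph()
    graph.add_path("book a flight to Tokyo",
                   ["search_flights", "select_flight", "enter_passenger", "pay"])
    graph.add_path("reserve a hotel in Tokyo",
                   ["search_hotels", "select_room", "enter_guest", "pay"])

    # Retrieved paths would be inserted into the LLM prompt as grounding
    # before the thought-action-observation loop begins.
    for path in graph.retrieve_paths("book a flight to Osaka", k=1):
        print(" -> ".join(path))
```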
Similar Papers
RouteRAG: Efficient Retrieval-Augmented Generation from Text and Graph via Reinforcement Learning
Computation and Language
Uses reinforcement learning to retrieve information efficiently from both text and graphs.
A Survey of Graph Retrieval-Augmented Generation for Customized Large Language Models
Computation and Language
Surveys graph-based retrieval-augmented generation for customizing large language models.
Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning
Computation and Language
Makes AI answer questions more truthfully.