Self-Abstraction from Grounded Experience for Plan-Guided Policy Refinement
By: Hiroaki Hayashi, Bo Pang, Wenting Zhao, and more
Potential Business Impact:
Teaches computers to fix their own code better.
Large language model (LLM) based agents are increasingly used to tackle software engineering tasks that require multi-step reasoning and code modification, demonstrating promising yet limited performance. However, most existing LLM agents operate within static execution frameworks, lacking a principled mechanism to learn and self-improve from their own experience and past rollouts. As a result, their performance remains bounded by the initial framework design and the underlying LLM's capabilities. We propose Self-Abstraction from Grounded Experience (SAGE), a framework that enables agents to learn from their own task executions and refine their behavior through self-abstraction. After an initial rollout, the agent induces a concise plan abstraction from its grounded experience, distilling key steps, dependencies, and constraints. This learned abstraction is then fed back as contextual guidance, refining the agent's policy and supporting more structured, informed subsequent executions. Empirically, SAGE delivers consistent performance gains across diverse LLM backbones and agent architectures. Notably, it yields a 7.2% relative performance improvement over the strong Mini-SWE-Agent baseline when paired with the GPT-5 (high) backbone. SAGE further achieves strong overall performance on the SWE-Bench Verified benchmark, reaching 73.2% and 74% Pass@1 resolve rates with the Mini-SWE-Agent and OpenHands CodeAct agent frameworks, respectively.
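To make the rollout-abstraction-refinement loop described above concrete, here is a minimal sketch of one SAGE-style cycle. It assumes two hypothetical callables not defined in the abstract: run_agent (executes one rollout and returns its step-by-step trajectory) and llm (a generic text-in/text-out model call). The actual prompts, abstraction format, and agent scaffolds (Mini-SWE-Agent, OpenHands CodeAct) used in the paper are not reproduced here.

```python
from typing import Callable, List

def sage_refine(
    task: str,
    run_agent: Callable[[str], List[str]],  # hypothetical: one rollout -> list of step logs
    llm: Callable[[str], str],              # hypothetical: generic LLM text completion
) -> List[str]:
    """One SAGE-style cycle (sketch): rollout -> self-abstraction -> plan-guided re-rollout."""
    # 1) Initial rollout: collect the agent's grounded experience on the task.
    first_trajectory = run_agent(task)

    # 2) Self-abstraction: distill the trajectory into a concise plan
    #    capturing key steps, dependencies, and constraints.
    abstraction_prompt = (
        "Summarize the following agent trajectory into a concise plan: "
        "list the key steps, their dependencies, and any constraints.\n\n"
        + "\n".join(first_trajectory)
    )
    plan_abstraction = llm(abstraction_prompt)

    # 3) Plan-guided refinement: feed the abstraction back as contextual guidance
    #    and execute the task again under that plan.
    guided_task = f"{task}\n\nPlan guidance from a previous attempt:\n{plan_abstraction}"
    return run_agent(guided_task)
```

The key design point reflected in this sketch is that the abstraction is injected purely as context for the next execution, so no fine-tuning of the underlying LLM or changes to the agent framework are required.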
Similar Papers
Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models
Computation and Language
Tests if AI understands feelings like people.
A Lightweight Framework for Trigger-Guided LoRA-Based Self-Adaptation in LLMs
Computation and Language
Lets AI learn new things while solving problems.
SAGE: A Top-Down Bottom-Up Knowledge-Grounded User Simulator for Multi-turn AGent Evaluation
Computation and Language
Helps computers test customer service better.