Score: 2

Self-Abstraction from Grounded Experience for Plan-Guided Policy Refinement

Published: November 8, 2025 | arXiv ID: 2511.05931v1

By: Hiroaki Hayashi, Bo Pang, Wenting Zhao, and more

BigTech Affiliations: Salesforce Research

Potential Business Impact:

Lets AI coding agents learn from their own past attempts so they fix software bugs more reliably.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large language model (LLM) based agents are increasingly used to tackle software engineering tasks that require multi-step reasoning and code modification, demonstrating promising yet limited performance. However, most existing LLM agents operate within static execution frameworks, lacking a principled mechanism to learn and self-improve from their own experience and past rollouts. As a result, their performance remains bounded by the initial framework design and the underlying LLM's capabilities. We propose Self-Abstraction from Grounded Experience (SAGE), a framework that enables agents to learn from their own task executions and refine their behavior through self-abstraction. After an initial rollout, the agent induces a concise plan abstraction from its grounded experience, distilling key steps, dependencies, and constraints. This learned abstraction is then fed back as contextual guidance, refining the agent's policy and supporting more structured, informed subsequent executions. Empirically, SAGE delivers consistent performance gains across diverse LLM backbones and agent architectures. Notably, it yields a 7.2% relative performance improvement over the strong Mini-SWE-Agent baseline when paired with the GPT-5 (high) backbone. SAGE further achieves strong overall performance on the SWE-Bench Verified benchmark, reaching 73.2% and 74% Pass@1 resolve rates with the Mini-SWE-Agent and OpenHands CodeAct agent frameworks, respectively.
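The abstract describes a three-stage loop: an initial grounded rollout, self-abstraction of that rollout into a concise plan, and a re-execution guided by the plan. The sketch below illustrates that loop as we read it from the abstract; the function names (llm_complete, run_agent_rollout), the Rollout structure, and the prompt wording are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of the SAGE loop as described in the abstract.
# All names and prompts here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Rollout:
    task: str
    trajectory: list[str]  # actions and observations from one agent run
    resolved: bool         # whether the task (e.g., a SWE-Bench issue) passed


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to the backbone LLM (e.g., GPT-5)."""
    raise NotImplementedError


def run_agent_rollout(task: str, guidance: str | None = None) -> Rollout:
    """Placeholder for one execution of the base agent framework
    (e.g., Mini-SWE-Agent or OpenHands CodeAct), optionally with a
    plan abstraction supplied as extra context."""
    raise NotImplementedError


def self_abstraction(rollout: Rollout) -> str:
    """Induce a concise plan abstraction from grounded experience:
    the key steps, dependencies, and constraints observed in the
    trajectory (hypothetical prompt)."""
    prompt = (
        "You attempted the task below. From your trajectory, distill a "
        "concise plan: the key steps, their dependencies, and any "
        "constraints you discovered.\n\n"
        f"Task:\n{rollout.task}\n\n"
        "Trajectory:\n" + "\n".join(rollout.trajectory)
    )
    return llm_complete(prompt)


def sage(task: str) -> Rollout:
    # 1) Initial rollout with the unmodified agent policy.
    first = run_agent_rollout(task)
    # 2) Abstract the grounded experience into a plan.
    plan = self_abstraction(first)
    # 3) Re-execute with the plan fed back as contextual guidance.
    return run_agent_rollout(task, guidance=plan)
```

Under this reading, SAGE requires no weight updates: the policy is refined purely in-context, which is consistent with the abstract's claim that gains hold across different LLM backbones and agent architectures.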

Country of Origin
🇺🇸 United States


Page Count
18 pages

Category
Computer Science:
Artificial Intelligence