Agentic Rubrics as Contextual Verifiers for SWE Agents
By: Mohit Raghavendra, Anisha Gunjal, Bing Liu, and more
Potential Business Impact:
Checks AI-generated code fixes without running them.
Verification is critical for improving agents: it provides the reward signal for Reinforcement Learning and enables inference-time gains through Test-Time Scaling (TTS). Despite its importance, verification in software engineering (SWE) agent settings often relies on code execution, which can be difficult to scale due to environment setup overhead. Scalable alternatives such as patch classifiers and heuristic methods exist, but they are less grounded in codebase context and harder to interpret. To address this, we explore Agentic Rubrics: an expert agent interacts with the repository to create a context-grounded rubric checklist, and candidate patches are then scored against it without requiring test execution. On SWE-Bench Verified under parallel TTS evaluation, Agentic Rubrics achieve 54.2% on Qwen3-Coder-30B-A3B and 40.6% on Qwen3-32B, with at least a +3.5 percentage-point gain over the strongest baseline in our comparison set. We further analyze rubric behavior, showing that rubric scores are consistent with ground-truth tests while also flagging issues that tests do not capture. Our ablations show that agentic context gathering is essential for producing codebase-specific, unambiguous criteria. Together, these results suggest that Agentic Rubrics provide an efficient, scalable, and granular verification signal for SWE agents.
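The pipeline the abstract describes, agentic rubric construction followed by execution-free scoring of candidate patches, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the example rubric contents, and the weighted-fraction scoring are all hypothetical stand-ins for the LLM calls the paper's agents would make.

```python
# Minimal sketch of execution-free rubric verification (illustrative only;
# names, rubric contents, and scoring scheme are assumptions, not the paper's API).
from dataclasses import dataclass

@dataclass
class Criterion:
    text: str      # codebase-specific check the rubric agent writes down
    weight: float  # importance assigned by the rubric-writing agent

def build_rubric(issue: str, repo_context: str) -> list[Criterion]:
    """Stand-in for the expert agent: per the abstract, an agent explores the
    repository to produce a context-grounded checklist; here we just return
    a fixed example rubric."""
    return [
        Criterion("Fix modifies the function referenced in the issue", 2.0),
        Criterion("Existing call sites remain compatible", 1.0),
        Criterion("Edge case from the issue report is handled", 1.0),
    ]

def judge(criterion: Criterion, patch: str) -> bool:
    """Stand-in for an LLM judge that checks one criterion against the diff.
    A real judge would prompt a model with the criterion, patch, and context."""
    return criterion.text.split()[0].lower() in patch.lower()  # toy heuristic

def score_patch(patch: str, rubric: list[Criterion]) -> float:
    """Weighted fraction of criteria satisfied -- no test execution needed."""
    total = sum(c.weight for c in rubric)
    met = sum(c.weight for c in rubric if judge(c, patch))
    return met / total

def select_best(patches: list[str], issue: str, repo_context: str) -> str:
    """Parallel test-time scaling: score N candidate patches, keep the best."""
    rubric = build_rubric(issue, repo_context)
    return max(patches, key=lambda p: score_patch(p, rubric))
```

Because scoring is just rubric lookups plus model judgments, it avoids the environment-setup overhead of running tests, which is what makes this signal cheap enough to use for parallel TTS and as an RL reward.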
Similar Papers
SWE-RM: Execution-free Feedback For Software Engineering Agents
Computation and Language
Helps computers write better code by learning from mistakes.
The Rise of Agentic Testing: Multi-Agent Systems for Robust Software Quality Assurance
Software Engineering
Uses teams of AI agents to test and fix software.
SWE-Compass: Towards Unified Evaluation of Agentic Coding Abilities for Large Language Models
Software Engineering
Tests AI's ability to write and fix code.