SGCR: A Specification-Grounded Framework for Trustworthy LLM Code Review
By: Kai Wang, Bingcheng Mao, Shuai Jia, and more
Automating code review with Large Language Models (LLMs) shows immense promise, yet practical adoption is hampered by their lack of reliability, context-awareness, and control. To address this, we propose Specification-Grounded Code Review (SGCR), a framework that grounds LLMs in human-authored specifications to produce trustworthy and relevant feedback. SGCR features a novel dual-pathway architecture: an explicit path ensures deterministic compliance with predefined rules derived from these specifications, while an implicit path heuristically discovers and verifies issues beyond those rules. Deployed in a live industrial environment at HiThink Research, SGCR achieved a 42% developer adoption rate for its suggestions, a 90.9% relative improvement over a baseline LLM (22%). Our work demonstrates that specification-grounding is a powerful paradigm for bridging the gap between the generative power of LLMs and the rigorous reliability demands of software engineering.
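The abstract only names the two pathways, so the following is a minimal sketch of how such a dual-pathway review could be wired together, not SGCR's actual implementation. All names here (explicit_path, implicit_path, grounded_in_spec, Finding, SpecRule, and the injected llm callable) are hypothetical illustrations assumed for this example.

```python
from dataclasses import dataclass
from typing import Callable, List
import re

@dataclass
class Finding:
    source: str   # "explicit" or "implicit"
    rule_id: str
    message: str

@dataclass
class SpecRule:
    """A deterministic rule derived from a human-authored specification."""
    rule_id: str
    check: Callable[[str], List[str]]  # code -> violation messages

def no_print_rule(code: str) -> List[str]:
    """Illustrative spec rule: production code must not call print()."""
    return [f"print() call at line {i + 1}"
            for i, line in enumerate(code.splitlines())
            if re.search(r"\bprint\(", line)]

def explicit_path(code: str, rules: List[SpecRule]) -> List[Finding]:
    """Explicit path: check every spec-derived rule deterministically,
    independent of any model behavior."""
    return [Finding("explicit", r.rule_id, msg)
            for r in rules for msg in r.check(code)]

def grounded_in_spec(candidate: str, spec_text: str,
                     llm: Callable[[str], List[str]]) -> bool:
    """Verification step (placeholder): a second pass asking the model to
    cite the spec clause that supports the candidate issue."""
    prompt = f"Cite the clause of this spec:\n{spec_text}\nthat supports: {candidate}"
    return bool(llm(prompt))

def implicit_path(code: str, spec_text: str,
                  llm: Callable[[str], List[str]]) -> List[Finding]:
    """Implicit path: heuristically ask an LLM for issues beyond the rule
    set, then keep only candidates it can ground in the specification."""
    candidates = llm(f"Spec:\n{spec_text}\n\nReview this code:\n{code}")
    return [Finding("implicit", "heuristic", c)
            for c in candidates if grounded_in_spec(c, spec_text, llm)]

def review(code: str, spec_text: str, rules: List[SpecRule],
           llm: Callable[[str], List[str]]) -> List[Finding]:
    """Merge both pathways into a single set of review findings."""
    return explicit_path(code, rules) + implicit_path(code, spec_text, llm)
```

Under these assumptions, the design intent is that the explicit path keeps spec compliance fully deterministic, while the verification pass on the implicit path is what makes the heuristic, model-discovered findings trustworthy.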
Similar Papers
Fortifying LLM-Based Code Generation with Graph-Based Reasoning on Secure Coding Practices
Cryptography and Security
Makes computer code safer from hidden mistakes.
LAURA: Enhancing Code Review Generation with Context-Enriched Retrieval-Augmented LLM
Software Engineering
Helps computers write better code suggestions.
SRLCG: Self-Rectified Large-Scale Code Generation with Multidimensional Chain-of-Thought and Dynamic Backtracking
Software Engineering
Builds whole computer programs from one idea.