Beyond Surface Similarity: Evaluating LLM-Based Test Refactorings with Structural and Semantic Awareness
By: Wendkûuni C. Ouédraogo, Yinghua Li, Xueqi Dang, and more
Potential Business Impact:
Measures how well AI rewrites test code without changing what it does.
Large Language Models (LLMs) are increasingly employed to automatically refactor unit tests, aiming to enhance readability, naming, and structural clarity while preserving functional behavior. However, evaluating such refactorings remains challenging: traditional metrics like CodeBLEU are overly sensitive to renaming and structural edits, whereas embedding-based similarities capture semantics but ignore readability and modularity. We introduce CTSES, a composite metric that integrates CodeBLEU, METEOR, and ROUGE-L to balance behavior preservation, lexical quality, and structural alignment. CTSES is evaluated on over 5,000 test suites automatically refactored by GPT-4o and Mistral-Large-2407, using Chain-of-Thought prompting, across two established Java benchmarks: Defects4J and SF110. Our results show that CTSES yields more faithful and interpretable assessments, better aligned with developer expectations and human intuition than existing metrics.
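As a rough illustration of the composite idea, the sketch below combines the three component scores for a single refactored test. The equal weighting, the `compute_ctses` helper name, and the use of the `codebleu`, `nltk`, and `rouge-score` Python packages are assumptions made for illustration; they are not the published CTSES formulation.

```python
# Hypothetical sketch: combine CodeBLEU, METEOR, and ROUGE-L into one composite score.
# The equal weighting below is an assumption, not the paper's CTSES formula.
from nltk.translate.meteor_score import meteor_score   # pip install nltk; requires nltk.download("wordnet")
from rouge_score import rouge_scorer                    # pip install rouge-score
from codebleu import calc_codebleu                      # pip install codebleu (assumed helper API)

_rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)

def compute_ctses(reference: str, refactored: str, lang: str = "java") -> float:
    """Return a composite score in [0, 1] for a refactored test vs. its reference."""
    # Behavior/structure-aware component (n-gram, syntax, and data-flow matching).
    cb = calc_codebleu([reference], [refactored], lang=lang)["codebleu"]
    # Lexical-quality component over whitespace tokens (alignment-based matching).
    met = meteor_score([reference.split()], refactored.split())
    # Structural-alignment component (longest-common-subsequence F-measure).
    rl = _rouge.score(reference, refactored)["rougeL"].fmeasure
    return (cb + met + rl) / 3.0  # assumed unweighted average

# Usage: score one LLM-refactored test against the original version.
# print(compute_ctses(original_test_source, llm_refactored_source))
```

Per-suite aggregation and any component weights would follow the paper's own definition of CTSES.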
Similar Papers
LoCaL: Countering Surface Bias in Code Evaluation Metrics
Software Engineering
Tests code better by finding hidden differences.
Enhancing LLM Code Generation with Ensembles: A Similarity-Based Selection Approach
Software Engineering
Makes computers write better code by combining multiple helpers.
How Small Transformations Expose the Weakness of Semantic Similarity Measures
Computation and Language
Finds computer code that means the same thing.