Score: 2

Beyond Surface Similarity: Evaluating LLM-Based Test Refactorings with Structural and Semantic Awareness

Published: June 7, 2025 | arXiv ID: 2506.06767v1

By: Wendkûuni C. Ouédraogo, Yinghua Li, Xueqi Dang, and more

Potential Business Impact:

Measures how well AI-refactored unit tests preserve behavior while improving readability and structure.

Business Areas:
Semantic Search, Internet Services

Large Language Models (LLMs) are increasingly employed to automatically refactor unit tests, aiming to enhance readability, naming, and structural clarity while preserving functional behavior. However, evaluating such refactorings remains challenging: traditional metrics like CodeBLEU are overly sensitive to renaming and structural edits, whereas embedding-based similarities capture semantics but ignore readability and modularity. We introduce CTSES, a composite metric that integrates CodeBLEU, METEOR, and ROUGE-L to balance behavior preservation, lexical quality, and structural alignment. CTSES is evaluated on over 5,000 test suites automatically refactored by GPT-4o and Mistral-Large-2407, using Chain-of-Thought prompting, across two established Java benchmarks: Defects4J and SF110. Our results show that CTSES yields more faithful and interpretable assessments, better aligned with developer expectations and human intuition than existing metrics.
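For intuition, below is a minimal Python sketch of how such a composite score might be assembled from off-the-shelf metric implementations. The equal weighting, the `ctses_like` name, and the toy inputs are illustrative assumptions; this is not the paper's published CTSES formula.

```python
# Hypothetical sketch of a CTSES-style composite: the abstract describes
# integrating CodeBLEU, METEOR, and ROUGE-L, but the equal weighting below
# is an illustrative assumption, not the paper's published aggregation.
# Requires: pip install nltk rouge-score
# (METEOR also needs: nltk.download("wordnet") once.)
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

def ctses_like(codebleu: float, reference: str, candidate: str,
               weights: tuple = (1 / 3, 1 / 3, 1 / 3)) -> float:
    """Blend a precomputed CodeBLEU value with METEOR and ROUGE-L.

    CodeBLEU is passed in precomputed (e.g., from the `codebleu` pip
    package) to keep this sketch dependency-light.
    """
    # METEOR in recent NLTK versions expects pre-tokenized inputs.
    meteor = meteor_score([reference.split()], candidate.split())
    # ROUGE-L F-measure rewards preserved token subsequences (structure).
    rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(
        reference, candidate)["rougeL"].fmeasure
    w_cb, w_m, w_r = weights
    return w_cb * codebleu + w_m * meteor + w_r * rouge_l

# Toy example: an original assertion vs. a rename-only refactoring.
original = "assertEquals(expected, result);"
refactored = "assertEquals(expectedTotal, computedTotal);"
print(ctses_like(codebleu=0.85, reference=original, candidate=refactored))
```

The point of the blend is that a rename-only edit tanks the lexical components while CodeBLEU's structural channels stay high, so the composite degrades gracefully rather than penalizing the refactoring as if behavior had changed.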

Country of Origin
🇸🇬 🇹🇷 🇱🇺 Singapore, Turkey, Luxembourg

Page Count
6 pages

Category
Computer Science: Software Engineering