LoCaL: Countering Surface Bias in Code Evaluation Metrics
By: Simantika Bhattacharjee Dristi, Matthew B. Dwyer
Potential Business Impact:
Stress-tests code-quality metrics by exposing hidden functional differences.
With the increasing popularity of large language models (LLMs) and LLM-based agents, reliable and effective code evaluation metrics (CEMs) have become crucial for progress across several software engineering tasks. While popular benchmarks often provide test cases to assess the correctness of generated code, crafting and executing test cases is expensive. Reference-based CEMs provide a cheaper alternative by scoring a candidate program based on its functional similarity to a reference. Although prior research has focused on reporting the weak correlation between these CEMs and functional correctness, the causes are only assumed, and plausible solutions remain unexplored. In this work, we critically evaluate four state-of-the-art reference-based CEMs, revealing their strong bias towards surface-level features rather than code functionality. Despite this surface bias, current evaluation datasets for these CEMs rarely include code pairs that are surface-similar yet functionally dissimilar, or functionally similar yet surface-dissimilar. To address this gap, we propose LoCaL (Looks Can Lie), a CEM evaluation benchmark with 3,117 code pairs at both the method and program levels. Each pair is labeled with a functional similarity score and targets regions where CEMs are likely to perform poorly. The functional similarity scores are calculated through differential fuzzing, which eliminates the need for predefined test cases and, at the same time, improves the reliability of the scores by executing an order of magnitude more tests than prior work. We find that all four CEMs show significant performance degradation on LoCaL compared to the baselines. Finally, based on our findings, we draw the implication that exposing CEMs to LoCaL-like data might facilitate the development of metrics that are robust to surface bias.
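To make the differential-fuzzing idea concrete, the sketch below shows how a functional similarity score can be derived without predefined test cases: random inputs are generated, both programs are executed on each input, and the score is the fraction of inputs on which their outputs agree. This is an illustrative sketch only; the example code pair, the function names (`reference`, `candidate`, `functional_similarity`), and the input generator are hypothetical and are not taken from the LoCaL benchmark or the paper's actual pipeline.

```python
import random

# Hypothetical example pair: surface-similar, functionally dissimilar.
def reference(xs):
    """Return the largest element of a non-empty list."""
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def candidate(xs):
    """Looks almost identical, but the flipped comparison returns the minimum."""
    best = xs[0]
    for x in xs[1:]:
        if x < best:  # single-token difference from the reference
            best = x
    return best

def functional_similarity(f, g, trials=10_000, seed=0):
    """Differential-fuzzing sketch: run both programs on random inputs and
    report the fraction of inputs on which their outputs agree."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(1, 20))]
        if f(list(xs)) == g(list(xs)):
            agree += 1
    return agree / trials

if __name__ == "__main__":
    # Surface similarity is high (one token differs), yet the
    # differential-fuzzing score exposes the functional gap.
    print(f"functional similarity = {functional_similarity(reference, candidate):.3f}")
```

A surface-level metric would rate this pair as nearly identical, while the execution-based score is low; code pairs with exactly this kind of mismatch between surface similarity and functional similarity are the regions LoCaL is built to cover.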
Similar Papers
Beyond Surface Similarity: Evaluating LLM-Based Test Refactorings with Structural and Semantic Awareness
Software Engineering
Measures how well AI improves computer code.
CFCEval: Evaluating Security Aspects in Code Generated by Large Language Models
Software Engineering
Tests computer code for mistakes and safety.
LLM-based Vulnerability Discovery through the Lens of Code Metrics
Cryptography and Security
Finds computer bugs by looking at code patterns.