Measuring LLM Code Generation Stability via Structural Entropy
By: Yewei Song, Tiezhu Sun, Xunzhu Tang, and more
Potential Business Impact:
Checks whether AI-generated computer code stays consistent.
Assessing the stability of code generation from large language models (LLMs) is essential for judging their reliability in real-world development. We extend prior "structural-entropy concepts" to the program domain by pairing entropy with abstract syntax tree (AST) analysis. For any fixed prompt, we collect the multiset of depth-bounded AST subtrees in each generated program and treat their relative frequencies as a probability distribution. We then measure stability in two complementary ways: (i) Jensen-Shannon divergence, a symmetric, bounded indicator of structural overlap, and (ii) a Structural Cross-Entropy ratio that highlights missing high-probability patterns. Both metrics admit structural-only and token-aware variants, enabling separate views of control-flow shape and identifier-level variability. Unlike pass@k, BLEU, or CodeBLEU, our metrics are reference-free, language-agnostic, and execution-independent. We benchmark several leading LLMs on standard code generation tasks, demonstrating that AST-driven structural entropy reveals nuances in model consistency and robustness. The method runs in O(n·d) time and requires no external tests, providing a lightweight addition to the code-generation evaluation toolkit.
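To make the pipeline concrete, here is a minimal Python sketch of the idea, not the authors' implementation: parse each generated program with Python's ast module, collect depth-bounded subtree shapes, normalize their counts into a distribution, and compare two programs with Jensen-Shannon divergence. The helper names (subtree_signature, subtree_distribution) and the depth bound of 3 are illustrative assumptions; the Structural Cross-Entropy ratio described in the abstract would be computed over the same distributions.

```python
# Hedged sketch of depth-bounded AST subtree distributions + JSD.
# Structural-only variant: only node type names are kept; a token-aware
# variant would also serialize identifiers and literals.

import ast
import math
from collections import Counter

def subtree_signature(node: ast.AST, depth: int) -> str:
    """Serialize a subtree of bounded depth using node type names only."""
    children = list(ast.iter_child_nodes(node))
    if depth == 0 or not children:
        return type(node).__name__
    inner = ",".join(subtree_signature(c, depth - 1) for c in children)
    return f"{type(node).__name__}({inner})"

def subtree_distribution(source: str, max_depth: int = 3) -> dict[str, float]:
    """Relative frequencies of all depth-bounded subtrees in one program."""
    tree = ast.parse(source)
    counts = Counter(subtree_signature(n, max_depth) for n in ast.walk(tree))
    total = sum(counts.values())
    return {sig: c / total for sig, c in counts.items()}

def jensen_shannon_divergence(p: dict[str, float], q: dict[str, float]) -> float:
    """Symmetric, bounded divergence (0..1 with log base 2) between two
    subtree distributions."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a: dict[str, float]) -> float:
        return sum(a[k] * math.log2(a[k] / m[k]) for k in keys if a.get(k, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Usage: compare two candidate completions generated for the same prompt.
prog_a = "def add(a, b):\n    return a + b\n"
prog_b = "def add(x, y):\n    result = x + y\n    return result\n"
jsd = jensen_shannon_divergence(subtree_distribution(prog_a),
                                subtree_distribution(prog_b))
print(f"JSD = {jsd:.3f}")  # 0.0 = identical structure, 1.0 = disjoint
```

Because the comparison needs only parsed ASTs, the same sketch transfers to other languages by swapping in a different parser (e.g., a tree-sitter grammar), which is what makes the metric reference-free and execution-independent.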
Similar Papers
Dynamic Stability of LLM-Generated Code
Programming Languages
Makes computer code run faster and more reliably.
TreeDiff: AST-Guided Code Generation with Diffusion LLMs
Computation and Language
Helps computers write correct computer code.
Information-Theoretic Detection of Unusual Source Code Changes
Software Engineering
Finds weird code changes automatically.