Vintage Code, Modern Judges: Meta-Validation in Low Data Regimes
By: Ora Nova Fandina, Gal Amram, Eitan Farchi, and more
Potential Business Impact:
Helps pick reliable AI judges for explaining old code.
Application modernization in legacy languages such as COBOL, PL/I, and REXX faces an acute shortage of resources, both in expert availability and in high-quality human evaluation data. While LLM-as-a-Judge (LaaJ) evaluators offer a scalable alternative to expert review, their reliability must be validated before they can be trusted in high-stakes workflows. Without principled validation, organizations risk a circular evaluation loop in which unverified LaaJs are used to assess model outputs, potentially reinforcing unreliable judgments and compromising downstream deployment decisions. Although various automated approaches to validating LaaJs have been proposed, alignment with human judgment remains a widely used and conceptually grounded validation strategy. In many real-world domains, however, human-labeled evaluation data is severely limited, making it difficult to assess how well a LaaJ aligns with human judgment. We introduce SparseAlign, a formal framework for assessing LaaJ alignment with sparse human-labeled data. SparseAlign combines a novel pairwise-confidence concept with a score-sensitive alignment metric; together, these capture ranking consistency and score proximity, enabling reliable evaluator selection even when traditional statistical methods are ineffective due to limited annotated examples. SparseAlign was applied internally to select LaaJs for COBOL code explanation. The top-aligned evaluators were integrated into assessment workflows, guiding model release decisions. We present a case study of four LaaJs to demonstrate SparseAlign's utility in real-world evaluation scenarios.
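
The abstract describes SparseAlign only at a high level: a pairwise-confidence notion combined with a score-sensitive alignment metric, computed over a small set of human-labeled examples. As a rough illustration of that general idea (not the authors' actual definitions), a minimal Python sketch might look like the following; the function name, the confidence proxy based on human score gaps, and the proximity weighting are all assumptions made for illustration.

from itertools import combinations

def sparse_alignment_score(judge_scores, human_scores, tau=1.0):
    """
    Illustrative (hypothetical) alignment metric between a LaaJ and sparse
    human labels. This is NOT the paper's SparseAlign definition.

    judge_scores, human_scores: dicts mapping example id -> numeric score.
    Only ids present in both are used, mimicking a sparse human-labeled set.
    """
    shared = sorted(set(judge_scores) & set(human_scores))
    if len(shared) < 2:
        raise ValueError("Need at least two jointly scored examples.")

    total_weight = 0.0
    agreement = 0.0
    for a, b in combinations(shared, 2):
        human_gap = human_scores[a] - human_scores[b]
        judge_gap = judge_scores[a] - judge_scores[b]

        # "Pairwise confidence" proxy: pairs the humans separate clearly
        # contribute more than near-ties (assumed weighting, not the paper's).
        weight = abs(human_gap)
        if weight == 0:
            continue

        # Ranking consistency: do judge and human order the pair the same way?
        same_order = 1.0 if human_gap * judge_gap > 0 else 0.0

        # Score proximity: penalize large disagreement in score differences.
        proximity = 1.0 / (1.0 + abs(human_gap - judge_gap) / tau)

        agreement += weight * same_order * proximity
        total_weight += weight

    return agreement / total_weight if total_weight else 0.0


# Toy usage: pick the better-aligned of two candidate judges
# given only three human-labeled examples.
human = {"ex1": 5, "ex2": 2, "ex3": 4}
judge_a = {"ex1": 4.5, "ex2": 1.0, "ex3": 4.0, "ex4": 3.0}
judge_b = {"ex1": 2.0, "ex2": 5.0, "ex3": 3.0, "ex4": 1.0}

print("judge A:", round(sparse_alignment_score(judge_a, human), 3))
print("judge B:", round(sparse_alignment_score(judge_b, human), 3))

In this toy run, judge A orders every pair the way the human does and stays close on score gaps, so it would be preferred over judge B. The actual framework presumably grounds this selection in a principled pairwise-confidence definition rather than the ad hoc weight used here.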
Similar Papers
From Code to Courtroom: LLMs as the New Software Judges
Software Engineering
Lets computers judge the quality of other code.
LaajMeter: A Framework for LaaJ Evaluation
Computation and Language
Tests AI judges to make sure they are fair.
Are We on the Right Way to Assessing LLM-as-a-Judge?
Computation and Language
Checks if AI judges are fair and honest.