Can We Reliably Rank Model Performance across Domains without Labeled Data?
By: Veronica Rammouz, Aaron Gonzalez, Carlos Cruzportillo, and more
Potential Business Impact:
Helps gauge how well language models will perform in new domains without needing labeled test data.
Estimating model performance without labels is an important goal for understanding how NLP models generalize. While prior work has proposed measures based on dataset similarity or predicted correctness, it remains unclear when these estimates produce reliable performance rankings across domains. In this paper, we analyze the factors that affect ranking reliability using a two-step evaluation setup with four base classifiers and several large language models as error predictors. Experiments on the GeoOLID and Amazon Reviews datasets, spanning 15 domains, show that large language model-based error predictors produce stronger and more consistent rank correlations with true accuracy than drift-based or zero-shot baselines. Our analysis reveals two key findings: ranking is more reliable when performance differences across domains are larger, and when the error model's predictions align with the base model's true failure patterns. These results clarify when performance estimation methods can be trusted and provide guidance for their use in cross-domain model evaluation.
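To make the setup concrete, here is a minimal sketch (not the authors' code) of the ranking check described above: an error predictor's per-example correctness scores are averaged into a per-domain accuracy estimate, and a Spearman rank correlation measures whether those estimates order domains the same way true accuracy does. All function names, variable names, and the toy data are illustrative assumptions.

```python
# Minimal sketch: rank-correlation check between estimated and true per-domain accuracy.
# Assumes `predicted_correct` holds the error model's probability that the base
# classifier is correct on each example, and `is_correct` the true 0/1 outcome.

import numpy as np
from scipy.stats import spearmanr

def estimated_accuracy(predicted_correct):
    """Mean predicted-correctness over a domain approximates its accuracy."""
    return float(np.mean(predicted_correct))

def rank_reliability(domains):
    """domains: dict mapping domain name -> (predicted_correct, is_correct) arrays.
    Returns the Spearman rank correlation between estimated and true accuracy."""
    est = [estimated_accuracy(pred) for pred, _ in domains.values()]
    true = [float(np.mean(corr)) for _, corr in domains.values()]
    rho, p_value = spearmanr(est, true)
    return rho, p_value

# Toy usage with three hypothetical domains (random data, for illustration only):
rng = np.random.default_rng(0)
toy = {
    f"domain_{i}": (
        rng.uniform(0.4, 0.9, size=200),            # error model's correctness scores
        rng.integers(0, 2, size=200).astype(float),  # true correctness of base model
    )
    for i in range(3)
}
print(rank_reliability(toy))
```

A high correlation on held-out domains would indicate that the label-free estimates can be trusted for ranking; the paper's finding that reliability improves when cross-domain accuracy gaps are large corresponds to the ranking being less sensitive to estimation noise in that regime.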
Similar Papers
Quantifying Language Disparities in Multilingual Large Language Models
Computation and Language
Tests computer language fairness better, especially for rare languages.
How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models
Computation and Language
Finds better search results for new questions.
Zero-Shot Grammar Competency Estimation Using Large Language Model Generated Pseudo Labels
Computation and Language
Teaches computers to grade spoken grammar without human help.