Objective Metrics for Evaluating Large Language Models Using External Data Sources

Published: August 1, 2025 | arXiv ID: 2508.08277v1

By: Haoze Du, Richard Li, Edward Gehringer

Potential Business Impact:

Provides a fair, automated, bias-minimized way to measure how well LLMs perform.

Evaluating the performance of Large Language Models (LLMs) is a critical yet challenging task, particularly when aiming to avoid subjective assessments. This paper proposes a framework for deriving objective metrics from external data sources, such as class textual materials from different semesters, to assess LLM outputs across various tasks. By utilizing well-defined benchmarks, factual datasets, and structured evaluation pipelines, the approach ensures consistent, reproducible, and bias-minimized measurements. The framework emphasizes automation and transparency in scoring, reducing reliance on human interpretation while ensuring alignment with real-world applications. This method addresses the limitations of subjective evaluation methods, providing a scalable solution for performance assessment in educational, scientific, and other high-stakes domains.
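To make the idea of an automated, reproducible scoring pipeline concrete, here is a minimal sketch in Python. It assumes a factual reference dataset of question–answer pairs and uses a simple normalized exact-match metric; the dataset path, JSON schema, and `llm_answer` callable are hypothetical illustrations, not the paper's actual implementation.

```python
import json
import re


def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for fair comparison."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return " ".join(text.split())


def exact_match_score(prediction: str, reference: str) -> float:
    """Return 1.0 if normalized strings match, else 0.0 -- a fully objective metric."""
    return float(normalize(prediction) == normalize(reference))


def evaluate(dataset_path: str, llm_answer) -> float:
    """Score an LLM over a reference dataset and report mean exact-match accuracy.

    `llm_answer` is any callable mapping a question string to the model's answer;
    the dataset is assumed to be a JSON list of {"question": ..., "answer": ...}.
    """
    with open(dataset_path) as f:
        examples = json.load(f)
    scores = [
        exact_match_score(llm_answer(ex["question"]), ex["answer"])
        for ex in examples
    ]
    return sum(scores) / len(scores) if scores else 0.0
```

Because scoring is rule-based rather than judged by a human or another model, repeated runs over the same dataset yield identical results, which is the reproducibility property the abstract emphasizes.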

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Computation and Language