Human-aligned AI Model Cards with Weighted Hierarchy Architecture

Published: October 8, 2025 | arXiv ID: 2510.06989v2

By: Pengyue Yang, Haolin Jin, Qingwen Zeng, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Helps practitioners compare and select the best-suited AI model for a given task.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The proliferation of Large Language Models (LLMs) has led to a burgeoning ecosystem of specialized, domain-specific models. While this rapid growth accelerates innovation, it has simultaneously created significant challenges in model discovery and adoption. Users struggle to navigate this landscape due to inconsistent, incomplete, and imbalanced documentation across platforms. Existing documentation frameworks, such as Model Cards and FactSheets, attempt to standardize reporting but are often static, predominantly qualitative, and lack the quantitative mechanisms needed for rigorous cross-model comparison. This gap exacerbates model underutilization and hinders responsible adoption. To address these shortcomings, we introduce the Comprehensive Responsible AI Model Card Framework (CRAI-MCF), a novel approach that transitions from static disclosures to actionable, human-aligned documentation. Grounded in Value Sensitive Design (VSD), CRAI-MCF is built upon an empirical analysis of 240 open-source projects, distilling 217 parameters into an eight-module, value-aligned architecture. Our framework introduces a quantitative sufficiency criterion to operationalize evaluation and enables rigorous cross-model comparison under a unified scheme. By balancing technical, ethical, and operational dimensions, CRAI-MCF empowers practitioners to efficiently assess, select, and adopt LLMs with greater confidence and operational integrity.
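The abstract's "quantitative sufficiency criterion" over a weighted, module-based architecture can be illustrated with a minimal sketch. Note the assumptions: the module names, weights, and threshold below are illustrative placeholders invented for this example — the paper's actual eight modules, 217 parameters, and weighting scheme are not given in this summary.

```python
# Hypothetical module names and weights for a CRAI-MCF-style card.
# These are NOT from the paper; they only illustrate the idea of a
# weighted, module-level completeness score.
MODULE_WEIGHTS = {
    "intended_use": 0.15,
    "training_data": 0.15,
    "evaluation": 0.15,
    "ethical_considerations": 0.15,
    "limitations": 0.10,
    "safety": 0.10,
    "maintenance": 0.10,
    "licensing": 0.10,
}


def sufficiency_score(filled: dict[str, float]) -> float:
    """Weighted completeness across modules.

    Each value in `filled` is the fraction of that module's documentation
    parameters that are actually present (0.0 to 1.0); missing modules
    count as 0.0.
    """
    return sum(MODULE_WEIGHTS[m] * filled.get(m, 0.0) for m in MODULE_WEIGHTS)


def meets_sufficiency(filled: dict[str, float], threshold: float = 0.8) -> bool:
    # The threshold value is an assumption, not taken from the paper.
    return sufficiency_score(filled) >= threshold
```

Because every card is scored under the same weighting scheme, two models' documentation can be compared directly by score — the kind of rigorous cross-model comparison the abstract describes.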

Country of Origin
🇨🇳 🇦🇺 China, Australia

Page Count
11 pages

Category
Computer Science:
Software Engineering