AI Transparency Atlas: Framework, Scoring, and Real-Time Model Card Evaluation Pipeline
By: Akhmadillo Mamirov, Faiaz Azmain, Hanyu Wang
AI model documentation is fragmented across platforms and inconsistent in structure, preventing policymakers, auditors, and users from reliably assessing safety claims, data provenance, and version-level changes. We analyzed documentation from five frontier models (Gemini 3, Grok 4.1, Llama 4, GPT-5, and Claude 4.5) and 100 Hugging Face model cards, identifying 947 unique section names with extreme naming variation. Usage information alone appeared under 97 distinct labels. Using the EU AI Act Annex IV and the Stanford Transparency Index as baselines, we developed a weighted transparency framework with 8 sections and 23 subsections that prioritizes safety-critical disclosures (Safety Evaluation: 25%, Critical Risk: 20%) over technical specifications. We implemented an automated multi-agent pipeline that extracts documentation from public sources and scores completeness through LLM-based consensus. Evaluating 50 models across vision, multimodal, open-source, and closed-source systems cost less than $3 in total and revealed systematic gaps. Frontier labs (xAI, Microsoft, Anthropic) achieve approximately 80% compliance, while most providers fall below 60%. Safety-critical categories show the largest deficits: deception behaviors, hallucinations, and child safety evaluations account for 148, 124, and 116 aggregate points lost, respectively, across all evaluated models.
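To make the weighted scoring concrete, below is a minimal sketch of how per-section completeness scores could be rolled up into a single compliance percentage. Only the Safety Evaluation (25%) and Critical Risk (20%) weights come from the abstract; the remaining section grouping, the weight split, and the `weighted_compliance` helper are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the weighted compliance roll-up described in the abstract.
# Assumption: section completeness is scored in [0.0, 1.0] by the
# LLM-based consensus step; weights sum to 1.0 across the 8 sections.

SECTION_WEIGHTS = {
    "safety_evaluation": 0.25,  # weight stated in the abstract
    "critical_risk": 0.20,      # weight stated in the abstract
    # The other 6 sections share the remaining 55% (hypothetical split,
    # collapsed into one bucket here for brevity).
    "other_sections": 0.55,
}


def weighted_compliance(section_scores: dict[str, float],
                        weights: dict[str, float] = SECTION_WEIGHTS) -> float:
    """Combine per-section completeness scores into one weighted
    compliance percentage; missing sections score 0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    total = sum(w * section_scores.get(name, 0.0)
                for name, w in weights.items())
    return round(100 * total, 1)


# Example: strong safety coverage but thin remaining documentation.
print(weighted_compliance({
    "safety_evaluation": 0.9,
    "critical_risk": 0.8,
    "other_sections": 0.5,
}))  # -> 66.0
```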