TAIBOM: Bringing Trustworthiness to AI-Enabled Systems

Published: October 2, 2025 | arXiv ID: 2510.02169v1

By: Vadim Safronov, Anthony McCaigue, Nicholas Allott and more

Potential Business Impact:

Makes AI systems safer and more trustworthy.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

The growing integration of open-source software and AI-driven technologies has introduced new layers of complexity into the software supply chain, challenging existing methods for dependency management and system assurance. While Software Bills of Materials (SBOMs) have become critical for enhancing transparency and traceability, current frameworks fall short in capturing the unique characteristics of AI systems -- namely, their dynamic, data-driven nature and the loosely coupled dependencies across datasets, models, and software components. These challenges are compounded by fragmented governance structures and the lack of robust tools for ensuring integrity, trust, and compliance in AI-enabled environments. In this paper, we introduce Trusted AI Bill of Materials (TAIBOM) -- a novel framework extending SBOM principles to the AI domain. TAIBOM provides (i) a structured dependency model tailored for AI components, (ii) mechanisms for propagating integrity statements across heterogeneous AI pipelines, and (iii) a trust attestation process for verifying component provenance. We demonstrate how TAIBOM supports assurance, security, and compliance across AI workflows, highlighting its advantages over existing standards such as SPDX and CycloneDX. This work lays the foundation for trustworthy and verifiable AI systems through structured software transparency.
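The abstract's core ideas, a dependency model for AI components and integrity statements that propagate across a pipeline, can be sketched in a few lines. The sketch below is illustrative only: the class names, fields, and hashing scheme are assumptions for demonstration, not TAIBOM's actual schema. It shows how hashing each artefact together with the integrity digests of its dependencies makes any upstream change (say, a tampered dataset) invalidate the attestation of every downstream component, such as a trained model.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Component:
    """Hypothetical TAIBOM-style entry for a dataset, model, or code artefact."""
    name: str
    kind: str                      # e.g. "dataset", "model", "code"
    content: bytes                 # stand-in for the artefact itself
    dependencies: list = field(default_factory=list)

    def digest(self) -> str:
        # Integrity statement: hash the artefact plus the digests of
        # everything it depends on, so a change anywhere upstream
        # changes every downstream digest.
        h = hashlib.sha256(self.content)
        for dep in self.dependencies:
            h.update(dep.digest().encode())
        return h.hexdigest()

# Toy AI pipeline: a model depends on a dataset and its training code.
dataset = Component("example-dataset", "dataset", b"raw image bytes ...")
trainer = Component("train.py", "code", b"def train(): ...")
model = Component("classifier-v1", "model", b"model weights ...",
                  dependencies=[dataset, trainer])

attested = model.digest()          # recorded at attestation time
dataset.content = b"tampered!"     # upstream modification
assert model.digest() != attested  # downstream integrity check now fails
```

A real framework would replace the raw hash chain with signed attestations and a provenance-verification step, but the propagation principle is the same.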

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
6 pages

Category
Computer Science:
Software Engineering