When Openness Fails: Lessons from System Safety for Assessing Openness in AI

Published: October 12, 2025 | arXiv ID: 2510.10732v1

By: Tamara Paris, Shalaleh Rismani

Potential Business Impact:

Helps assess whether an AI system's openness actually benefits the people meant to reuse it, not just whether its parts are available.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Most frameworks for assessing the openness of AI systems use narrow criteria such as the availability of data, models, code, documentation, and licensing terms. However, to evaluate whether the intended effects of openness, such as democratization and autonomy, are realized, we need a more holistic approach that considers the context of release: who will reuse the system, for what purposes, and under what conditions. To this end, we adapt five lessons from system safety that offer guidance on how openness can be evaluated at the system level.
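As a rough illustration of the contrast the abstract draws (this sketch is not from the paper, and all names in it are hypothetical), the narrow checklist-style criteria and the contextual factors of a release could be modeled separately, showing why a high checklist score says little on its own about realized openness:

```python
from dataclasses import dataclass, field


@dataclass
class ArtifactOpenness:
    """Narrow, checklist-style criteria used by most existing frameworks."""
    data_available: bool
    model_available: bool
    code_available: bool
    documentation_available: bool
    permissive_license: bool


@dataclass
class ReleaseContext:
    """Contextual factors a system-level assessment would also weigh."""
    intended_reusers: list[str] = field(default_factory=list)   # who will reuse the system
    intended_purposes: list[str] = field(default_factory=list)  # for what purposes
    conditions_of_use: list[str] = field(default_factory=list)  # under what conditions


def checklist_score(artifacts: ArtifactOpenness) -> float:
    """Fraction of narrow criteria met; says nothing about realized openness."""
    flags = [
        artifacts.data_available,
        artifacts.model_available,
        artifacts.code_available,
        artifacts.documentation_available,
        artifacts.permissive_license,
    ]
    return sum(flags) / len(flags)


if __name__ == "__main__":
    release = ArtifactOpenness(True, True, True, False, True)
    context = ReleaseContext(
        intended_reusers=["academic labs", "small developers"],
        intended_purposes=["fine-tuning", "auditing"],
        conditions_of_use=["requires large-scale compute"],
    )
    print(f"Checklist score: {checklist_score(release):.2f}")
    # A high checklist score can coexist with barriers (e.g. compute cost)
    # that keep the intended reusers from actually benefiting.
    print("Context to weigh separately:", context)
```

The point of keeping the two structures separate is that the checklist can be maximized while the contextual conditions still block democratization and autonomy, which is the gap a system-level evaluation is meant to surface.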

Country of Origin
🇨🇦 Canada

Page Count
5 pages

Category
Computer Science:
Computers and Society