When Openness Fails: Lessons from System Safety for Assessing Openness in AI
By: Tamara Paris, Shalaleh Rismani
Potential Business Impact:
Checks whether releasing an AI system as open actually delivers benefits like democratization and user autonomy.
Most frameworks for assessing the openness of AI systems rely on narrow criteria such as the availability of data, models, code, documentation, and licensing terms. However, to evaluate whether the intended effects of openness, such as democratization and autonomy, are actually realized, we need a more holistic approach that considers the context of release: who will reuse the system, for what purposes, and under what conditions. To this end, we adapt five lessons from system safety that offer guidance on how openness can be evaluated at the system level.
Similar Papers
Opening the Scope of Openness in AI
Artificial Intelligence
Argues for a broader definition of openness in AI than code and model availability alone.
Safety by Measurement: A Systematic Literature Review of AI Safety Evaluation Methods
Artificial Intelligence
Reviews methods for evaluating AI systems for dangerous capabilities and misaligned goals.
The 2025 OpenAI Preparedness Framework does not guarantee any AI risk mitigation practices: a proof-of-concept for affordance analyses of AI safety policies
Computers and Society
Shows the framework leaves company leadership free to release models without guaranteed risk mitigations.