Towards Systematic Specification and Verification of Fairness Requirements: A Position Paper
By: Qusai Ramadan, Jukka Ruohonen, Abhishek Tiwari, and more
Potential Business Impact:
Helps software systems treat everyone fairly, regardless of personal characteristics.
Decisions suggested by improperly designed software systems may discriminate against people based on protected characteristics, such as gender and ethnicity. Previous studies attribute such undesired behavior to flaws in algorithmic design or biased data. However, these studies overlook that discrimination often results from the absence of well-specified fairness requirements and of mechanisms to verify them. Because experts' knowledge about fairness is often implicit, specifying precise and verifiable fairness requirements is difficult. In related domains, such as security engineering, knowledge graphs have proven effective in formalizing knowledge to assist requirements specification and verification. To address the lack of formal mechanisms for specifying and verifying fairness requirements, we propose the development of a knowledge graph-based framework for fairness. In this paper, we discuss the challenges, the research questions, and a roadmap towards addressing them.
Similar Papers
A Gray Literature Study on Fairness Requirements in AI-enabled Software Engineering
Software Engineering
Makes AI fair, not just smart.
The AI Fairness Myth: A Position Paper on Context-Aware Bias
Computers and Society
Makes AI treat people fairly, even if that means treating some groups differently.
Argumentative Debates for Transparent Bias Detection [Technical Report]
Artificial Intelligence
Finds unfairness in AI by explaining its reasoning.