Modeling Fairness in Recruitment AI via Information Flow

Published: November 16, 2025 | arXiv ID: 2511.13793v1

By: Mattias Brännström, Themis Dimitra Xanthopoulou, Lili Jiang

Potential Business Impact:

Maps where bias can enter and propagate in AI-assisted hiring pipelines, supporting fairness audits and accountability assignment.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Avoiding bias and understanding the real-world consequences of AI-supported decision-making are critical to address fairness and assign accountability. Existing approaches often focus either on technical aspects, such as datasets and models, or on high-level socio-ethical considerations, rarely capturing how these elements interact in practice. In this paper, we apply an information flow-based modeling framework to a real-world recruitment process that integrates automated candidate matching with human decision-making. Through semi-structured stakeholder interviews and iterative modeling, we construct a multi-level representation of the recruitment pipeline, capturing how information is transformed, filtered, and interpreted across both algorithmic and human components. We identify where biases may emerge, how they can propagate through the system, and what downstream impacts they may have on candidates. This case study illustrates how information flow modeling can support structured analysis of fairness risks, providing transparency across complex socio-technical systems.
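To make the idea concrete, here is a minimal sketch (not the paper's actual framework) of how a recruitment pipeline can be represented as a directed graph of information-flow stages, with hypothetical bias entry points, so that downstream propagation can be traced. All stage names and bias tags below are invented for illustration.

```python
from collections import deque

# Directed edges: information flows from each stage to the next stage(s).
# Stage names are hypothetical examples, not taken from the paper.
pipeline = {
    "cv_submission": ["automated_matching"],
    "automated_matching": ["shortlist"],
    "shortlist": ["recruiter_review"],
    "recruiter_review": ["interview"],
    "interview": ["hiring_decision"],
    "hiring_decision": [],
}

# Hypothetical fairness risks tagged onto specific stages.
bias_sources = {
    "automated_matching": "training-data skew",
    "recruiter_review": "human interpretation bias",
}

def downstream(graph, start):
    """Return all stages reachable from `start`, i.e. where a bias
    introduced at `start` could propagate."""
    seen, queue = set(), deque(graph[start])
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(graph[node])
    return seen

for stage, risk in bias_sources.items():
    reach = sorted(downstream(pipeline, stage))
    print(f"{risk} at '{stage}' can affect: {reach}")
```

A breadth-first traversal like this is one simple way to answer the paper's question of "what downstream impacts a bias may have": any stage reachable from the point where the bias enters is a potential site of impact.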

Country of Origin
🇸🇪 Sweden

Page Count
16 pages

Category
Computer Science:
Computers and Society