Transparent and Fair Profiling in Employment Services: Evidence from Switzerland
By: Tim Räz
Potential Business Impact:
Helps people find jobs without unfair guessing.
Long-term unemployment (LTU) is a challenge for both jobseekers and public employment services. Statistical profiling tools are increasingly used to predict LTU risk, but some of them are opaque, black-box machine learning models, which raises issues of transparency and fairness. Using administrative data from Switzerland, this paper investigates whether interpretable models can serve as an alternative, comparing traditional statistical, interpretable, and black-box models in terms of predictive performance, interpretability, and fairness. Explainable boosting machines, a recent class of interpretable models, perform nearly as well as the best black-box models, and model sparsity, feature smoothing, and fairness mitigation can further enhance transparency and fairness with only minor losses in performance. These findings suggest that interpretable profiling provides an accountable and trustworthy alternative to black-box models without compromising performance.
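The core idea behind explainable boosting machines is that each feature gets its own additive shape function, learned by boosting, so every feature's contribution to the risk score can be inspected directly. The toy sketch below illustrates that idea on synthetic data with a minimal EBM-style cyclic one-feature booster built on scikit-learn; it is not the paper's model or data, and the feature setup and hyperparameters are illustrative assumptions only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic binary outcome with additive per-feature effects on the log-odds
# (a stand-in for LTU risk; not the Swiss administrative data).
n = 2000
X = rng.normal(size=(n, 2))
logit_true = 1.5 * X[:, 0] - 2.0 * np.abs(X[:, 1]) + 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_true))).astype(int)

def fit_ebm_like(X, y, rounds=200, lr=0.1):
    """Cyclic one-feature boosting: each feature accumulates its own
    shape function; the final score is the sum over features, which is
    what makes the model's per-feature contributions inspectable."""
    n, d = X.shape
    F = np.zeros(n)                      # current additive score (log-odds)
    shape_fns = [[] for _ in range(d)]   # per-feature lists of small trees
    for _ in range(rounds):
        for j in range(d):
            p = 1.0 / (1.0 + np.exp(-F))
            resid = y - p                # negative gradient of the log-loss
            tree = DecisionTreeRegressor(max_depth=2)
            tree.fit(X[:, [j]], resid)   # each tree sees ONE feature only
            F += lr * tree.predict(X[:, [j]])
            shape_fns[j].append(tree)
    return shape_fns

def predict_proba(shape_fns, X, lr=0.1):
    F = np.zeros(X.shape[0])
    for j, trees in enumerate(shape_fns):
        for tree in trees:               # evaluate feature j's shape function
            F += lr * tree.predict(X[:, [j]])
    return 1.0 / (1.0 + np.exp(-F))

model = fit_ebm_like(X, y)
acc = float(((predict_proba(model, X) > 0.5) == y).mean())
```

Because each tree splits on a single feature, summing a feature's trees over a grid of values recovers its shape function for plotting, which is the transparency property the paper contrasts with black-box models. The production-grade implementation the paper's model family refers to is `ExplainableBoostingClassifier` from the `interpret` package.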
Similar Papers
Fairness-Aware and Interpretable Policy Learning
Econometrics
Makes computer decisions fair and understandable.
Enhancing ML Models Interpretability for Credit Scoring
Machine Learning (CS)
Helps banks predict loan risk with clear rules.
From Black Box to Transparency: Enhancing Automated Interpreting Assessment with Explainable AI in College Classrooms
Computation and Language
Helps computers judge translation quality better.