Investigating the Capabilities and Limitations of Machine Learning for Identifying Bias in English Language Data with Information and Heritage Professionals
By: Lucy Havens, Benjamin Bach, Melissa Terras, and more
Potential Business Impact:
Flags unfair language in text so people can review it.
Despite numerous efforts to mitigate their biases, ML systems continue to harm already-marginalized people. While predominant ML approaches assume bias can be removed and fair models can be created, we show that these are not always possible, or even desirable, goals. We reframe the problem of ML bias by creating models that identify biased language, drawing attention to a dataset's biases rather than trying to remove them. Then, through a workshop, we evaluated the models for a specific use case: the workflows of information and heritage professionals. Our findings demonstrate the limitations of ML for identifying bias: bias is contextual, approaches to mitigating it can privilege some communities while oppressing others, and some degree of bias is inevitable. We demonstrate the need to expand ML approaches to bias and fairness, and we provide a mixed-methods approach for investigating whether removing bias or achieving fairness is feasible in a given ML use case.
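To make the reframing concrete, here is a minimal sketch of the "surface, don't remove" idea: flagging potentially biased language in a catalogue description for human review instead of rewriting it. The paper itself trains ML models for this task; the rule-based lexicon, labels, and example text below are hypothetical stand-ins used only to illustrate the workflow, not the authors' models or data.

```python
# Illustrative sketch only: flag potentially biased language for human review,
# rather than rewriting or "removing" it. The lexicon, labels, and sample
# description are hypothetical placeholders, not the paper's models or data.
import re
from dataclasses import dataclass

# Hypothetical lexicon of contested descriptors, grouped by bias type.
LEXICON = {
    "gendered": ["spinster", "man and wife"],
    "colonial": ["discovered", "primitive", "natives"],
}

@dataclass
class Flag:
    term: str
    label: str
    start: int
    end: int
    context: str

def flag_biased_language(text: str, window: int = 30) -> list[Flag]:
    """Return lexicon matches with surrounding context, so an information
    or heritage professional can judge each case rather than auto-editing."""
    flags = []
    for label, terms in LEXICON.items():
        for term in terms:
            for m in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
                start, end = m.span()
                context = text[max(0, start - window): end + window]
                flags.append(Flag(m.group(), label, start, end, context))
    return sorted(flags, key=lambda f: f.start)

if __name__ == "__main__":
    description = ("Papers of an explorer who discovered the island "
                   "and described its natives.")
    for f in flag_biased_language(description):
        print(f"[{f.label}] '{f.term}' at {f.start}-{f.end}: ...{f.context}...")
```

In the paper's setting the flagging would come from trained classifiers rather than a fixed word list, but the output serves the same purpose: drawing a professional's attention to a dataset's biases while leaving the judgement, and the record, with them.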
Similar Papers
Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment
Artificial Intelligence
Removes gender bias from hiring AI.
Invisible Filters: Cultural Bias in Hiring Evaluations Using Large Language Models
Computers and Society
AI hiring tools unfairly judge people from different countries.
Fine-Grained Bias Detection in LLM: Enhancing detection mechanisms for nuanced biases
Computation and Language
Finds hidden unfairness in AI language.