Automatic Bias Detection in Source Code Review
By: Yoseph Berhanu Alebachew, Chris Brown
Potential Business Impact:
Finds unfairness in computer code reviews.
Bias is an inherent threat to human decision-making, including decisions made during software development. Extensive research has demonstrated the presence of biases at various stages of the software development life-cycle. Notably, code reviews are highly susceptible to prejudice-induced biases, and individuals are often unaware of these biases as they occur. Developing methods to automatically detect these biases is crucial for addressing the associated challenges. Recent advancements in visual data analytics have shown promising results in detecting potential biases by analyzing user interaction patterns. In this project, we propose a controlled experiment to extend this approach to detect potentially biased outcomes in code reviews by observing how reviewers interact with the code. We employ the "spotlight model of attention", a cognitive framework in which a reviewer's gaze is tracked to determine their focus areas on the review screen. This focus, identified through gaze tracking, serves as an indicator of the reviewer's areas of interest or concern. We plan to analyze the sequence of gaze focus using advanced sequence modeling techniques, including Markov Models, Recurrent Neural Networks (RNNs), and Conditional Random Fields (CRFs). These techniques will help us identify patterns that may suggest biased interactions. We anticipate that the ability to automatically detect potentially biased interactions in code reviews will significantly reduce unnecessary push-backs, enhance operational efficiency, and foster greater diversity and inclusion in software development. This approach helps not only identify biases but also create a more equitable development environment by mitigating them effectively.
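To make the sequence-modeling idea concrete, the following is a minimal sketch of the simplest of the three techniques named above: a first-order Markov model fit on gaze-region sequences. The region names (`diff`, `author_info`, `comments`, `file_tree`) and the toy sessions are hypothetical illustrations, not part of the project's actual protocol; the idea is that a session whose transitions deviate strongly from typical review behavior (e.g., repeated fixation on author identity) scores a lower likelihood and can be flagged for closer inspection.

```python
import math

# Hypothetical screen regions a reviewer's gaze can land on during a review.
REGIONS = ["diff", "author_info", "comments", "file_tree"]

def fit_markov(sequences, smoothing=1.0):
    """Estimate first-order transition probabilities from gaze-region
    sequences, with Laplace smoothing so unseen transitions keep
    nonzero probability."""
    counts = {a: {b: smoothing for b in REGIONS} for a in REGIONS}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs

def avg_log_likelihood(probs, seq):
    """Average per-transition log-likelihood of a gaze sequence under
    the fitted model; lower values mean more atypical behavior."""
    if len(seq) < 2:
        return 0.0
    ll = sum(math.log(probs[a][b]) for a, b in zip(seq, seq[1:]))
    return ll / (len(seq) - 1)

# Toy "typical" sessions: attention mostly cycles through the diff itself.
typical = [
    ["file_tree", "diff", "diff", "comments", "diff"],
    ["diff", "diff", "comments", "diff", "diff"],
]
model = fit_markov(typical)

# A session fixating on author identity scores lower than a typical one,
# flagging it as a potentially biased interaction.
suspect = ["author_info", "author_info", "diff", "author_info"]
print(avg_log_likelihood(model, suspect) < avg_log_likelihood(model, typical[0]))  # True
```

In practice the RNN or CRF variants would replace the transition table with a learned model over richer features (fixation duration, code context), but the flagging logic, scoring how far a session departs from typical gaze behavior, stays the same.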
Similar Papers
Perception-Driven Bias Detection in Machine Learning via Crowdsourced Visual Judgment
Machine Learning (CS)
Finds unfairness in computer decisions using people's eyes.
Classifier-to-Bias: Toward Unsupervised Automatic Bias Detection for Visual Classifiers
CV and Pattern Recognition
Finds hidden unfairness in computer programs.
Ensuring Medical AI Safety: Interpretability-Driven Detection and Mitigation of Spurious Model Behavior and Associated Data
Artificial Intelligence
Fixes AI mistakes in medical pictures.