Algorithmic UDAP
By: Talia Gillis, Riley Stacy, Sam Brumer, and more
This paper compares two legal frameworks -- disparate impact (DI) and unfair, deceptive, or abusive acts or practices (UDAP) -- as tools for evaluating algorithmic discrimination, focusing on the example of fair lending. While DI has traditionally served as the foundation of fair lending law, recent regulatory efforts have invoked UDAP, a doctrine rooted in consumer protection, as an alternative means to address algorithmic discrimination harms. We formalize and operationalize both doctrines in a simulated lending setting to assess how they evaluate algorithmic disparities. While some regulatory interpretations treat UDAP as operating similarly to DI, we argue it is an independent and analytically distinct framework. In particular, UDAP's "unfairness" prong introduces elements such as avoidability of harm and proportionality balancing, while its "deceptive" and "abusive" standards may capture forms of algorithmic harm that elude DI analysis. At the same time, translating UDAP into algorithmic settings exposes unresolved ambiguities, underscoring the need for further regulatory guidance if it is to serve as a workable standard.
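To make the abstract's idea of "operationalizing" a doctrine in a simulated lending setting more concrete, here is a minimal sketch of a disparate-impact-style disparity check on synthetic loan data. Everything in it is illustrative: the group labels, score distribution, approval threshold, and the four-fifths benchmark (borrowed from employment law) are assumptions, not the paper's actual formalization of DI or UDAP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated lending data: two groups, a credit score, and a
# simple algorithmic approval rule. None of this mirrors the paper's model.
n = 10_000
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
score = rng.normal(650 - 15 * group, 50, n)   # group B drawn with lower mean
approved = score >= 660                       # algorithmic approval threshold

# Disparate-impact-style disparity: ratio of approval rates across groups,
# compared here against the four-fifths benchmark as a stand-in threshold.
rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
di_ratio = rate_b / rate_a
print(f"approval rate A: {rate_a:.3f}, B: {rate_b:.3f}, ratio: {di_ratio:.3f}")
print("flags DI concern" if di_ratio < 0.8 else "no DI flag under 4/5 rule")
```

A UDAP-style analysis, by contrast, would need additional inputs the sketch omits, such as whether the harm was reasonably avoidable by borrowers and whether it is outweighed by countervailing benefits, which is part of what the paper argues makes the two frameworks analytically distinct.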
Similar Papers
Price discrimination, algorithmic decision-making, and European non-discrimination law
Computers and Society
Protects people from unfair online prices.
Bridging Research Gaps Between Academic Research and Legal Investigations of Algorithmic Discrimination
Computers and Society
Helps lawyers fight unfair computer decisions.