Alignment Debt: The Hidden Work of Making AI Usable
By: Cumi Oyemike, Elizabeth Akpan, Pierre Hervé-Berdys
Frontier LLMs are optimised around high-resource assumptions about language, knowledge, devices, and connectivity. Whilst widely accessible, they often misfit conditions in the Global South. As a result, users must often perform additional work to make these systems usable. We term this alignment debt: the user-side burden that arises when AI systems fail to align with cultural, linguistic, infrastructural, or epistemic contexts. We develop and validate a four-part taxonomy of alignment debt through a survey of 411 AI users in Kenya and Nigeria. Among respondents measurable on this taxonomy (n = 385), prevalence is: Cultural and Linguistic (51.9%), Infrastructural (43.1%), Epistemic (33.8%), and Interaction (14.0%). Country comparisons show a divergence in Infrastructural and Interaction debt, challenging one-size-fits-Africa assumptions. Alignment debt is associated with compensatory labour, but responses vary by debt type: users facing Epistemic challenges verify outputs at significantly higher rates (91.5% vs. 80.8%; p = 0.037), and verification intensity correlates with cumulative debt burden (Spearman's rho = 0.147, p = 0.004). In contrast, Infrastructural and Interaction debts show weak or null associations with verification, indicating that some forms of misalignment cannot be resolved through verification alone. These findings show that fairness must be judged not only by model metrics but also by the burden imposed on users at the margins, compelling context-aware safeguards that alleviate alignment debt in Global South settings. The alignment debt framework provides an empirically grounded way to measure user burden, informing both design practice and emerging African AI governance efforts.
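The abstract reports two kinds of statistical evidence: a comparison of verification rates between groups and a rank correlation between cumulative debt burden and verification intensity. The sketch below illustrates how such analyses are typically run with `scipy.stats`; the contingency counts are made up (back-calculated from the reported percentages and n = 385) and the correlation data are synthetic, so the outputs do not reproduce the paper's exact test statistics, and the paper may have used different test variants.

```python
import numpy as np
from scipy.stats import chi2_contingency, spearmanr

# 1) Verification rates with vs. without Epistemic debt.
#    Counts are illustrative approximations: ~33.8% of 385 respondents
#    (~130) with Epistemic debt, 91.5% of whom verify, vs. 80.8% of the rest.
table = np.array([
    [119, 11],   # Epistemic debt:    verified / did not verify
    [206, 49],   # no Epistemic debt: verified / did not verify
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# 2) Spearman rank correlation between cumulative debt burden
#    (number of debt types experienced, 0-4) and verification
#    intensity (hypothetical 1-5 scale), on synthetic data.
rng = np.random.default_rng(0)
debt_burden = rng.integers(0, 5, size=385)
verif_intensity = np.clip(debt_burden + rng.integers(-2, 3, size=385), 0, 4) + 1
rho, p_rho = spearmanr(debt_burden, verif_intensity)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.4f}")
```

A chi-square test on a 2x2 table and Spearman's rho are standard choices here because the group variable is binary and verification intensity is ordinal, so no normality assumption is needed.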
Similar Papers
The Burden of Interactive Alignment with Inconsistent Preferences
Artificial Intelligence
Examines the user-side effort of interactively aligning AI systems when user preferences are inconsistent.
Chat Bankman-Fried: an Exploration of LLM Alignment in Finance
Computers and Society
Tests whether LLM agents in financial settings will act unethically, such as misusing funds, on a company's behalf.
Cross-cultural value alignment frameworks for responsible AI governance: Evidence from China-West comparative analysis
Computers and Society
Compares Chinese and Western evidence on cross-cultural value alignment for responsible AI governance.