When to Invoke: Refining LLM Fairness with Toxicity Assessment
By: Jing Ren, Bowen Li, Ziqi Xu, and more
Large Language Models (LLMs) are increasingly used for toxicity assessment in online moderation systems, where fairness across demographic groups is essential for equitable treatment. However, LLMs often produce inconsistent toxicity judgements for subtle expressions, particularly those involving implicit hate speech, revealing underlying biases that are difficult to correct through standard training. This raises a key question that existing approaches often overlook: when should corrective mechanisms be invoked to ensure fair and reliable assessments? To address this, we propose FairToT, an inference-time framework that enhances LLM fairness through prompt-guided toxicity assessment. FairToT identifies cases where demographic-related variation is likely to occur and determines when additional assessment should be applied. In addition, we introduce two interpretable fairness indicators that detect such cases and improve inference consistency without modifying model parameters. Experiments on benchmark datasets show that FairToT reduces group-level disparities while maintaining stable and reliable toxicity predictions, demonstrating that inference-time refinement offers an effective and practical approach for fairness improvement in LLM-based toxicity assessment systems. The source code can be found at https://aisuko.github.io/fair-tot/.
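To make the inference-time idea concrete, below is a minimal sketch of how a "when to invoke" check could work: score demographic-substituted variants of an input, compute a simple group-disparity indicator, and trigger an extra assessment pass only when the disparity crosses a threshold. The `score_toxicity` and `refine` callables, the spread-based indicator, and the threshold value are illustrative assumptions, not the paper's actual fairness indicators or prompting strategy.

```python
import statistics
from typing import Callable, Iterable

def demographic_variants(text: str, groups: Iterable[str], placeholder: str = "[GROUP]") -> list[str]:
    """Instantiate a templated input once per demographic group mention."""
    return [text.replace(placeholder, g) for g in groups]

def fairness_indicator(scores: list[float]) -> float:
    """Illustrative indicator: spread of toxicity scores across group variants."""
    return max(scores) - min(scores)

def assess_with_refinement(
    text: str,
    groups: Iterable[str],
    score_toxicity: Callable[[str], float],  # hypothetical base LLM toxicity scorer
    refine: Callable[[str], float],          # hypothetical extra assessment pass
    threshold: float = 0.2,
) -> float:
    """Return a toxicity score, invoking refinement only when group-level
    variation suggests the base judgement may be unreliable."""
    variants = demographic_variants(text, groups)
    scores = [score_toxicity(v) for v in variants]
    if fairness_indicator(scores) > threshold:
        # Disparity detected: run the costlier prompt-guided re-assessment
        # on each variant and aggregate the refined judgements.
        refined = [refine(v) for v in variants]
        return statistics.mean(refined)
    # Scores are already consistent across groups; keep the cheap judgement.
    return statistics.mean(scores)
```

The design point this sketch illustrates is that the corrective mechanism is conditional: the extra inference cost is paid only for inputs the indicator flags as showing demographic-related variation, and no model parameters are modified.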