Confident, Calibrated, or Complicit: Probing the Trade-offs between Safety Alignment and Ideological Bias in Language Models in Detecting Hate Speech
By: Sanjeevan Selvaganapathy, Mehwish Nasim
Potential Business Impact:
Helps computers spot hate speech better.
We investigate the efficacy of Large Language Models (LLMs) in detecting implicit and explicit hate speech, examining whether models with minimal safety alignment (uncensored) might provide more objective classification capabilities than their heavily aligned (censored) counterparts. While uncensored models theoretically offer a less constrained perspective, free from moral guardrails that could bias classification decisions, our results reveal a surprising trade-off: censored models significantly outperform their uncensored counterparts in both accuracy and robustness, achieving 78.7% versus 64.1% strict accuracy. However, this enhanced performance comes with its own limitation: the safety alignment acts as a strong ideological anchor, making censored models resistant to persona-based influence, while uncensored models prove highly malleable to ideological framing. Furthermore, we identify critical failures across all models in understanding nuanced language such as irony. We also find alarming fairness disparities in performance across different targeted groups, as well as systemic overconfidence that renders self-reported certainty unreliable. These findings challenge the notion of LLMs as objective arbiters and highlight the need for more sophisticated auditing frameworks that account for fairness, calibration, and ideological consistency.
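The abstract rests on three kinds of measurement: strict accuracy, accuracy disparities across targeted groups, and the calibration of self-reported confidence. As a rough illustration only (this is not the authors' code, and the record fields label, prediction, target_group, and confidence are hypothetical placeholders), the Python sketch below shows one common way an auditing harness could compute these quantities.

```python
# Illustrative sketch of an auditing harness (assumed, not the paper's code):
# strict accuracy, per-group fairness gaps, and expected calibration error
# over a list of prediction records.

from collections import defaultdict

def strict_accuracy(records):
    """Fraction of examples whose predicted label exactly matches the gold label."""
    return sum(r["prediction"] == r["label"] for r in records) / len(records)

def per_group_accuracy(records):
    """Strict accuracy broken down by the group targeted in each post."""
    groups = defaultdict(list)
    for r in records:
        groups[r["target_group"]].append(r)
    return {group: strict_accuracy(rs) for group, rs in groups.items()}

def expected_calibration_error(records, n_bins=10):
    """Average gap between self-reported confidence and empirical accuracy, weighted by bin size."""
    bins = defaultdict(list)
    for r in records:
        idx = min(int(r["confidence"] * n_bins), n_bins - 1)
        bins[idx].append(r)
    ece = 0.0
    for rs in bins.values():
        acc = strict_accuracy(rs)
        avg_conf = sum(r["confidence"] for r in rs) / len(rs)
        ece += (len(rs) / len(records)) * abs(acc - avg_conf)
    return ece

# Tiny example with made-up records:
records = [
    {"label": "hate", "prediction": "hate", "target_group": "religion", "confidence": 0.95},
    {"label": "not_hate", "prediction": "hate", "target_group": "gender", "confidence": 0.90},
]
print(strict_accuracy(records), per_group_accuracy(records), expected_calibration_error(records))
```

A large per-group gap from per_group_accuracy would correspond to the fairness disparities the abstract describes, and a high expected calibration error to the systemic overconfidence that makes self-reported certainty unreliable.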
Similar Papers
Are LLMs Good Safety Agents or a Propaganda Engine?
Computation and Language
Tests if AI is safe or politically censored.
Consiglieres in the Shadow: Understanding the Use of Uncensored Large Language Models in Cybercrimes
Cryptography and Security
Finds hidden bad AI models used by criminals.
A Comprehensive Study of Implicit and Explicit Biases in Large Language Models
Machine Learning (CS)
Finds and fixes unfairness in AI writing.