Score: 1

HAL: Inducing Human-likeness in LLMs with Alignment

Published: January 6, 2026 | arXiv ID: 2601.02813v2

By: Masum Hasan, Junjie Zhao, Ehsan Hoque

Potential Business Impact:

Makes AI chatbots talk more like real people.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Conversational human-likeness plays a central role in human-AI interaction, yet it has remained difficult to define, measure, and optimize. As a result, improvements in human-like behavior are largely driven by scale or broad supervised training, rather than targeted alignment. We introduce Human Aligning LLMs (HAL), a framework for aligning language models to conversational human-likeness using an interpretable, data-driven reward. HAL derives explicit conversational traits from contrastive dialogue data, combines them into a compact scalar score, and uses this score as a transparent reward signal for alignment with standard preference optimization methods. Using this approach, we align models of varying sizes without affecting their overall performance. In large-scale human evaluations, models aligned with HAL are more frequently perceived as human-like in conversation. Because HAL operates over explicit, interpretable traits, it enables inspection of alignment behavior and diagnosis of unintended effects. More broadly, HAL demonstrates how soft, qualitative properties of language, previously outside the scope of alignment, can be made measurable and aligned in an interpretable and explainable way.
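
The abstract describes a pipeline: score explicit conversational traits, combine them into a single scalar reward, and use that reward to rank responses for standard preference optimization (e.g., DPO-style pairs). The sketch below illustrates that idea only; the trait names, toy scoring heuristics, and weights are assumptions for illustration and are not the paper's actual data-driven traits or reward.

```python
# Minimal sketch of the HAL idea as described in the abstract, NOT the authors'
# implementation. Traits, weights, and scorers below are illustrative assumptions.

from dataclasses import dataclass

# Hypothetical interpretable conversational traits, each scored in [0, 1].
TRAIT_WEIGHTS = {
    "informality": 0.3,     # casual word choice
    "brevity": 0.3,         # shorter, conversational replies
    "self_reference": 0.2,  # first-person language
    "hedging": 0.2,         # "maybe", "I think", etc.
}


def score_traits(reply: str) -> dict[str, float]:
    """Toy trait scorers standing in for the paper's data-driven ones."""
    words = [w.lower() for w in reply.split()]
    n = max(len(words), 1)
    return {
        "informality": min(sum(w in {"yeah", "gonna", "kinda", "hey"} for w in words) / n * 10, 1.0),
        "brevity": max(0.0, 1.0 - n / 60),  # shorter replies score higher
        "self_reference": min(sum(w in {"i", "i'm", "me", "my"} for w in words) / n * 5, 1.0),
        "hedging": min(sum(w in {"maybe", "guess", "think", "probably"} for w in words) / n * 10, 1.0),
    }


def hal_reward(reply: str) -> float:
    """Combine trait scores into a compact scalar reward (weighted sum here)."""
    traits = score_traits(reply)
    return sum(TRAIT_WEIGHTS[t] * traits[t] for t in TRAIT_WEIGHTS)


@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # higher scalar reward -> preferred response
    rejected: str


def build_preference_pair(prompt: str, reply_a: str, reply_b: str) -> PreferencePair:
    """Rank two candidate replies by the scalar reward to form a preference pair."""
    if hal_reward(reply_a) >= hal_reward(reply_b):
        return PreferencePair(prompt, chosen=reply_a, rejected=reply_b)
    return PreferencePair(prompt, chosen=reply_b, rejected=reply_a)


if __name__ == "__main__":
    pair = build_preference_pair(
        "How was your weekend?",
        "My weekend was satisfactory. I completed all scheduled activities.",
        "Yeah it was pretty good, I think I mostly just relaxed honestly.",
    )
    print("chosen:", pair.chosen)
    print("rejected:", pair.rejected)
```

Because the reward is a weighted sum over named traits, the contribution of each trait to any preference decision can be inspected directly, which is the interpretability property the abstract emphasizes.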

Repos / Data Links

Page Count
18 pages

Category
Computer Science: Artificial Intelligence