Emotions Where Art Thou: Understanding and Characterizing the Emotional Latent Space of Large Language Models

Published: October 24, 2025 | arXiv ID: 2510.22042v1

By: Benjamin Reichman, Adar Avsian, Larry Heck

Potential Business Impact:

Enables language models to recognize the emotional tone of text and adjust it in a controlled way without changing the meaning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This work investigates how large language models (LLMs) internally represent emotion by analyzing the geometry of their hidden-state space. The paper identifies a low-dimensional emotional manifold and shows that emotional representations are directionally encoded, distributed across layers, and aligned with interpretable dimensions. These structures are stable across depth and generalize to eight real-world emotion datasets spanning five languages. Cross-domain alignment yields low error and strong linear probe performance, indicating a universal emotional subspace. Within this space, internal emotion perception can be steered while preserving semantics using a learned intervention module, with especially strong control for basic emotions across languages. These findings reveal a consistent and manipulable affective geometry in LLMs and offer insight into how they internalize and process emotion.
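To make the idea of directionally encoded, linearly probeable emotion concrete, the sketch below trains a simple linear probe on one layer's hidden states and then nudges generation along a mean-difference "emotion direction." This is an illustrative approximation, not the paper's code: it assumes GPT-2 as a stand-in model, a tiny hand-labeled toy dataset, an arbitrary middle layer, and mean-difference steering in place of the authors' learned intervention module.

```python
# Minimal sketch (not the authors' implementation) of linear emotion probing
# and directional steering in an LLM's hidden-state space.
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # stand-in; the paper studies larger LLMs
LAYER = 6            # arbitrary middle layer; the paper finds structure across depth

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Toy labeled examples (hypothetical; the paper uses eight multilingual emotion datasets)
texts = ["I can't stop smiling today!", "This is the worst day ever.",
         "What a wonderful surprise!", "I feel completely hopeless."]
labels = np.array([1, 0, 1, 0])  # 1 = joy, 0 = sadness

def hidden_at_layer(text):
    """Mean-pool the hidden state produced by transformer block LAYER."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    # hidden_states[0] is the embedding layer, so block LAYER's output is index LAYER + 1
    return out.hidden_states[LAYER + 1][0].mean(dim=0)

X = torch.stack([hidden_at_layer(t) for t in texts]).numpy()

# Linear probe: if emotion is directionally encoded, a linear classifier suffices.
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe train accuracy:", probe.score(X, labels))

# Steering direction: difference of class means, a crude proxy for the
# paper's learned intervention module.
direction = torch.tensor(X[labels == 1].mean(0) - X[labels == 0].mean(0),
                         dtype=torch.float32)
direction = direction / direction.norm()

def steer_hook(module, inputs, output, alpha=4.0):
    """Add the 'joy' direction to block LAYER's output during generation."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * direction
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
prompt = tok("The weather today makes me feel", return_tensors="pt")
steered = model.generate(**prompt, max_new_tokens=20, do_sample=False)
handle.remove()
print(tok.decode(steered[0], skip_special_tokens=True))
```

With a real emotion corpus and a trained steering module rather than a class-mean difference, this is the kind of setup the reported probe accuracies and semantics-preserving interventions would be measured on; the hook-based additive shift is only one simple way to intervene in the residual stream.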

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
Computation and Language