Score: 3

Geometry of Decision Making in Language Models

Published: November 25, 2025 | arXiv ID: 2511.20315v1

By: Abhinav Joshi, Divyanshu Bhatt, Ashutosh Modi

BigTech Affiliations: Samsung

Potential Business Impact:

Reveals where in an LLM's layers decision-relevant structure forms, offering a geometric diagnostic useful for interpretability, layer selection, and model comparison.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) show strong generalization across diverse tasks, yet the internal decision-making processes behind their predictions remain opaque. In this work, we study the geometry of hidden representations in LLMs through the lens of intrinsic dimension (ID), focusing specifically on decision-making dynamics in a multiple-choice question answering (MCQA) setting. We perform a large-scale study with 28 open-weight transformer models, estimating ID across layers using multiple estimators while also quantifying per-layer performance on MCQA tasks. Our findings reveal a consistent ID pattern across models: early layers operate on low-dimensional manifolds, middle layers expand this space, and later layers compress it again, converging to decision-relevant representations. Together, these results suggest LLMs implicitly learn to project linguistic inputs onto structured, low-dimensional manifolds aligned with task-specific decisions, providing new geometric insights into how generalization and reasoning emerge in language models.
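The abstract does not name the specific ID estimators used, so as a minimal sketch, here is the widely used TwoNN estimator (Facco et al., 2017) applied to a toy point cloud shaped like per-layer hidden states; the function name `twonn_id` and the synthetic data are illustrative, not from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(X: np.ndarray) -> float:
    """Estimate intrinsic dimension via TwoNN: the ratio of each
    point's second- to first-nearest-neighbor distance follows a
    Pareto law whose shape parameter is the manifold dimension."""
    nn = NearestNeighbors(n_neighbors=3).fit(X)
    dists, _ = nn.kneighbors(X)           # column 0 is the point itself
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / r1                          # ratios mu_i >= 1
    # Maximum-likelihood fit of the Pareto shape: d = N / sum(log mu_i)
    return len(mu) / np.sum(np.log(mu))

# Toy usage: points on a 2D plane embedded in a 768-dim ambient space,
# mimicking hidden states of shape (n_tokens, d_model) at one layer.
rng = np.random.default_rng(0)
plane = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 768))
print(f"estimated ID: {twonn_id(plane):.2f}")  # ~2, far below 768
```

Running such an estimator on hidden states extracted layer by layer would trace the expand-then-compress ID profile the paper reports.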

Country of Origin
🇮🇳 India, 🇰🇷 South Korea

Repos / Data Links

Page Count
50 pages

Category
Computer Science:
Machine Learning (CS)