Discovering Semantic Subdimensions through Disentangled Conceptual Representations

Published: August 29, 2025 | arXiv ID: 2508.21436v1

By: Yunhao Zhang, Shaonan Wang, Nan Lin, and more

Potential Business Impact:

Uncovers fine-grained components of word meaning and maps them to how the brain represents them.

Business Areas:
Semantic Search, Internet Services

Understanding the core dimensions of conceptual semantics is fundamental to uncovering how meaning is organized in language and the brain. Existing approaches often rely on predefined semantic dimensions that offer only broad representations, overlooking finer conceptual distinctions. This paper proposes a novel framework for investigating the subdimensions underlying coarse-grained semantic dimensions. Specifically, we introduce a Disentangled Continuous Semantic Representation Model (DCSRM) that decomposes word embeddings from large language models into multiple sub-embeddings, each encoding specific semantic information. Using these sub-embeddings, we identify a set of interpretable semantic subdimensions. To assess their neural plausibility, we apply voxel-wise encoding models to map these subdimensions to brain activation. Our work thus offers finer-grained, interpretable semantic subdimensions of conceptual meaning. Further analyses reveal that semantic dimensions are structured according to distinct principles, with polarity emerging as a key factor driving their decomposition into subdimensions. The neural correlates of the identified subdimensions support their cognitive and neuroscientific plausibility.
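
The abstract's central idea is the DCSRM decomposition: splitting each LLM word embedding into several sub-embeddings, each intended to carry a distinct strand of semantic information. The card does not describe the model's architecture, so the sketch below only illustrates that idea under assumptions: the per-subdimension linear projections, the reconstruction decoder, and the cosine-based orthogonality penalty are hypothetical choices for illustration, not the authors' method.

```python
# A minimal sketch of embedding decomposition into sub-embeddings,
# NOT the paper's DCSRM implementation. Architecture and loss terms
# below are assumptions for illustration only.
import torch
import torch.nn as nn

class SubEmbeddingDecomposer(nn.Module):
    """Split a d_model-dim word embedding into n_sub sub-embeddings of size d_sub."""
    def __init__(self, d_model: int = 768, n_sub: int = 8, d_sub: int = 96):
        super().__init__()
        # One learned linear projection per hypothesized subdimension.
        self.projections = nn.ModuleList(
            [nn.Linear(d_model, d_sub) for _ in range(n_sub)]
        )
        # Decoder reconstructs the original embedding from the
        # concatenated sub-embeddings, so no information is discarded.
        self.decoder = nn.Linear(n_sub * d_sub, d_model)

    def forward(self, x: torch.Tensor):
        subs = [proj(x) for proj in self.projections]  # n_sub tensors, each (B, d_sub)
        recon = self.decoder(torch.cat(subs, dim=-1))  # (B, d_model)
        return subs, recon

def disentanglement_loss(subs, recon, x, ortho_weight: float = 0.1):
    # Reconstruction term keeps the sub-embeddings faithful to the input.
    recon_loss = nn.functional.mse_loss(recon, x)
    # Soft orthogonality between sub-embedding pairs pushes each one to
    # carry distinct semantic information (an assumed proxy objective).
    ortho = 0.0
    for i in range(len(subs)):
        for j in range(i + 1, len(subs)):
            ortho = ortho + torch.cosine_similarity(
                subs[i], subs[j], dim=-1
            ).abs().mean()
    return recon_loss + ortho_weight * ortho

# Toy usage on random stand-ins for LLM word embeddings.
x = torch.randn(32, 768)
model = SubEmbeddingDecomposer()
subs, recon = model(x)
loss = disentanglement_loss(subs, recon, x)
loss.backward()
```

After training, each sub-embedding (or a scalar score derived from it) can be inspected and labeled, which is how interpretable subdimensions would be read out of the decomposition.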
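
The neural-plausibility step, voxel-wise encoding, is a standard analysis: fit a regularized linear model from per-word feature scores to each voxel's response and evaluate held-out prediction accuracy. Below is a minimal sketch on synthetic data, assuming ridge regression and Pearson-correlation scoring; the study's actual stimuli, fMRI preprocessing, and cross-validation scheme are not specified in this card.

```python
# A hedged sketch of voxel-wise encoding on synthetic data, not the
# paper's exact pipeline. Feature counts, regularization, and the
# simulated responses are assumptions for illustration.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_subdims, n_voxels = 500, 8, 1000

# Subdimension scores per word (in the paper these would come from DCSRM).
X = rng.standard_normal((n_words, n_subdims))
# Simulated voxel responses: a linear mix of subdimensions plus noise.
true_weights = rng.standard_normal((n_subdims, n_voxels))
Y = X @ true_weights + 0.5 * rng.standard_normal((n_words, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Multi-output ridge regression with the regularization strength
# selected by cross-validation over a log-spaced grid.
model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_tr, Y_tr)
Y_pred = model.predict(X_te)

# Per-voxel Pearson correlation between predicted and observed responses;
# voxels with high held-out correlation are "encoded" by the subdimensions.
Y_pred_c = Y_pred - Y_pred.mean(axis=0)
Y_te_c = Y_te - Y_te.mean(axis=0)
r = (Y_pred_c * Y_te_c).sum(axis=0) / (
    np.linalg.norm(Y_pred_c, axis=0) * np.linalg.norm(Y_te_c, axis=0)
)
print(f"median held-out voxel correlation: {np.median(r):.3f}")
```

Mapping where in the brain these correlations concentrate is what lets the authors argue that the identified subdimensions are cognitively and neuroscientifically plausible.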

Country of Origin
🇭🇰 Hong Kong

Page Count
18 pages

Category
Computer Science: Computation and Language