Discovering Semantic Subdimensions through Disentangled Conceptual Representations
By: Yunhao Zhang, Shaonan Wang, Nan Lin, and more
Potential Business Impact:
Finds hidden meanings in words and how brains understand them.
Understanding the core dimensions of conceptual semantics is fundamental to uncovering how meaning is organized in language and the brain. Existing approaches often rely on predefined semantic dimensions that offer only broad representations, overlooking finer conceptual distinctions. This paper proposes a novel framework to investigate the subdimensions underlying coarse-grained semantic dimensions. Specifically, we introduce a Disentangled Continuous Semantic Representation Model (DCSRM) that decomposes word embeddings from large language models into multiple sub-embeddings, each encoding specific semantic information. Using these sub-embeddings, we identify a set of interpretable semantic subdimensions. To assess their neural plausibility, we apply voxel-wise encoding models to map these subdimensions to brain activation. Our work thus yields finer-grained, interpretable semantic subdimensions of conceptual meaning. Further analyses reveal that semantic dimensions are structured according to distinct principles, with polarity emerging as a key factor driving their decomposition into subdimensions. The neural correlates of the identified subdimensions support their cognitive and neuroscientific plausibility.
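As a rough illustration of the two components the abstract describes, the following minimal Python sketch pairs a hypothetical decomposer that splits word embeddings into additive sub-embeddings (a simplified stand-in for DCSRM; only a reconstruction loss is shown, whereas the actual model uses its own architecture and disentanglement objectives) with a voxel-wise ridge encoding model that maps subdimension scores to simulated brain responses. All class names, dimensions, and data below are placeholders, not the paper's actual setup.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import RidgeCV


class SubEmbeddingDecomposer(nn.Module):
    """Hypothetical stand-in for DCSRM: splits a word embedding into K
    additive sub-embeddings, each meant to carry one kind of semantic
    information."""

    def __init__(self, dim: int, n_subdims: int):
        super().__init__()
        # one linear projection per candidate sub-embedding
        self.projections = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_subdims)])

    def forward(self, emb: torch.Tensor):
        subs = [proj(emb) for proj in self.projections]  # K tensors of shape (batch, dim)
        recon = torch.stack(subs).sum(dim=0)             # sub-embeddings should sum back to the input
        return subs, recon


torch.manual_seed(0)
embeddings = torch.randn(100, 300)   # placeholder "LLM word embeddings": 100 words, 300-d
model = SubEmbeddingDecomposer(dim=300, n_subdims=6)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    subs, recon = model(embeddings)
    loss = nn.functional.mse_loss(recon, embeddings)  # real model adds disentanglement terms
    optim.zero_grad()
    loss.backward()
    optim.step()

# Voxel-wise encoding: predict each (simulated) voxel's response from per-word
# subdimension scores with ridge regression, then check fit voxel by voxel.
# In practice this would be evaluated on held-out words, not the training set.
rng = np.random.default_rng(0)
subdim_scores = rng.normal(size=(100, 6))        # placeholder subdimension values per word
voxel_responses = rng.normal(size=(100, 500))    # placeholder fMRI responses for 500 voxels
encoder = RidgeCV(alphas=np.logspace(-2, 3, 10)).fit(subdim_scores, voxel_responses)
pred = encoder.predict(subdim_scores)
per_voxel_r = np.array([np.corrcoef(pred[:, v], voxel_responses[:, v])[0, 1]
                        for v in range(voxel_responses.shape[1])])
print("mean voxel correlation:", per_voxel_r.mean())
```

The per-voxel correlations are the kind of statistic a voxel-wise encoding analysis would map back onto the brain to assess where each subdimension explains activation.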
Similar Papers
Native Logical and Hierarchical Representations with Subspace Embeddings
Machine Learning (CS)
Computers understand words and their meanings better.
Disentangling Linguistic Features with Dimension-Wise Analysis of Vector Embeddings
Computation and Language
Finds what words mean in computer brains.