Native Logical and Hierarchical Representations with Subspace Embeddings
By: Gabriel Moreira, Zita Marinho, Manuel Marques, and more
Potential Business Impact:
Helps computers understand words and their meanings better.
Traditional neural embeddings represent concepts as points, excelling at similarity but struggling with higher-level reasoning and asymmetric relationships. We introduce a novel paradigm: embedding concepts as linear subspaces. This framework inherently models generality via subspace dimensionality and hierarchy via subspace inclusion. It naturally supports set-theoretic operations such as intersection (conjunction), linear sum (disjunction), and orthogonal complement (negation), aligning with classical formal semantics. To enable differentiable learning, we propose a smooth relaxation of orthogonal projection operators, allowing both subspace orientation and dimension to be learned. Our method achieves state-of-the-art results in reconstruction and link prediction on WordNet. Furthermore, on natural language inference benchmarks, our subspace embeddings surpass bi-encoder baselines, offering an interpretable formulation of entailment that is both geometrically grounded and amenable to logical operations.
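To make the geometric picture concrete, here is a minimal NumPy sketch of concept subspaces represented by orthogonal projectors, with conjunction as intersection, disjunction as linear sum, negation as orthogonal complement, and entailment as subspace inclusion. The function names, tolerances, and basis representation are illustrative assumptions; the paper's smooth, differentiable relaxation of projection operators is not reproduced here.

```python
# Sketch of subspace-as-concept operations with NumPy.
# Illustrates the geometric idea only; this is not the paper's learned model.
import numpy as np

def projector(basis):
    """Orthogonal projector P = U U^T onto the span of `basis` columns (via QR)."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

def subspace_sum(p_a, p_b, tol=1e-8):
    """Disjunction: projector onto span(A) + span(B)."""
    # The column space of [P_A | P_B] is exactly the sum of the two subspaces.
    u, s, _ = np.linalg.svd(np.hstack([p_a, p_b]))
    basis = u[:, s > tol]
    return basis @ basis.T

def subspace_intersection(p_a, p_b, tol=1e-8):
    """Conjunction: projector onto span(A) ∩ span(B), using (A ∩ B)^⊥ = A^⊥ + B^⊥."""
    d = p_a.shape[0]
    comp_sum = subspace_sum(np.eye(d) - p_a, np.eye(d) - p_b, tol)
    return np.eye(d) - comp_sum

def complement(p_a):
    """Negation: projector onto the orthogonal complement."""
    return np.eye(p_a.shape[0]) - p_a

def entails(p_child, p_parent, tol=1e-6):
    """Hierarchy: subspace(child) ⊆ subspace(parent) iff P_parent P_child = P_child."""
    return np.allclose(p_parent @ p_child, p_child, atol=tol)

# Toy example in R^3: a 1-D "dog" subspace inside a 2-D "animal" subspace.
animal = projector(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]))  # span{e1, e2}
dog = projector(np.array([[1.0], [0.0], [0.0]]))                    # span{e1}
print(entails(dog, animal))   # True: dog entails animal
print(entails(animal, dog))   # False: the relation is asymmetric
```

Note how subspace dimensionality tracks generality here: the broader concept ("animal") occupies more dimensions, and inclusion of projectors gives an asymmetric entailment test that point embeddings with symmetric similarity cannot express directly.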
Similar Papers
Decomposing Representation Space into Interpretable Subspaces with Unsupervised Learning
Machine Learning (CS)
Finds hidden "folders" inside AI brains.
Latent Planning via Embedding Arithmetic: A Contrastive Approach to Strategic Reasoning
Machine Learning (CS)
Teaches computers to plan moves in games like chess.
Extracting Conceptual Spaces from LLMs Using Prototype Embeddings
Computation and Language
Teaches computers to understand concepts like humans.