Score: 2

From Tokens to Lattices: Emergent Lattice Structures in Language Models

Published: April 4, 2025 | arXiv ID: 2504.08778v1

By: Bo Xiong, Steffen Staab

BigTech Affiliations: Stanford University

Potential Business Impact:

Enables extraction of structured, interpretable concept hierarchies (lattices) from pretrained language models, which could support knowledge organization and the discovery of concepts beyond human-defined taxonomies.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Pretrained masked language models (MLMs) have demonstrated an impressive capability to comprehend and encode conceptual knowledge, revealing a lattice structure among concepts. This raises a critical question: how does this conceptualization emerge from MLM pretraining? In this paper, we explore this problem from the perspective of Formal Concept Analysis (FCA), a mathematical framework that derives concept lattices from observations of object-attribute relationships. We show that the MLM's objective implicitly learns a "formal context" that describes objects, attributes, and their dependencies, which enables the reconstruction of a concept lattice through FCA. We propose a novel framework for concept lattice construction from pretrained MLMs and investigate the origin of the inductive biases of MLMs in lattice structure learning. Our framework differs from previous work because it does not rely on human-defined concepts and allows for discovering "latent" concepts that extend beyond human definitions. We create three datasets for evaluation, and the empirical results verify our hypothesis.
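To make the FCA machinery referenced in the abstract concrete, here is a minimal sketch of the standard derivation operators and a naive concept enumeration over a toy, hand-written formal context. The objects, attributes, and incidence relation below are illustrative assumptions only; in the paper's setting the formal context would instead be derived from an MLM's predictions, which this sketch does not attempt to reproduce.

```python
from itertools import combinations

# Toy formal context (G, M, I): objects, attributes, and their incidence.
# These entries are hypothetical examples, not data from the paper.
objects = ["sparrow", "penguin", "bat", "trout"]
attributes = ["has_wings", "can_fly", "lays_eggs", "lives_in_water"]
incidence = {
    "sparrow": {"has_wings", "can_fly", "lays_eggs"},
    "penguin": {"has_wings", "lays_eggs", "lives_in_water"},
    "bat":     {"has_wings", "can_fly"},
    "trout":   {"lays_eggs", "lives_in_water"},
}

def common_attributes(objs):
    """Derivation A -> A': attributes shared by every object in A."""
    if not objs:
        return set(attributes)
    return set.intersection(*(incidence[g] for g in objs))

def common_objects(attrs):
    """Derivation B -> B': objects possessing every attribute in B."""
    return {g for g in objects if attrs <= incidence[g]}

# A formal concept is a pair (A, B) with A' = B and B' = A.
# Naive enumeration: close every subset of objects (fine for toy contexts).
concepts = set()
for r in range(len(objects) + 1):
    for subset in combinations(objects, r):
        intent = common_attributes(set(subset))
        extent = common_objects(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

# Print concepts ordered by extent size; the subset order on extents
# gives the concept lattice.
for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

The ordering of these concepts by set inclusion of their extents yields the concept lattice; the paper's contribution concerns how such a context, and hence such a lattice, can be recovered from a pretrained MLM rather than from hand-specified object-attribute tables.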

Country of Origin
🇺🇸 🇩🇪 United States, Germany

Page Count
15 pages

Category
Computer Science:
Computation and Language