From Tokens to Lattices: Emergent Lattice Structures in Language Models
By: Bo Xiong, Steffen Staab
Potential Business Impact:
Helps computers understand ideas like humans do.
Pretrained masked language models (MLMs) have demonstrated an impressive capability to comprehend and encode conceptual knowledge, revealing a lattice structure among concepts. This raises a critical question: how does this conceptualization emerge from MLM pretraining? In this paper, we explore this problem from the perspective of Formal Concept Analysis (FCA), a mathematical framework that derives concept lattices from the observations of object-attribute relationships. We show that the MLM's objective implicitly learns a "formal context" that describes objects, attributes, and their dependencies, which enables the reconstruction of a concept lattice through FCA. We propose a novel framework for concept lattice construction from pretrained MLMs and investigate the origin of the inductive biases of MLMs in lattice structure learning. Our framework differs from previous work because it does not rely on human-defined concepts and allows for discovering "latent" concepts that extend beyond human definitions. We create three datasets for evaluation, and the empirical results verify our hypothesis.
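To make the FCA machinery the abstract relies on concrete, here is a minimal sketch of the two derivation operators and concept enumeration over a formal context. The toy object-attribute table is a hypothetical illustration, not data from the paper, and the enumeration strategy (closing every attribute subset) is a simple textbook approach rather than the authors' method of extracting the context from an MLM.

```python
from itertools import combinations

# Hypothetical formal context: which objects exhibit which attributes.
context = {
    "sparrow": {"can_fly", "has_feathers"},
    "penguin": {"has_feathers", "swims"},
    "goldfish": {"swims"},
}
objects = set(context)
attributes = set().union(*context.values())

def common_attributes(objs):
    """Derivation A -> A': attributes shared by every object in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def common_objects(attrs):
    """Derivation B -> B': objects possessing every attribute in attrs."""
    return {o for o in objects if attrs <= context[o]}

def concepts():
    """Enumerate all formal concepts (extent, intent) by closing each attribute subset."""
    found = set()
    for r in range(len(attributes) + 1):
        for attrs in combinations(sorted(attributes), r):
            extent = frozenset(common_objects(set(attrs)))
            intent = frozenset(common_attributes(extent))
            found.add((extent, intent))
    return found

lattice = concepts()
# Each concept is a fixed point of the composed derivations: extent'' == extent.
for extent, intent in lattice:
    assert common_objects(intent) == set(extent)
```

Ordering these concepts by inclusion of extents yields the concept lattice; the paper's contribution is to show that an MLM's pretraining objective implicitly supplies such a context, so the same construction can be run over model predictions instead of a hand-built table.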
Similar Papers
Reducing Formal Context Extraction: A Newly Proposed Framework from Big Corpora
Computation and Language
Helps computers understand word meanings faster.
Revealing emergent human-like conceptual representations from language prediction
Computation and Language
Computers learn ideas like people from just words.
From Words to Waves: Analyzing Concept Formation in Speech and Text-Based Foundation Models
Computation and Language
Computers learn ideas from talking and reading.