Score: 1

Understanding Subword Compositionality of Large Language Models

Published: August 25, 2025 | arXiv ID: 2508.17953v1

By: Qiwei Peng, Yekun Chai, Anders Søgaard

Potential Business Impact:

Helps computers understand words by combining word parts.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) take sequences of subwords as input, requiring them to effectively compose subword representations into meaningful word-level representations. In this paper, we present a comprehensive set of experiments to probe how LLMs compose subword information, focusing on three key aspects: structural similarity, semantic decomposability, and form retention. Our analysis of the experiments suggests that the five LLM families studied can be classified into three distinct groups, likely reflecting differences in their underlying composition strategies. Specifically, we observe (i) three distinct patterns in the evolution of structural similarity between subword compositions and whole-word representations across layers; (ii) strong performance when probing, layer by layer, their sensitivity to semantic decomposability; and (iii) three distinct patterns when probing sensitivity to formal features, e.g., character sequence length. These findings provide valuable insights into the compositional dynamics of LLMs and highlight different compositional patterns in how LLMs encode and integrate subword information.
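The kind of layer-wise probe described in the abstract can be illustrated with a minimal sketch (not the authors' code). Assuming a HuggingFace-style model such as GPT-2, the snippet below compares, at every layer, the mean-pooled hidden states of a word's subwords (a simple composition) against the hidden state of its final subword, used here as a stand-in word-level representation. The model choice, the example word, the cosine-similarity metric, and the choice of reference representation are all illustrative assumptions, not the paper's exact setup.

```python
# Illustrative layer-wise compositionality probe (a sketch, not the paper's implementation).
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any HuggingFace model that exposes hidden states

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()


def layerwise_states(word: str) -> torch.Tensor:
    """Hidden states for one word, shape (num_layers + 1, num_subwords, hidden_dim)."""
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple: (embedding layer, layer 1, ..., layer L)
    return torch.stack(outputs.hidden_states).squeeze(1)


def composition_similarity(word: str) -> list[float]:
    """Per layer, cosine similarity between the mean-pooled subword states
    (a simple composition) and the final-subword state (a common stand-in
    for the word-level representation in causal LMs)."""
    states = layerwise_states(word)      # (L + 1, n_subwords, d)
    composed = states.mean(dim=1)        # average the subword vectors at each layer
    reference = states[:, -1, :]         # last-subword vector at each layer
    sims = torch.nn.functional.cosine_similarity(composed, reference, dim=-1)
    return sims.tolist()


if __name__ == "__main__":
    # "unbelievable" typically splits into several subwords under a BPE tokenizer.
    for layer, sim in enumerate(composition_similarity("unbelievable")):
        print(f"layer {layer:2d}: cosine similarity = {sim:.3f}")
```

Plotting these per-layer similarities for words drawn from different models is one way to visualize the kind of distinct layer-wise patterns the paper groups the LLM families by; the paper's actual probes for semantic decomposability and form retention use their own task-specific setups.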

Country of Origin
🇩🇰 🇨🇭 Denmark, Switzerland

Page Count
12 pages

Category
Computer Science:
Computation and Language