BabyLM's First Words: Word Segmentation as a Phonological Probing Task
By: Zébulon Goriely, Paula Buttery
Potential Business Impact:
Teaches computers to understand word sounds in many languages.
Language models provide a key framework for studying linguistic theories based on prediction, but phonological analysis using large language models (LLMs) is difficult: few phonological benchmarks exist beyond English, and the standard input representation used in LLMs (subwords of graphemes) is not suitable for analyzing the representation of phonemes. In this work, we demonstrate how word segmentation can be used as a phonological probing task, allowing us to study the representations learned by phoneme-based language models trained on child-directed speech across 31 languages. Following computational models of word segmentation, we present unsupervised methods for extracting word boundaries from a trained model, exploiting the observation that prediction error peaks at the start of words. Using linear probes, we also show that these models implicitly track word boundaries, even when boundaries never appear in their training data. This cross-lingual work corroborates statistical learning theories of acquisition and empirically motivates new methods for training subword tokenizers.
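The boundary-extraction idea can be sketched concretely. The following is a minimal illustration, not the paper's implementation: it assumes a trained phoneme language model has already supplied a per-phoneme surprisal value, and hypothesizes a word boundary wherever surprisal is a local peak, reflecting the observation that prediction error spikes at word onsets. The phoneme sequence and surprisal values below are invented for illustration.

```python
# Sketch of unsupervised word-boundary extraction from per-phoneme surprisal.
# Assumption: a trained phoneme LM provides the surprisal values; the ones
# here are made up for a toy utterance.

def boundaries_from_surprisal(surprisals):
    """Return indices i where a boundary is hypothesised before phoneme i:
    positions whose surprisal exceeds both neighbours (a local peak)."""
    cuts = []
    for i in range(1, len(surprisals) - 1):
        if surprisals[i] > surprisals[i - 1] and surprisals[i] > surprisals[i + 1]:
            cuts.append(i)
    return cuts

# Toy phonemes for "the dog ran" with invented surprisal values:
phonemes = ["dh", "ah", "d", "ao", "g", "r", "ae", "n"]
surprisal = [3.1, 0.8, 4.2, 1.0, 0.9, 3.8, 1.1, 0.7]
print(boundaries_from_surprisal(surprisal))  # → [2, 5]: before "d" and "r"
```

A real system would derive surprisal as the negative log-probability each phoneme receives from the model, and could use thresholds or relative peaks rather than strict local maxima.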
Similar Papers
Segment First or Comprehend First? Explore the Limit of Unsupervised Word Segmentation with Large Language Models
Computation and Language
Helps computers understand words in any language.
Using Context to Improve Word Segmentation
Computation and Language
Helps babies learn words by listening to patterns.
Subword models struggle with word learning, but surprisal hides it
Computation and Language
Helps computers learn words like kids do.