Image, Word and Thought: A More Challenging Language Task for the Iterated Learning Model
By: Hyoyeon Lee, Seth Bullock, Conor Houghton
Potential Business Impact:
Teaches computers to create simple picture languages.
The iterated learning model simulates the transmission of language from generation to generation in order to explore how the constraints imposed by language transmission facilitate the emergence of language structure. Despite each modelled language learner starting from a blank slate, the presence of a bottleneck limiting the number of utterances to which the learner is exposed can lead to the emergence of language that lacks ambiguity, is governed by grammatical rules, and is consistent over successive generations; that is, one that is expressive, compositional and stable. The recent introduction of a more computationally tractable and ecologically valid semi-supervised iterated learning model, combining supervised and unsupervised learning within an autoencoder architecture, has enabled exploration of language transmission dynamics for much larger meaning-signal spaces. Here, for the first time, the model has been successfully applied to a language learning task involving the communication of much more complex meanings: seven-segment display images. Agents in this model are able to learn and transmit a language that is expressive: distinct codes are employed for all 128 glyphs; compositional: signal components consistently map to meaning components; and stable: the language does not change from generation to generation.
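The core mechanism described above, a generational teacher-learner chain with a transmission bottleneck over seven-segment meanings, can be sketched in a few lines. This is a hand-written toy illustration of the general iterated-learning setup, not the paper's semi-supervised autoencoder model; all names and the hand-built character code are assumptions made for the example.

```python
import itertools
import random

# All 128 meanings: which of the seven segments (labelled 0-6) are lit.
MEANINGS = list(itertools.product([0, 1], repeat=7))

def speak(meaning, code):
    """Produce a signal: one character per segment state (a compositional language)."""
    return "".join(code[(i, bit)] for i, bit in enumerate(meaning))

def learn(examples):
    """A blank-slate learner infers the per-segment code from observed examples."""
    code = {}
    for meaning, signal in examples:
        for i, bit in enumerate(meaning):
            code[(i, bit)] = signal[i]
    return code

random.seed(0)
# Initial compositional code: lit segment i -> uppercase letter, unlit -> lowercase.
code = {(i, b): ("ABCDEFG"[i] if b else "abcdefg"[i]) for i in range(7) for b in (0, 1)}

for generation in range(5):
    # Bottleneck: the learner hears only a handful of the 128 possible utterances.
    # (The two extreme meanings are included so every segment state is witnessed.)
    bottleneck = [(0,) * 7, (1,) * 7] + random.sample(MEANINGS, 8)
    examples = [(m, speak(m, code)) for m in bottleneck]
    code = learn(examples)  # the learner's language becomes the next teacher's

# Because the language is compositional, it survives the bottleneck intact:
# all 128 meanings still receive distinct signals after five generations.
print(len({speak(m, code) for m in MEANINGS}))  # 128
```

The point of the sketch is that a compositional code lets the learner generalise from ten utterances to all 128 meanings, which is why the bottleneck pressures language toward compositionality and stability.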
Similar Papers
A Compressive-Expressive Communication Framework for Compositional Representations
Machine Learning (CS)
Teaches computers to build new ideas from simple parts.
Tell Me What You See: An Iterative Deep Learning Framework for Image Captioning
CV and Pattern Recognition
Makes computers describe pictures better.
Large Language Models as Model Organisms for Human Associative Learning
Machine Learning (CS)
Helps computers learn like brains, remembering new things.