Limited Linguistic Diversity in Embodied AI Datasets
By: Selma Wanna, Agnes Luhtaru, Jonathan Salfity, and more
Potential Business Impact:
Finds that the language in robot training datasets is repetitive and overly simple.
Language plays a critical role in Vision-Language-Action (VLA) models, yet the linguistic characteristics of the datasets used to train and evaluate these systems remain poorly documented. In this work, we present a systematic dataset audit of several widely used VLA corpora, aiming to characterize what kinds of instructions these datasets actually contain and how much linguistic variety they provide. We quantify instruction language along complementary dimensions, including lexical variety, duplication and overlap, semantic similarity, and syntactic complexity. Our analysis shows that many datasets rely on highly repetitive, template-like commands with limited structural variation, yielding a narrow distribution of instruction forms. We position these findings as descriptive documentation of the language signal available in current VLA training and evaluation data, intended to support more detailed dataset reporting, more principled dataset selection, and targeted curation or augmentation strategies that broaden language coverage.
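To illustrate the kind of audit the abstract describes, here is a minimal sketch of two generic instruction-language metrics, a type-token ratio for lexical variety and an exact-duplicate rate. The `instructions` list, function names, and metric definitions are illustrative assumptions, not the authors' exact measures or code.

```python
# Minimal sketch: illustrative metrics for auditing instruction language in a
# VLA dataset. Assumes `instructions` is a list of natural-language command
# strings extracted from a corpus; these metrics are generic examples, not the
# paper's exact measures.
from collections import Counter


def lexical_variety(instructions):
    """Type-token ratio over whitespace tokens: unique tokens / total tokens."""
    tokens = [tok.lower() for text in instructions for tok in text.split()]
    return len(set(tokens)) / len(tokens) if tokens else 0.0


def duplication_rate(instructions):
    """Fraction of instructions that are exact repeats of another instruction."""
    counts = Counter(instr.strip().lower() for instr in instructions)
    total = sum(counts.values())
    return (total - len(counts)) / total if total else 0.0


if __name__ == "__main__":
    sample = [
        "pick up the red block",
        "pick up the red block",
        "place the cup on the table",
    ]
    print(f"type-token ratio: {lexical_variety(sample):.2f}")
    print(f"duplicate rate:   {duplication_rate(sample):.2f}")
```

A fuller audit along the lines the paper describes would also measure semantic similarity (e.g., via sentence embeddings) and syntactic complexity (e.g., parse depth), which are omitted here for brevity.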
Similar Papers
Vision Language Action Models in Robotic Manipulation: A Systematic Review
Robotics
Robots understand what you say and see.
Efficient Vision-Language-Action Models for Embodied Manipulation: A Systematic Survey
Robotics
Makes robots understand and do tasks faster.