From Observation to Action: Latent Action-based Primitive Segmentation for VLA Pre-training in Industrial Settings
By: Jiajie Zhang, Sören Schwertfeger, Alexander Kleiner
Potential Business Impact:
Teaches robots to do jobs by watching videos.
We present a novel unsupervised framework that unlocks vast amounts of unlabeled human demonstration data from continuous industrial video streams for Vision-Language-Action (VLA) model pre-training. Our method first trains a lightweight motion tokenizer to encode motion dynamics, then employs an unsupervised action segmenter that leverages a "Latent Action Energy" metric to discover and segment semantically coherent action primitives. The pipeline outputs both segmented video clips and their corresponding latent action sequences, providing structured data directly suitable for VLA pre-training. Evaluations on public benchmarks and a proprietary electric motor assembly dataset demonstrate effective segmentation of key tasks performed by humans at workstations. Further clustering and quantitative assessment via a Vision-Language Model confirm the semantic coherence of the discovered action primitives. To our knowledge, this is the first fully automated end-to-end system for extracting and organizing VLA pre-training data from unstructured industrial videos, offering a scalable solution for embodied AI integration in manufacturing.
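The abstract does not define how the "Latent Action Energy" metric or the segmentation step is computed, so the Python sketch below is only an illustration under stated assumptions: the placeholder encode_motion function stands in for the learned motion tokenizer, the energy is approximated as a smoothed L2 norm of per-frame latent motion vectors, and primitives are cut at low-energy valleys. The function names, thresholds, and energy definition are assumptions for illustration, not the paper's method.

# Minimal sketch of an energy-based primitive segmentation stage.
# Assumptions (not from the paper): encode_motion is a stand-in for the
# learned motion tokenizer, and "Latent Action Energy" is approximated
# as the smoothed L2 norm of per-frame latent motion vectors.

import numpy as np

def encode_motion(frames: np.ndarray) -> np.ndarray:
    """Placeholder motion tokenizer: maps consecutive frame pairs to
    latent motion vectors. Here it just subsamples frame differences;
    the real system would use a trained encoder."""
    diffs = frames[1:] - frames[:-1]                   # (T-1, H, W)
    return diffs.reshape(diffs.shape[0], -1)[:, ::64]  # crude latent (T-1, D)

def latent_action_energy(latents: np.ndarray, window: int = 15) -> np.ndarray:
    """Assumed energy: L2 norm of each latent motion vector, smoothed
    with a moving-average window to suppress frame-level noise."""
    energy = np.linalg.norm(latents, axis=1)
    kernel = np.ones(window) / window
    return np.convolve(energy, kernel, mode="same")

def segment_primitives(energy: np.ndarray, rel_threshold: float = 0.3,
                       min_len: int = 30) -> list[tuple[int, int]]:
    """Cut the stream wherever energy dips below a fraction of its mean,
    keeping only segments at least min_len frames long."""
    active = energy > rel_threshold * energy.mean()
    segments, start = [], None
    for t, is_active in enumerate(active):
        if is_active and start is None:
            start = t
        elif not is_active and start is not None:
            if t - start >= min_len:
                segments.append((start, t))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.standard_normal((600, 64, 64)).astype(np.float32)  # dummy stream
    latents = encode_motion(video)
    energy = latent_action_energy(latents)
    print(segment_primitives(energy))  # list of (start_frame, end_frame) clips

The thresholding heuristic shown here is one simple way to turn a 1-D energy signal into candidate clips; the paper's actual segmenter and energy formulation may differ.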
Similar Papers
Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos
Robotics
Teaches robots to do tasks by watching people.
Latent Action Pretraining Through World Modeling
Robotics
Teaches robots to do tasks from watching videos.
RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation
CV and Pattern Recognition
Teaches robots to do tasks by watching videos.