Score: 3

RoboSSM: Scalable In-context Imitation Learning via State-Space Models

Published: September 24, 2025 | arXiv ID: 2509.19658v1

By: Youngju Yoo, Jiaheng Hu, Yifeng Zhu, and more

Potential Business Impact:

Robots can learn new tasks from just a few demonstrations, without retraining at deployment time.

Business Areas:
Robotics Hardware, Science and Engineering, Software

In-context imitation learning (ICIL) enables robots to learn tasks from prompts consisting of just a handful of demonstrations. By eliminating the need for parameter updates at deployment time, this paradigm supports few-shot adaptation to novel tasks. However, recent ICIL methods rely on Transformers, whose compute and memory costs grow quadratically with prompt length and which tend to underperform when handling longer prompts than those seen during training. In this work, we introduce RoboSSM, a scalable recipe for in-context imitation learning based on state-space models (SSMs). Specifically, RoboSSM replaces Transformers with Longhorn -- a state-of-the-art SSM that provides linear-time inference and strong extrapolation capabilities, making it well-suited for long-context prompts. We evaluate our approach on the LIBERO benchmark and compare it against strong Transformer-based ICIL baselines. Experiments show that RoboSSM extrapolates effectively to varying numbers of in-context demonstrations, yields high performance on unseen tasks, and remains robust in long-horizon scenarios. These results highlight the potential of SSMs as an efficient and scalable backbone for ICIL. Our code is available at https://github.com/youngjuY/RoboSSM.
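To make the core contrast concrete, the sketch below is a toy diagonal linear state-space layer illustrating why SSM-style recurrences process long demonstration prompts in linear time with a fixed-size state. It is not the paper's Longhorn architecture, and the class, parameter names (state_dim, decay), and dimensions are illustrative assumptions only.

```python
# Illustrative sketch only: a toy diagonal linear SSM layer, NOT the paper's
# Longhorn model. Shows the linear-time, constant-memory recurrence that
# motivates swapping Transformer attention for an SSM backbone in ICIL.
import numpy as np

class ToyDiagonalSSM:
    def __init__(self, input_dim: int, state_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Per-channel decay in (0, 1) keeps the recurrence stable (hypothetical values).
        self.decay = rng.uniform(0.5, 0.99, size=state_dim)
        self.B = rng.normal(scale=0.1, size=(state_dim, input_dim))  # input -> state
        self.C = rng.normal(scale=0.1, size=(input_dim, state_dim))  # state -> output

    def forward(self, tokens: np.ndarray) -> np.ndarray:
        """Process a (T, input_dim) prompt sequentially.

        Cost is O(T) in time and O(1) in memory with respect to prompt length T,
        unlike self-attention's O(T^2) pairwise interactions -- the property that
        makes long in-context demonstration prompts cheap at deployment time.
        """
        T, _ = tokens.shape
        h = np.zeros_like(self.decay)          # fixed-size state summarizing the prompt so far
        outputs = np.empty_like(tokens)
        for t in range(T):
            h = self.decay * h + self.B @ tokens[t]   # linear recurrence update
            outputs[t] = self.C @ h                    # readout at each step
        return outputs

if __name__ == "__main__":
    # A "prompt" of, say, 3 demonstrations x 50 steps each, flattened into one sequence.
    prompt = np.random.default_rng(1).normal(size=(150, 16))
    layer = ToyDiagonalSSM(input_dim=16, state_dim=32)
    out = layer.forward(prompt)
    print(out.shape)  # (150, 16): one output per prompt token
```

Because the state has a fixed size regardless of how many demonstrations are packed into the prompt, a recurrence like this can, in principle, be run over prompts longer than those seen during training, which is the extrapolation behavior the paper evaluates for its Longhorn-based policy.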

Country of Origin
🇺🇸 🇰🇷 United States, Republic of Korea (South Korea)

Repos / Data Links
https://github.com/youngjuY/RoboSSM
Page Count
8 pages

Category
Computer Science:
Robotics