CubeRobot: Grounding Language in Rubik's Cube Manipulation via Vision-Language Model

Published: March 25, 2025 | arXiv ID: 2503.19281v1

By: Feiyang Wang, Xiaomin Yu, Wangyu Wu

Potential Business Impact:

A robot solves Rubik's Cubes using vision and reasoning.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Solving Rubik's Cube tasks at a high level represents a notable milestone in human-level spatial imagination and logical reasoning. Traditional Rubik's Cube robots, which rely on complex vision systems and fixed algorithms, often struggle to adapt to complex and dynamic scenarios. To overcome this limitation, we introduce CubeRobot, a novel vision-language model (VLM) tailored for solving 3x3 Rubik's Cubes, empowering embodied agents with multimodal understanding and execution capabilities. We use the CubeCoT image dataset, which contains multiple-level tasks (43 subtasks in total) that humans are unable to handle, encompassing a variety of cube states. We incorporate a dual-loop VisionCoT architecture and a Memory Stream, a paradigm for extracting task-related features from VLM-generated planning queries, enabling CubeRobot to plan, decide, and reflect independently, and to manage high- and low-level Rubik's Cube tasks separately. In evaluation, CubeRobot achieved 100% accuracy on low-level Rubik's Cube restoration tasks, 100% on medium-level tasks, and 80% on high-level tasks.
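The abstract describes an outer planning loop, an inner execution loop, and a Memory Stream that accumulates task-related features and reflections. The following is a minimal, hypothetical sketch of that control flow only; all names (`MemoryStream`, `plan_high`, `execute_low`) and the stand-in subgoal logic are illustrative assumptions, not the paper's actual implementation, which uses a VLM for planning and perception.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStream:
    """Hypothetical store for task-related features extracted from planning queries."""
    entries: list = field(default_factory=list)

    def add(self, feature: str) -> None:
        self.entries.append(feature)

    def recall(self, k: int = 3) -> list:
        # Return the most recent task-related features for the next planning step.
        return self.entries[-k:]

def plan_high(cube_state: str, memory: MemoryStream) -> list:
    """Outer loop: decompose the task into subgoals (stand-in for VLM planning)."""
    memory.add(f"planned for state: {cube_state}")
    return ["solve white cross", "solve first two layers", "solve last layer"]

def execute_low(subgoal: str) -> bool:
    """Inner loop: map a subgoal to cube moves (stand-in: always succeeds)."""
    return True

def solve(cube_state: str) -> list:
    memory = MemoryStream()
    completed = []
    for subgoal in plan_high(cube_state, memory):
        if execute_low(subgoal):
            completed.append(subgoal)
        else:
            # Reflection step: record the failure so the next planning
            # query can condition on it.
            memory.add(f"reflection: failed {subgoal}")
    return completed

print(solve("scrambled"))
```

In this sketch the two loops are decoupled: the outer loop never manipulates the cube directly, and the inner loop never replans, which mirrors the separate management of high- and low-level tasks described above.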

Country of Origin
🇨🇳 China

Page Count
6 pages

Category
Computer Science:
Robotics