Enhancing Tactile-based Reinforcement Learning for Robotic Control

Published: October 24, 2025 | arXiv ID: 2510.21609v1

By: Elle Miller, Trevor McInroe, David Abel, and more

Potential Business Impact:

Robots equipped with touch sensing learn to grip and manipulate objects more dexterously and reliably.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Achieving safe, reliable real-world robotic manipulation requires agents to evolve beyond vision and incorporate tactile sensing to overcome sensory deficits and reliance on idealised state information. Despite its potential, the efficacy of tactile sensing in reinforcement learning (RL) remains inconsistent. We address this by developing self-supervised learning (SSL) methodologies to more effectively harness tactile observations, focusing on a scalable setup of proprioception and sparse binary contacts. We empirically demonstrate that sparse binary tactile signals are critical for dexterity, particularly for interactions that proprioceptive control errors do not register, such as decoupled robot-object motions. Our agents achieve superhuman dexterity in complex contact tasks (ball bouncing and Baoding ball rotation). Furthermore, we find that decoupling the SSL memory from the on-policy memory can improve performance. We release the Robot Tactile Olympiad (RoTO) benchmark to standardise and promote future research in tactile-based manipulation. Project page: https://elle-miller.github.io/tactile_rl
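To make the setup concrete, below is a minimal sketch (not the authors' code) of how proprioception could be fused with sparse binary contact signals and trained with a self-supervised auxiliary objective alongside the RL loss. All module names, dimensions, and the choice of a contact-reconstruction SSL task are illustrative assumptions.

```python
# Hypothetical sketch: an RL observation encoder fusing proprioception with
# sparse binary contacts, plus a self-supervised auxiliary loss.
# Names, dimensions, and the SSL task are assumptions, not the paper's code.
import torch
import torch.nn as nn

class TactileEncoder(nn.Module):
    """Encodes proprioception + binary contacts into a latent for the policy."""
    def __init__(self, proprio_dim=24, contact_dim=10, latent_dim=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(proprio_dim + contact_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # SSL head: predict the binary contacts back from the latent
        # (a simple reconstruction-style auxiliary task; an assumption here).
        self.ssl_head = nn.Linear(latent_dim, contact_dim)

    def forward(self, proprio, contacts):
        return self.fuse(torch.cat([proprio, contacts], dim=-1))

    def ssl_loss(self, proprio, contacts):
        # Auxiliary objective; the abstract reports that keeping its replay
        # memory decoupled from the on-policy memory can improve performance.
        z = self.forward(proprio, contacts)
        logits = self.ssl_head(z)
        return nn.functional.binary_cross_entropy_with_logits(logits, contacts)

# Usage sketch with random data standing in for robot observations.
enc = TactileEncoder()
proprio = torch.randn(32, 24)                  # joint positions/velocities
contacts = (torch.rand(32, 10) > 0.9).float()  # sparse binary touch signals
latent = enc(proprio, contacts)                # fed to the RL policy/value nets
aux = enc.ssl_loss(proprio, contacts)          # weighted and added to the RL loss
```

In this sketch the auxiliary loss would be drawn from its own buffer rather than the on-policy batch, mirroring the decoupling the abstract describes.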

Page Count
28 pages

Category
Computer Science:
Robotics