Adversarial Fine-tuning in Offline-to-Online Reinforcement Learning for Robust Robot Control

Published: October 15, 2025 | arXiv ID: 2510.13358v1

By: Shingo Ayabe, Hiroshi Kera, Kazuhiko Kawamoto

Potential Business Impact:

Helps robots learn to compensate for actuator faults and other action disturbances while they keep operating.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Offline reinforcement learning enables sample-efficient policy acquisition without risky online interaction, yet policies trained on static datasets remain brittle under action-space perturbations such as actuator faults. This study introduces an offline-to-online framework that trains policies on clean data and then performs adversarial fine-tuning, where perturbations are injected into executed actions to induce compensatory behavior and improve resilience. A performance-aware curriculum further adjusts the perturbation probability during training via an exponential-moving-average signal, balancing robustness and stability throughout the learning process. Experiments on continuous-control locomotion tasks demonstrate that the proposed method consistently improves robustness over offline-only baselines and converges faster than training from scratch. Matching the fine-tuning and evaluation conditions yields the strongest robustness to action-space perturbations, while the adaptive curriculum strategy mitigates the degradation of nominal performance observed with the linear curriculum strategy. Overall, the results show that adversarial fine-tuning enables adaptive and robust control under uncertain environments, bridging the gap between offline efficiency and online adaptability.
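The adaptive curriculum described above can be illustrated with a short sketch: during online fine-tuning, the probability of injecting an action-space perturbation is raised while a smoothed performance signal (an exponential moving average of episode returns) is stable or improving, and lowered when it drops. The class names, step sizes, and exact update rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class AdversarialCurriculum:
    """Adjusts the perturbation probability from an exponential moving
    average (EMA) of episode returns (illustrative sketch)."""

    def __init__(self, p_min=0.0, p_max=0.5, ema_beta=0.9, step=0.05):
        self.p = p_min                 # current perturbation probability
        self.p_min, self.p_max = p_min, p_max
        self.ema_beta = ema_beta       # EMA smoothing factor (assumed value)
        self.step = step               # curriculum step size (assumed value)
        self.ema_return = None         # smoothed performance signal
        self.prev_ema = None

    def update(self, episode_return):
        # Update the smoothed performance signal.
        if self.ema_return is None:
            self.ema_return = episode_return
        else:
            self.ema_return = (self.ema_beta * self.ema_return
                               + (1.0 - self.ema_beta) * episode_return)
        # Increase perturbation probability while performance holds up;
        # back off when the EMA drops, protecting nominal performance.
        if self.prev_ema is not None:
            if self.ema_return >= self.prev_ema:
                self.p = min(self.p + self.step, self.p_max)
            else:
                self.p = max(self.p - self.step, self.p_min)
        self.prev_ema = self.ema_return
        return self.p

def perturb_action(action, prob, noise_scale=0.3, rng=np.random.default_rng()):
    """With probability `prob`, inject an action-space perturbation
    (e.g., simulating an actuator fault) into the executed action."""
    if rng.random() < prob:
        return action + rng.normal(0.0, noise_scale, size=action.shape)
    return action
```

In a fine-tuning loop, `perturb_action` would be applied to each action the policy executes, and `AdversarialCurriculum.update` would be called once per episode with the episode return to set the perturbation probability for the next episode.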

Country of Origin
🇯🇵 Japan

Page Count
16 pages

Category
Computer Science:
Robotics