Deep Reinforcement Learning via Object-Centric Attention
By: Jannis Blüml, Cedric Derstroff, Bjarne Gregori, and more
Potential Business Impact:
Helps game-playing agents learn new levels faster.
Deep reinforcement learning agents trained on raw pixel inputs often fail to generalize beyond their training environments, relying on spurious correlations and irrelevant background details. Object-centric agents have recently emerged to address this issue, but they require representations tailored to each task's specifications: unlike deep pixel-based agents, no single object-centric architecture can be applied to any environment. Inspired by principles of cognitive science and Occam's Razor, we introduce Object-Centric Attention via Masking (OCCAM), which exploits the object-centric inductive bias by selectively preserving task-relevant entities while filtering out irrelevant visual information. Empirical evaluations on Atari benchmarks demonstrate that OCCAM significantly improves robustness to novel perturbations and reduces sample complexity, while matching or exceeding the performance of conventional pixel-based RL. These results suggest that structured abstraction can enhance generalization without requiring explicit symbolic representations or domain-specific object-extraction pipelines.
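The summary above describes masking only at a high level. Below is a minimal sketch of what object-centric masking of an Atari frame could look like, assuming object masks are supplied by some upstream detector; the function `apply_object_masks` and the dummy masks are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def apply_object_masks(frame: np.ndarray, masks: list) -> np.ndarray:
    """Keep only pixels covered by task-relevant object masks.

    frame: (H, W, C) uint8 Atari frame.
    masks: list of (H, W) boolean masks, one per detected object
           (assumed to come from an upstream object detector).
    Returns a frame where everything outside the union of masks is zeroed.
    """
    if not masks:
        # No objects detected: return a blank observation.
        return np.zeros_like(frame)
    # Combine per-object masks into a single (H, W) foreground mask.
    union = np.logical_or.reduce(masks)
    # Zero out all background pixels; only object pixels survive.
    return frame * union[..., None].astype(frame.dtype)

# Hypothetical usage with a random frame and two hand-placed object masks.
frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
ball = np.zeros((210, 160), dtype=bool)
ball[100:105, 80:84] = True
paddle = np.zeros((210, 160), dtype=bool)
paddle[190:195, 70:90] = True
masked = apply_object_masks(frame, [ball, paddle])
```

Under this reading, the masked frame (rather than the raw frame) is what a standard pixel-based policy network would consume, so the policy never sees background pixels and cannot latch onto spurious correlations there.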
Similar Papers
Are We Done with Object-Centric Learning?
CV and Pattern Recognition
Teaches computers to see objects separately.
Object-Centric Representations Improve Policy Generalization in Robot Manipulation
Robotics
Robots learn to grab things better by seeing objects.
Objects matter: object-centric world models improve reinforcement learning in visually complex environments
Machine Learning (CS)
Helps robots learn faster by focusing on important things.