A Champion-level Vision-based Reinforcement Learning Agent for Competitive Racing in Gran Turismo 7
By: Hojoon Lee, Takuma Seno, Jun Jet Tai, and more
Potential Business Impact:
Cars learn to race using only cameras.
Deep reinforcement learning has achieved superhuman racing performance in high-fidelity simulators like Gran Turismo 7 (GT7). It typically relies on global features that require instrumentation external to the car, such as precise localization of agents and opponents, limiting real-world applicability. To address this limitation, we introduce a vision-based autonomous racing agent that relies solely on ego-centric camera views and onboard sensor data, eliminating the need for precise localization during inference. This agent employs an asymmetric actor-critic framework: the actor uses a recurrent neural network over the car's local sensor data to retain a memory of track layouts and opponent positions, while the critic accesses the global features during training. Evaluated in GT7, our agent consistently outperforms GT7's built-in drivers. To our knowledge, this work presents the first vision-based autonomous racing agent to demonstrate champion-level performance in competitive racing scenarios.
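To make the asymmetric actor-critic idea concrete, the sketch below shows one way such an architecture could look in PyTorch. All module names, layer sizes, and the choice of a CNN encoder feeding an LSTM are illustrative assumptions; the summary above only states that the actor consumes ego-centric camera views and onboard sensor data through a recurrent network, while the critic sees privileged global features during training.

    # Minimal PyTorch sketch of an asymmetric actor-critic for vision-based racing.
    # Layer sizes, input shapes, and feature names are illustrative assumptions,
    # not the paper's exact architecture.
    import torch
    import torch.nn as nn

    class RecurrentVisionActor(nn.Module):
        """Actor: ego-centric camera frames + onboard sensors -> recurrent policy."""

        def __init__(self, sensor_dim=16, action_dim=2, hidden_dim=256):
            super().__init__()
            # Small CNN encoder for a single 64x64 RGB ego-centric frame (assumed size).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            cnn_out = self.encoder(torch.zeros(1, 3, 64, 64)).shape[1]
            # The LSTM lets the policy retain track layout and opponent context over time.
            self.lstm = nn.LSTM(cnn_out + sensor_dim, hidden_dim, batch_first=True)
            self.policy_head = nn.Linear(hidden_dim, action_dim)  # e.g. steering, throttle

        def forward(self, frames, sensors, state=None):
            # frames: (B, T, 3, 64, 64), sensors: (B, T, sensor_dim)
            B, T = frames.shape[:2]
            feats = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
            x, state = self.lstm(torch.cat([feats, sensors], dim=-1), state)
            return torch.tanh(self.policy_head(x)), state

    class PrivilegedCritic(nn.Module):
        """Critic: global features (e.g. precise agent/opponent localization),
        available during training only, never at inference."""

        def __init__(self, global_dim=64, hidden_dim=256):
            super().__init__()
            self.value = nn.Sequential(
                nn.Linear(global_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, global_features):
            return self.value(global_features).squeeze(-1)

    if __name__ == "__main__":
        actor, critic = RecurrentVisionActor(), PrivilegedCritic()
        frames = torch.zeros(2, 8, 3, 64, 64)   # batch of 2 rollouts, 8 timesteps
        sensors = torch.zeros(2, 8, 16)         # onboard sensor readings
        globals_ = torch.zeros(2, 8, 64)        # privileged state (training only)
        actions, _ = actor(frames, sensors)
        values = critic(globals_)
        print(actions.shape, values.shape)      # (2, 8, 2), (2, 8)

The key design point this sketch illustrates is that nothing in the actor depends on the privileged global features, so the trained policy can run from onboard observations alone at inference time.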
Similar Papers
Drive Fast, Learn Faster: On-Board RL for High Performance Autonomous Racing
Robotics
Teaches race cars to drive themselves faster.
Vision based driving agent for race car simulation environments
Robotics
Car learns to drive fast, using all tire grip.
On learning racing policies with reinforcement learning
Robotics
Teaches race cars to drive themselves faster.