Vision based driving agent for race car simulation environments
By: Gergely Bári, László Palkovics
Potential Business Impact:
The car learns to drive fast while using all available tire grip.
In recent years, autonomous driving has become a popular field of study. Since control at the tire grip limit is essential in emergency situations, algorithms developed for racecars are useful for road cars as well. This paper examines the use of Deep Reinforcement Learning (DRL) to solve the problem of grip-limit driving in a simulated environment. The Proximal Policy Optimization (PPO) method is used to train an agent that controls the steering wheel and pedals of the vehicle using only visual inputs, achieving professional human lap times. The paper formulates time-optimal driving on a race track as a deep reinforcement learning problem and explains the chosen observations, actions, and reward functions. The results demonstrate human-like learning and driving behavior that utilizes the maximum tire grip potential.
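To make the problem formulation concrete, the sketch below shows how a vision-based PPO agent of this kind could be set up in code. It is not the authors' implementation: the RacingCameraEnv class, the 84x84 grayscale camera observation, the two-dimensional steering/pedal action space, the progress-based reward, and the use of gymnasium and stable-baselines3 are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): training a vision-based driving agent
# with PPO, assuming a Gym-style racing environment that returns camera images
# as observations and accepts continuous steering/pedal actions.
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO


class RacingCameraEnv(gym.Env):
    """Hypothetical wrapper around a race car simulator (placeholder dynamics)."""

    def __init__(self):
        # Observation: an 84x84 grayscale camera frame (illustrative resolution).
        self.observation_space = gym.spaces.Box(
            low=0, high=255, shape=(84, 84, 1), dtype=np.uint8
        )
        # Actions: steering in [-1, 1] and a combined throttle/brake pedal in [-1, 1].
        self.action_space = gym.spaces.Box(
            low=-1.0, high=1.0, shape=(2,), dtype=np.float32
        )

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self._render_camera(), {}

    def step(self, action):
        steering, pedal = float(action[0]), float(action[1])
        # A call into the simulator would go here. A progress-based reward
        # encourages covering track distance quickly, i.e. minimizing lap time.
        progress = self._advance_simulation(steering, pedal)
        reward = progress
        terminated = False  # e.g. the car leaves the track
        truncated = False   # e.g. an episode time limit is reached
        return self._render_camera(), reward, terminated, truncated, {}

    def _advance_simulation(self, steering, pedal):
        return 0.0  # placeholder for the simulator step

    def _render_camera(self):
        return np.zeros((84, 84, 1), dtype=np.uint8)  # placeholder camera frame


if __name__ == "__main__":
    env = RacingCameraEnv()
    # CnnPolicy processes the image observations; hyperparameters are illustrative.
    model = PPO("CnnPolicy", env, verbose=1)
    model.learn(total_timesteps=1_000_000)
```

In this kind of setup, the convolutional policy maps raw pixels directly to continuous steering and pedal commands, and the reward term is what steers learning toward time-optimal, grip-limit driving.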
Similar Papers
Self driving algorithm for an active four wheel drive racecar
Robotics
Teaches race cars to drive faster and safer.
On learning racing policies with reinforcement learning
Robotics
Teaches race cars to drive themselves faster.
A Practical Introduction to Deep Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn and make smart choices.