Learning Dolly-In Filming From Demonstration Using a Ground-Based Robot
By: Philip Lorimer, Alan Hunter, Wenbin Li
Potential Business Impact:
Robot cameras learn to film like humans.
Cinematic camera control demands a balance of precision and artistry - qualities that are difficult to encode through handcrafted reward functions. While reinforcement learning (RL) has been applied to robotic filmmaking, its reliance on bespoke rewards and extensive tuning limits creative usability. We propose a Learning from Demonstration (LfD) approach using Generative Adversarial Imitation Learning (GAIL) to automate dolly-in shots with a free-roaming, ground-based filming robot. Expert trajectories are collected via joystick teleoperation in simulation, capturing smooth, expressive motion without explicit objective design. Trained exclusively on these demonstrations, our GAIL policy outperforms a PPO baseline in simulation, achieving higher rewards, faster convergence, and lower variance. Crucially, it transfers directly to a real-world robot without fine-tuning, achieving more consistent framing and subject alignment than a prior TD3-based method. These results show that LfD offers a robust, reward-free alternative to RL in cinematic domains, enabling real-time deployment with minimal technical effort. Our pipeline brings intuitive, stylized camera control within reach of creative professionals, bridging the gap between artistic intent and robotic autonomy.
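The core ingredient of the GAIL approach described above is a discriminator that is trained to tell expert state-action pairs from policy rollouts, with the policy rewarded for fooling it, so no handcrafted reward is needed. The sketch below illustrates that idea on a toy 1-D stand-in for a dolly-in shot (distance-to-subject state, forward-velocity action); the expert controller, features, and learning rates are all hypothetical assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for teleoperated demos: the expert slows smoothly
# as it nears the subject (action ~ -0.5 * distance), while the untrained
# policy moves at random velocities.
def expert_batch(n=256):
    s = rng.uniform(0.5, 3.0, size=n)          # distance to subject
    a = -0.5 * s + 0.02 * rng.normal(size=n)   # smooth dolly-in approach
    return np.stack([s, a], axis=1)

def policy_batch(n=256):
    s = rng.uniform(0.5, 3.0, size=n)
    a = rng.normal(size=n)                     # untrained: random velocity
    return np.stack([s, a], axis=1)

def features(x):                               # simple quadratic features
    s, a = x[:, 0], x[:, 1]
    return np.stack([np.ones_like(s), s, a, s * a, s**2, a**2], axis=1)

w = np.zeros(6)                                # logistic discriminator weights

def d_prob(x):                                 # D(s, a) = P(expert | s, a)
    return 1.0 / (1.0 + np.exp(-features(x) @ w))

# Train D by gradient ascent on the GAIL discriminator objective:
# maximize E_expert[log D] + E_policy[log(1 - D)].
for _ in range(500):
    xe, xp = expert_batch(), policy_batch()
    ge = features(xe).T @ (1.0 - d_prob(xe))   # grad of log D on expert
    gp = features(xp).T @ (-d_prob(xp))        # grad of log(1-D) on policy
    w += 0.01 * (ge + gp) / 256

# Surrogate reward handed to the policy optimizer: large where the
# discriminator believes the motion is expert-like.
def gail_reward(x):
    return -np.log(1.0 - d_prob(x) + 1e-8)
```

In the full pipeline this reward would drive a policy-gradient update (the paper's GAIL policy takes the role the PPO baseline plays), so smooth, expert-like approaches earn high reward without anyone specifying a framing objective by hand.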
Similar Papers
Reinforcement Learning of Dolly-In Filming Using a Ground-Based Robot
Robotics
Makes movie robots move camera smoothly.
Robot Policy Transfer with Online Demonstrations: An Active Reinforcement Learning Approach
Robotics
Robots learn new jobs faster with live help.
Learning and generalization of robotic dual-arm manipulation of boxes from demonstrations via Gaussian Mixture Models (GMMs)
Robotics
Robots learn new tasks from few examples.