Score: 2

The Rosario Dataset v2: Multimodal Dataset for Agricultural Robotics

Published: August 29, 2025 | arXiv ID: 2508.21635v1

By: Nicolas Soncini, Javier Cremona, Erica Vidal, and more

Potential Business Impact:

Helps farm robots perceive, localize, and navigate in crop fields.

Business Areas:
Robotics Hardware, Science and Engineering, Software

We present a multi-modal dataset collected in a soybean crop field, comprising over two hours of recorded data from sensors such as a stereo infrared camera, color camera, accelerometer, gyroscope, magnetometer, GNSS (Single Point Positioning, Real-Time Kinematic and Post-Processed Kinematic), and wheel odometry. This dataset captures key challenges inherent to robotics in agricultural environments, including variations in natural lighting, motion blur, rough terrain, and long, perceptually aliased sequences. By addressing these complexities, the dataset aims to support the development and benchmarking of advanced algorithms for localization, mapping, perception, and navigation in agricultural robotics. The platform and data collection system are designed to meet the key requirements for evaluating multi-modal SLAM systems, including hardware synchronization of sensors, 6-DOF ground truth, and loops on long trajectories. We run multimodal state-of-the-art SLAM methods on the dataset, showcasing their existing limitations when applied in agricultural settings. The dataset and utilities to work with it are released at https://cifasis.github.io/rosariov2/.
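As a rough illustration of how one might start exploring a recording, here is a minimal sketch assuming the sequences are distributed as ROS 1 bag files; the abstract does not confirm the distribution format, and the file name and topic names below are hypothetical placeholders, not the dataset's actual layout.

```python
# Sketch: inspect one recording, ASSUMING it ships as a ROS 1 bag file.
# The bag path and topic names are hypothetical and would need to be
# replaced with the actual ones documented on the dataset page.
import rosbag

BAG_PATH = "rosario_v2_sequence01.bag"  # hypothetical file name

with rosbag.Bag(BAG_PATH) as bag:
    # List recorded topics and message counts to see which sensors are present.
    topics = bag.get_type_and_topic_info().topics
    for topic, meta in topics.items():
        print(f"{topic}: {meta.msg_type} ({meta.message_count} msgs)")

    # Iterate over IMU and GNSS messages (topic names are assumptions).
    for topic, msg, t in bag.read_messages(topics=["/imu/data", "/gnss/fix"]):
        if topic == "/imu/data":
            # Expected sensor_msgs/Imu: print linear acceleration in m/s^2.
            a = msg.linear_acceleration
            print(f"{t.to_sec():.3f} accel=({a.x:.2f}, {a.y:.2f}, {a.z:.2f})")
        else:
            # Expected sensor_msgs/NavSatFix: print latitude/longitude.
            print(f"{t.to_sec():.3f} lat={msg.latitude:.6f} lon={msg.longitude:.6f}")
```

A first pass like this (topic inventory, then a stream of IMU and GNSS samples) is typically enough to check synchronization and coverage before feeding the data to a SLAM pipeline.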


Page Count
19 pages

Category
Computer Science:
Robotics