Event-Based Crossing Dataset (EBCD)
By: Joey Mulé, Dhandeep Challagundla, Rachit Saini, et al.
Potential Business Impact:
Helps cameras see better in changing light.
Event-based vision revolutionizes traditional image sensing by capturing asynchronous intensity variations rather than static frames, enabling ultrafast temporal resolution, sparse data encoding, and enhanced motion perception. While this paradigm offers significant advantages, conventional event-based datasets impose a fixed thresholding constraint to determine pixel activations, severely limiting adaptability to real-world environmental fluctuations. Lower thresholds retain finer details but introduce pervasive noise, whereas higher thresholds suppress extraneous activations at the expense of crucial object information. To mitigate these constraints, we introduce the Event-Based Crossing Dataset (EBCD), a comprehensive dataset tailored for pedestrian and vehicle detection in dynamic outdoor environments, incorporating a multi-thresholding framework to refine event representations. By capturing event-based images at ten distinct threshold levels (4, 8, 12, 16, 20, 30, 40, 50, 60, and 75), this dataset facilitates an extensive assessment of object detection performance under varying conditions of sparsity and noise suppression. We benchmark state-of-the-art detection architectures, including YOLOv4, YOLOv7, EfficientDet-b0, MobileNet-v1, and Histogram of Oriented Gradients (HOG), to examine the nuanced impact of threshold selection on detection performance. By offering a systematic approach to threshold variation, we foresee that EBCD fosters a more adaptive evaluation of event-based object detection, aligning neuromorphic vision with real-world scene dynamics. The dataset is publicly available to propel further advancements in low-latency, high-fidelity neuromorphic imaging: https://ieee-dataport.org/documents/event-based-crossing-dataset-ebcd
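The multi-threshold idea described above can be sketched in code: a pixel "fires" an event when the absolute intensity change between consecutive frames exceeds a threshold, and the same change map can be re-thresholded at each of the ten levels listed in the abstract. This is an illustrative approximation under assumed linear intensity differencing; the dataset's exact event-generation pipeline may differ (e.g., log-intensity changes on a real sensor).

```python
import numpy as np

# The ten threshold levels reported for EBCD.
THRESHOLDS = [4, 8, 12, 16, 20, 30, 40, 50, 60, 75]

def event_frames(prev_frame, curr_frame, thresholds=THRESHOLDS):
    """Simulate thresholded event images from two consecutive grayscale
    frames. A pixel emits an ON event (+1) when its intensity rises by at
    least the threshold, an OFF event (-1) when it falls by at least the
    threshold, and nothing otherwise. Returns one event image per
    threshold, showing how higher thresholds yield sparser activations.
    """
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    frames = {}
    for t in thresholds:
        events = np.zeros_like(diff, dtype=np.int8)
        events[diff >= t] = 1    # ON events: intensity increased
        events[diff <= -t] = -1  # OFF events: intensity decreased
        frames[t] = events
    return frames
```

Running this over a video sequence makes the trade-off in the abstract concrete: the threshold-4 images keep fine object detail plus sensor noise, while the threshold-75 images retain only the strongest motion edges.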
Similar Papers
DVS-PedX: Synthetic-and-Real Event-Based Pedestrian Dataset
CV and Pattern Recognition
Helps cars see people better in bad weather.
TUMTraf EMOT: Event-Based Multi-Object Tracking Dataset and Baseline for Traffic Scenarios
CV and Pattern Recognition
Helps cars see better in dark and fast conditions.
Near-Memory Architecture for Threshold-Ordinal Surface-Based Corner Detection of Event Cameras
Hardware Architecture
Makes cameras see corners faster and use less power.