Real-Time Image Processing Algorithms for Embedded Systems
By: Soundes Oumaima Boufaida, Abdemadjid Benmachiche, Majda Maatallah
Embedded vision systems need efficient and robust image processing algorithms to perform in real time on resource-constrained hardware. This research investigates image processing algorithms, specifically edge detection, corner detection, and blob detection, implemented on embedded processors including DSPs and FPGAs. To address the latency, accuracy, and power-consumption challenges noted in the image processing literature, optimized algorithm architectures and quantization techniques are employed. In addition, inter-frame redundancy removal and adaptive frame averaging are used to improve throughput while preserving acceptable image quality. Simulations and hardware trials of the proposed approaches show marked improvements in processing speed and energy efficiency compared with conventional implementations. These advances pave the way for scalable, inexpensive embedded imaging systems in the automotive, surveillance, and robotics sectors, and underscore the benefit of co-designing algorithms and hardware architectures for practical real-time embedded vision applications.
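The abstract's inter-frame redundancy removal can be illustrated with a minimal sketch: only image blocks whose content changed since the previous frame are re-processed by the expensive kernel (edge, corner, or blob detection). The block size, threshold, and mean-absolute-difference change metric below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def process_changed_blocks(prev, curr, block=16, thresh=8.0, op=None):
    """Re-run an expensive per-block operation only where the current
    frame differs from the previous one (hypothetical sketch; block
    size, threshold, and change metric are illustrative choices)."""
    h, w = curr.shape
    changed = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            p = prev[y:y + block, x:x + block].astype(np.int16)
            c = curr[y:y + block, x:x + block].astype(np.int16)
            # Mean absolute difference as a cheap change detector.
            if np.abs(c - p).mean() > thresh:
                changed.append((y, x))
                if op is not None:
                    op(y, x)  # expensive kernel runs only here
    return changed  # top-left corners of blocks actually processed
```

On largely static scenes (surveillance, fixed-camera robotics) most blocks are skipped, which is where the throughput gain comes from; a hardware version would typically compute the per-block difference in a streaming pipeline rather than in nested loops.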
Similar Papers
Real-time Object Detection and Associated Hardware Accelerators Targeting Autonomous Vehicles: A Review
Hardware Architecture
Reviews hardware accelerators that help self-driving cars detect objects faster and more safely.
Boosting performance of computer vision applications through embedded GPUs on the edge
CV and Pattern Recognition
Shows how embedded GPUs at the edge speed up computer vision applications.
Real Time FPGA Based CNNs for Detection, Classification, and Tracking in Autonomous Systems: State of the Art Designs and Optimizations
Hardware Architecture
Surveys FPGA-based CNN designs that detect, classify, and track objects faster and at lower power.