Red grape detection with accelerated artificial neural networks in the FPGA's programmable logic
By: Sandro Costa Magalhães, Marco Almeida, Filipe Neves dos Santos, and more
Potential Business Impact:
Makes robots see and move much faster.
Robots usually slow down while moving to scan for and detect objects. Additionally, the robot's camera is often configured with a low frame rate to match the speed of the detection algorithms. These constraints limit the robot during task execution and exploration, increasing task execution time. AMD developed the Vitis-AI framework to deploy detection algorithms onto FPGAs, but this tool does not fully exploit the FPGA's programmable logic (PL). In this work, we use the FINN framework to deploy three ANNs inside an FPGA's PL: MobileNet v1 with 4-bit quantisation, CNV with 2-bit quantisation, and CNV with 1-bit quantisation (a binarised neural network, BNN). The models were trained on RG2C, a self-acquired dataset released in open access. MobileNet v1 performed best, reaching a success rate of 98% and an inference speed of 6611 FPS. This work demonstrates that FPGAs can accelerate ANNs enough to make them suitable for attention mechanisms.
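To make the low-bit quantisation step concrete, below is a minimal sketch of how a quantisation-aware network is typically defined with Brevitas, the PyTorch library commonly used to train models for FINN deployment. The layer sizes, class count, and input resolution are illustrative assumptions, not the paper's actual MobileNet v1 or CNV topologies; only the bit-width choices (4, 2, or 1 bit) mirror the variants described above.

```python
# Hypothetical sketch: a tiny low-bit quantised CNN in Brevitas (PyTorch).
# Not the paper's model; it only illustrates how weight/activation bit widths
# are set before exporting to FINN for FPGA PL deployment.
import torch
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantReLU, QuantIdentity


class TinyQuantNet(nn.Module):
    def __init__(self, bit_width: int = 4, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Quantise the input activations to the chosen bit width.
            QuantIdentity(bit_width=bit_width),
            # Convolution with low-bit weights (4, 2, or 1 bit).
            QuantConv2d(3, 16, 3, padding=1, bias=False,
                        weight_bit_width=bit_width),
            nn.BatchNorm2d(16),
            QuantReLU(bit_width=bit_width),
            QuantConv2d(16, 32, 3, padding=1, bias=False,
                        weight_bit_width=bit_width),
            nn.BatchNorm2d(32),
            QuantReLU(bit_width=bit_width),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)


# Dummy forward pass to check shapes.
model = TinyQuantNet(bit_width=4)
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 2])
```

After quantisation-aware training, such a model would normally be exported to (Q)ONNX and compiled by FINN into a streaming dataflow accelerator mapped onto the FPGA's PL; the exact export call depends on the Brevitas version in use.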
Similar Papers
Real Time FPGA Based CNNs for Detection, Classification, and Tracking in Autonomous Systems: State of the Art Designs and Optimizations
Hardware Architecture
Makes cameras understand things faster and with less power.
Efficient FPGA-accelerated Convolutional Neural Networks for Cloud Detection on CubeSats
Signal Processing
Makes small satellites see clouds in space.
Real-Time Semantic Segmentation of Aerial Images Using an Embedded U-Net: A Comparison of CPU, GPU, and FPGA Workflows
CV and Pattern Recognition
Lets drones quickly understand what they see.