Analyzing Transformer Models and Knowledge Distillation Approaches for Image Captioning on Edge AI
By: Wing Man Casca Kwok, Yip Chiu Tung, Kunal Bhagchandani
Potential Business Impact:
Helps robots on small devices understand pictures faster.
Edge computing decentralizes processing power to the network edge, enabling real-time AI-driven decision-making in IoT applications. In industrial automation settings such as robotics and rugged edge AI, real-time perception and intelligence are critical for autonomous operations. Deploying transformer-based image captioning models at the edge can enhance machine perception, improve scene understanding for autonomous robots, and aid industrial inspection. However, edge and IoT devices are often constrained in computational resources, frequently to preserve physical agility, yet they face strict response-time requirements; traditional deep learning models can be too large and computationally demanding for them. In this research, we present findings on transformer-based image captioning models that operate effectively on edge devices. By evaluating resource-efficient transformer models and applying knowledge distillation techniques, we demonstrate that inference can be accelerated on resource-constrained devices while maintaining model performance.
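To make the distillation step concrete, below is a minimal PyTorch-style sketch of the standard logit-distillation loss in the style of Hinton et al. (2015), where a compact student model is trained to match a larger teacher. The abstract does not specify the exact objective used, so the function name, temperature, and alpha weighting here are illustrative assumptions rather than the paper's method.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=2.0, alpha=0.5):
    # student_logits, teacher_logits: (batch * seq_len, vocab_size)
    # targets: (batch * seq_len,) ground-truth caption token ids
    # temperature and alpha are illustrative hyperparameters,
    # not values taken from the paper.

    # Soften both distributions, then measure how far the compact
    # student's predictions are from the large teacher's.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    kd = kd * temperature ** 2  # standard gradient rescaling

    # Usual cross-entropy against the reference caption tokens.
    ce = F.cross_entropy(student_logits, targets)

    return alpha * kd + (1.0 - alpha) * ce

At deployment time only the smaller student is shipped to the edge device; the teacher is needed only during training.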
Similar Papers
A Novel Lightweight Transformer with Edge-Aware Fusion for Remote Sensing Image Captioning
CV and Pattern Recognition
Makes satellite pictures tell better stories.
Empowering Edge Intelligence: A Comprehensive Survey on On-Device AI Models
Artificial Intelligence
Puts smart computer brains on your phone.
Attention-based transformer models for image captioning across languages: An in-depth survey and evaluation
CV and Pattern Recognition
Makes computers describe pictures in many languages.