Score: 1

Analyzing Transformer Models and Knowledge Distillation Approaches for Image Captioning on Edge AI

Published: June 4, 2025 | arXiv ID: 2506.03607v1

By: Wing Man Casca Kwok, Yip Chiu Tung, Kunal Bhagchandani

Potential Business Impact:

Enables robots and other small devices to interpret images faster.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Edge computing decentralizes processing power to the network edge, enabling real-time AI-driven decision-making in IoT applications. In industrial automation settings such as robotics and rugged edge AI, real-time perception and intelligence are critical for autonomous operation. Deploying transformer-based image captioning models at the edge can enhance machine perception, improve scene understanding for autonomous robots, and aid industrial inspection. However, edge and IoT devices are typically constrained in computational resources to preserve physical agility, while still facing strict response-time requirements. Traditional deep learning models are often too large and computationally demanding for such devices. In this research, we present findings on transformer-based image captioning models that operate effectively on edge devices. By evaluating resource-efficient transformer models and applying knowledge distillation techniques, we demonstrate that inference can be accelerated on resource-constrained devices while maintaining model performance.
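
To make the knowledge distillation idea concrete, below is a minimal PyTorch sketch of a token-level distillation loss for an image-captioning transformer: a smaller student is trained to match the softened output distribution of a larger teacher while still fitting the ground-truth caption tokens. The function name, temperature, weighting factor, and the toy tensors are illustrative assumptions, not the paper's exact models or hyperparameters.

```python
# Minimal sketch of knowledge distillation for an image-captioning
# transformer (assumed setup, not the paper's exact configuration).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target_ids,
                      temperature=2.0, alpha=0.5, pad_id=0):
    """Blend a soft-target KL loss (teacher -> student) with the usual
    cross-entropy against ground-truth caption tokens."""
    # Soft targets: match the student's softened distribution to the teacher's.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard targets: standard token-level cross-entropy, ignoring padding.
    ce = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        target_ids.reshape(-1),
        ignore_index=pad_id,
    )
    return alpha * kl + (1.0 - alpha) * ce

# Toy usage with random tensors standing in for model outputs.
batch, seq_len, vocab = 4, 16, 1000
student_logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
teacher_logits = torch.randn(batch, seq_len, vocab)      # frozen teacher outputs
target_ids = torch.randint(1, vocab, (batch, seq_len))   # ground-truth caption tokens

loss = distillation_loss(student_logits, teacher_logits, target_ids)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```

In practice, the student would be a smaller captioning transformer whose logits replace `student_logits`, while the frozen teacher supplies `teacher_logits`; only the student's parameters are updated, which is what allows faster inference on the edge device after training.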

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Computer Vision and Pattern Recognition