Onboard Optimization and Learning: A Survey
By: Monirul Islam Pavel, Siyi Hu, Mahardhika Pratama, et al.
Potential Business Impact:
Lets small computers learn and think by themselves.
Onboard learning is a transformative approach in edge AI, enabling real-time data processing, decision-making, and adaptive model training directly on resource-constrained devices without relying on centralized servers. This paradigm is crucial for applications demanding low latency, enhanced privacy, and energy efficiency. However, onboard learning faces challenges such as limited computational resources, high inference costs, and security vulnerabilities. This survey explores a comprehensive range of methodologies that address these challenges, focusing on techniques that optimize model efficiency, accelerate inference, and support collaborative learning across distributed devices. Approaches for reducing model complexity, improving inference speed, and ensuring privacy-preserving computation are examined alongside emerging strategies that enhance scalability and adaptability in dynamic environments. By bridging advancements in hardware-software co-design, model compression, and decentralized learning, this survey provides insights into the current state of onboard learning to enable robust, efficient, and secure AI deployment at the edge.
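Among the model-compression techniques the survey covers, post-training quantization is one of the most common ways to shrink a model for resource-constrained devices: float weights are mapped to low-bit integers and reconstructed at inference time. A minimal sketch of 8-bit affine quantization, with all names illustrative rather than drawn from any particular framework:

```python
# Minimal sketch of post-training affine (scale + zero-point) quantization,
# one model-compression technique surveyed. Function names are illustrative.

def quantize(weights, num_bits=8):
    """Map float weights to signed num_bits-wide integer codes."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w_min, w_max = min(weights), max(weights)
    # One scale for the whole tensor; guard against a constant tensor.
    scale = (w_max - w_min) / (qmax - qmin) or 1.0
    zero_point = round(qmin - w_min / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the integer codes."""
    return [(qi - zero_point) * scale for qi in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
# Reconstruction error is bounded by roughly one quantization step (scale).
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, recovered))
```

The storage saving comes from keeping only the int8 codes plus one scale and zero-point per tensor (4x smaller than float32), at the cost of a reconstruction error on the order of the scale.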
Similar Papers
Empowering Edge Intelligence: A Comprehensive Survey on On-Device AI Models
Artificial Intelligence
Puts smart computer brains on your phone.
Rethinking Inference Placement for Deep Learning across Edge and Cloud Platforms: A Multi-Objective Optimization Perspective and Future Directions
Distributed, Parallel, and Cluster Computing
Makes smart apps run faster and safer.
To Offload or Not To Offload: Model-driven Comparison of Edge-native and On-device Processing
Distributed, Parallel, and Cluster Computing
Decides when the phone should do the work itself or send it elsewhere.