Balancing Robustness and Efficiency in Embedded DNNs Through Activation Function Selection

Published: April 7, 2025 | arXiv ID: 2504.05119v2

By: Jon Gutiérrez-Zaballa, Koldo Basterretxea, Javier Echanobe

Potential Business Impact:

Makes self-driving cars safer against tiny radiation-induced hardware errors.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

Machine learning-based embedded systems for safety-critical applications, such as aerospace and autonomous driving, must be robust to perturbations caused by soft errors. As transistor geometries shrink and supply voltages decrease, modern electronic devices become more susceptible to background radiation, heightening concern about failures produced by soft errors. The resilience of deep neural networks (DNNs) to these errors depends not only on the target device technology but also on model structure and on the numerical representation and arithmetic precision of their parameters. Compression techniques such as pruning and quantization, used to reduce memory footprint and computational complexity, alter both model structure and representation, affecting soft error robustness. In this regard, although often overlooked, the choice of activation functions (AFs) impacts not only accuracy and trainability but also compressibility and error resilience. This paper explores the use of bounded AFs to enhance robustness against parameter perturbations, while evaluating their effects on model accuracy, compressibility, and computational load with a technology-agnostic approach. We focus on encoder-decoder convolutional models developed for semantic segmentation of hyperspectral images with application to autonomous driving systems. Experiments are conducted on an AMD-Xilinx KV260 SoM.
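The intuition behind bounded AFs can be illustrated with a minimal, hypothetical sketch (not taken from the paper): a soft error is simulated as a single bit flip in the IEEE-754 encoding of a weight, and the resulting activation is compared under an unbounded AF (ReLU) versus a bounded one (hard tanh, clamped to [-1, 1]). The `flip_bit` helper and the chosen bit position are illustrative assumptions, not the authors' fault-injection methodology.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Simulate a soft error: flip one bit in the float32 encoding of x."""
    (i,) = struct.unpack("I", struct.pack("f", x))
    (y,) = struct.unpack("f", struct.pack("I", i ^ (1 << bit)))
    return y

def relu(z: float) -> float:
    # Unbounded AF: a corrupted large input passes through unattenuated.
    return max(z, 0.0)

def hard_tanh(z: float) -> float:
    # Bounded AF: output is clamped to [-1, 1], masking large perturbations.
    return min(max(z, -1.0), 1.0)

w, x = 0.5, 1.0
w_faulty = flip_bit(w, 30)  # flip the top exponent bit: 0.5 becomes ~1.7e38

print("clean :", relu(w * x), hard_tanh(w * x))
print("faulty:", relu(w_faulty * x), hard_tanh(w_faulty * x))
```

With the unbounded ReLU, the corrupted weight propagates an activation on the order of 1e38 into downstream layers, while hard tanh saturates at 1.0, bounding the error's impact regardless of the perturbation's magnitude.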

Country of Origin
🇪🇸 Spain

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)