TYTAN: Taylor-series based Non-Linear Activation Engine for Deep Learning Accelerators
By: Soham Pramanik, Vimal William, Arnab Raha, and more
Potential Business Impact:
Makes AI chips run faster and use less power.
The rapid advancement of AI architectures and the proliferation of AI-enabled systems have intensified the need for domain-specific architectures that improve both the speed and energy efficiency of AI inference, particularly at the edge. This need arises from the significant resource constraints, such as computational cost and energy consumption, associated with deploying AI algorithms, which involve intensive mathematical operations across many layers. High-power operations, including General Matrix Multiplications (GEMMs) and activation functions, can be optimized to address these challenges. Optimization strategies for AI at the edge include algorithmic approaches such as quantization and pruning, as well as hardware methodologies such as domain-specific accelerators. This paper proposes TYTAN, a TaYlor-series based non-linear acTivAtion eNgine that realizes a Generalized Non-linear Approximation Engine (G-NAE). TYTAN accelerates non-linear activation functions while minimizing power consumption. It integrates a re-configurable hardware design with a specialized algorithm that dynamically estimates the approximation order each activation function requires, keeping deviation from baseline accuracy to a minimum. The proposed system is validated through performance evaluations with state-of-the-art AI architectures, including Convolutional Neural Networks (CNNs) and Transformers. System-level simulations using Silvaco's FreePDK45 process node show that TYTAN operates at a clock frequency above 950 MHz and, compared to the baseline open-source NVIDIA Deep Learning Accelerator (NVDLA) implementation, delivers roughly 2x higher performance, ~56% lower power, and ~35x smaller area, supporting accelerated, energy-efficient AI inference at the edge.
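The core idea the abstract describes, dynamically estimating how many Taylor-series terms an activation function needs to stay within an accuracy budget, can be illustrated in a few lines. The sketch below is not TYTAN's hardware design; it is a minimal software analogue, assuming a sigmoid activation built from a truncated Taylor expansion of e^x, with hypothetical helper names (taylor_exp, pick_order) and an arbitrary 1e-3 tolerance chosen for illustration.

    import numpy as np

    def taylor_exp(x, terms):
        # Truncated Taylor series for e^x: sum of x^n / n! for n < terms.
        result = np.zeros_like(x)
        term = np.ones_like(x)
        for n in range(terms):
            result += term
            term = term * x / (n + 1)
        return result

    def sigmoid_taylor(x, terms):
        # Sigmoid assembled from the approximated exponential.
        return 1.0 / (1.0 + taylor_exp(-x, terms))

    def pick_order(x, tolerance=1e-3, max_terms=20):
        # Smallest term count whose worst-case deviation from the exact
        # sigmoid stays within the tolerance over the sampled input range.
        exact = 1.0 / (1.0 + np.exp(-x))
        for terms in range(2, max_terms + 1):
            with np.errstate(all="ignore"):  # early truncations can divide by ~0
                err = np.max(np.abs(sigmoid_taylor(x, terms) - exact))
            if np.isfinite(err) and err <= tolerance:
                return terms
        return max_terms

    x = np.linspace(-4.0, 4.0, 1001)
    print("Taylor terms needed for 1e-3 accuracy:", pick_order(x))

A hardware realization would bound max_terms by the datapath depth and evaluate the series with multiply-accumulate units; the point here is only the accuracy-driven order selection that the paper's algorithm performs per activation function.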
Similar Papers
Atleus: Accelerating Transformers on the Edge Enabled by 3D Heterogeneous Manycore Architectures
Hardware Architecture
Makes smart computer programs run faster and use less power.
TPU-Gen: LLM-Driven Custom Tensor Processing Unit Generator
Hardware Architecture
Builds faster, smaller computer chips for AI.
Dynamic Tsetlin Machine Accelerators for On-Chip Training at the Edge using FPGAs
Hardware Architecture
Makes smart gadgets learn faster and use less power.