Score: 2

ADEPT: Adaptive Dynamic Early-Exit Process for Transformers

Published: January 7, 2026 | arXiv ID: 2601.03700v1

By: Sangmin Yoo, Srikanth Malla, Chiho Choi, and more

BigTech Affiliations: Samsung

Potential Business Impact:

Speeds up large language model inference by letting individual tokens skip unnecessary transformer layers, cutting compute without degrading output quality.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Inference with large language models imposes a significant computational workload, often requiring the processing of billions of parameters. Although early-exit strategies have proven effective at reducing computational demands by halting inference early, they apply only to the first token in the generation phase or at the prompt level in the prefill phase. The Key-Value (KV) cache for skipped layers thus remains a bottleneck for subsequent token generation, limiting the benefits of early exit. We introduce ADEPT (Adaptive Dynamic Early-exit Process for Transformers), a novel approach designed to overcome this issue and enable dynamic early exit in both the prefill and generation phases. The proposed adaptive token-level early-exit mechanism adjusts computation dynamically based on token complexity, optimizing efficiency without compromising performance. ADEPT further enhances the KV-generation procedure by decoupling sequential dependencies in skipped layers, making token-level early exit more practical. Experimental results demonstrate that ADEPT improves efficiency by up to 25% on language generation tasks and achieves a 4x speed-up on downstream classification tasks, with up to a 45% improvement in performance.
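
The abstract describes two mechanisms: a per-token confidence gate that halts layer computation early, and a way to fill in KV-cache entries for the layers a token skips so later tokens can still attend to that position. The toy PyTorch sketch below illustrates that general shape; all names, shapes, and thresholds (the stand-in linear blocks, the `exit_head` gate, the 0.9 cutoff, and the per-layer `kv_proj` projections) are illustrative assumptions, not ADEPT's actual design.

```python
# Minimal sketch of token-level early exit with KV fill-in for skipped
# layers. Assumptions, not the paper's method: stand-in Linear "layers",
# a sigmoid exit_head as the confidence gate, and kv_proj as a cheap
# per-layer K/V projection used to populate the cache for skipped layers.
import torch
import torch.nn as nn

D = 64        # hidden size (assumed)
N_LAYERS = 4  # number of transformer layers (assumed)

layers = nn.ModuleList(nn.Linear(D, D) for _ in range(N_LAYERS))       # stand-in blocks
kv_proj = nn.ModuleList(nn.Linear(D, 2 * D) for _ in range(N_LAYERS))  # per-layer K/V
exit_head = nn.Linear(D, 1)  # confidence gate (assumed form)
THRESHOLD = 0.9              # assumed exit threshold

def forward_token(h, kv_cache):
    """Run one token through the stack, exiting early when confident."""
    for i, layer in enumerate(layers):
        h = torch.relu(layer(h))
        kv_cache[i].append(kv_proj[i](h))  # normal KV write for this layer
        conf = torch.sigmoid(exit_head(h)).item()
        if conf > THRESHOLD:
            # Early exit: fill KV for the skipped layers directly from the
            # current hidden state, so future tokens can still attend here.
            for j in range(i + 1, N_LAYERS):
                kv_cache[j].append(kv_proj[j](h))
            break
    return h

kv_cache = [[] for _ in range(N_LAYERS)]
out = forward_token(torch.randn(D), kv_cache)
print(out.shape, [len(c) for c in kv_cache])  # every layer's cache gets an entry
```

Whether a token exits at layer 1 or runs the full stack, every layer's cache ends up with an entry for its position, which is the property that lets subsequent tokens attend normally; the paper's contribution is making that fill-in cheap by decoupling the sequential dependencies across skipped layers.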

Country of Origin
🇺🇸 🇰🇷 United States, South Korea

Page Count
22 pages

Category
Computer Science:
Computation and Language