FlexAct: Why Learn when you can Pick?

Published: January 10, 2026 | arXiv ID: 2601.06441v1

By: Ramnath Kumar, Kyle Ritscher, Junmin Judy, and more

Potential Business Impact:

Lets neural networks automatically pick the best activation function for a given task.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Learning activation functions has emerged as a promising direction in deep learning, allowing networks to adapt activation mechanisms to task-specific demands. In this work, we introduce a novel framework that employs the Gumbel-Softmax trick to enable discrete yet differentiable selection among a predefined set of activation functions during training. Our method dynamically learns the optimal activation function independently of the input, thereby enhancing both predictive accuracy and architectural flexibility. Experiments on synthetic datasets show that our model consistently selects the most suitable activation function, underscoring its effectiveness. These results connect theoretical advances with practical utility, paving the way for more adaptive and modular neural architectures in complex learning scenarios.
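The core idea described in the abstract — a discrete yet differentiable, input-independent choice among a predefined pool of activation functions via the Gumbel-Softmax trick — can be sketched as follows. This is an illustrative PyTorch sketch, not the paper's implementation: the candidate pool, temperature, and class name are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelActivationSelector(nn.Module):
    """Illustrative sketch: pick one activation from a fixed pool with
    the Gumbel-Softmax trick (candidate set and names are assumptions)."""

    def __init__(self, tau: float = 1.0):
        super().__init__()
        # Predefined candidate activation functions (assumed pool).
        self.candidates = [torch.relu, torch.tanh, torch.sigmoid, F.gelu]
        # Input-independent selection logits, one per candidate; these are
        # the only learned parameters of the selector.
        self.logits = nn.Parameter(torch.zeros(len(self.candidates)))
        self.tau = tau  # softmax temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Approximately one-hot sample that still passes gradients
            # to self.logits (straight-through estimator via hard=True).
            weights = F.gumbel_softmax(self.logits, tau=self.tau, hard=True)
        else:
            # Deterministic discrete pick at evaluation time.
            weights = F.one_hot(self.logits.argmax(),
                                len(self.candidates)).float()
        # Apply every candidate, then reduce with the (one-hot) weights,
        # which is equivalent to applying only the selected activation.
        outs = torch.stack([f(x) for f in self.candidates], dim=0)
        return torch.einsum('k,k...->...', weights, outs)
```

Because the sampled weight vector is effectively one-hot, the selection stays discrete at inference time while remaining differentiable during training, which is what lets the selection logits be trained jointly with the rest of the network by ordinary backpropagation.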

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)