On-Device Fine-Tuning via Backprop-Free Zeroth-Order Optimization

Published: November 14, 2025 | arXiv ID: 2511.11362v1

By: Prabodh Katti, Sangwoo Park, Bipin Rajendran, and more

Potential Business Impact:

Lets AI models adapt directly on small, memory-limited devices, so data never has to leave the device.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

On-device fine-tuning is a critical capability for edge AI systems, which must support adaptation to different agentic tasks under stringent memory constraints. Conventional backpropagation (BP)-based training requires storing layer activations and optimizer states, a demand that can be only partially alleviated through checkpointing. In edge deployments in which the model weights must reside entirely in device memory, this overhead severely limits the maximum model size that can be deployed. Memory-efficient zeroth-order optimization (MeZO) alleviates this bottleneck by estimating gradients using forward evaluations alone, eliminating the need for storing intermediate activations or optimizer states. This enables significantly larger models to fit within on-chip memory, albeit at the cost of potentially longer fine-tuning wall-clock time. This paper first provides a theoretical estimate of the relative model sizes that can be accommodated under BP and MeZO training. We then numerically validate the analysis, demonstrating that MeZO exhibits accuracy advantages under on-device memory constraints, provided sufficient wall-clock time is available for fine-tuning.
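To make the MeZO idea concrete, the sketch below implements one zeroth-order update step in PyTorch. It is a minimal sketch, not the authors' code: the `loss_fn(model, batch)` helper and the `lr` and `eps` values are illustrative assumptions. The key memory trick is that the Gaussian perturbation is regenerated from a stored random seed rather than kept as a tensor, so only the model weights and a few scalars occupy memory during fine-tuning.

```python
import torch

def mezo_step(model, loss_fn, batch, lr=1e-6, eps=1e-3):
    """One MeZO-style update: two forward passes, no stored activations
    or optimizer state. `loss_fn(model, batch)` is a hypothetical helper
    that returns a scalar loss tensor."""
    # Draw a fresh seed; the perturbation z is regenerated from it on
    # demand, so z itself never needs to be stored.
    seed = torch.randint(0, 2**31 - 1, (1,)).item()

    def perturb(scale):
        # Re-seeding replays the exact same Gaussian perturbation
        # across all parameters, in place.
        torch.manual_seed(seed)
        for p in model.parameters():
            z = torch.randn_like(p)
            p.data.add_(z, alpha=scale * eps)

    with torch.no_grad():  # forward-only: no activations are retained
        perturb(+1.0)                             # theta + eps * z
        loss_plus = float(loss_fn(model, batch))
        perturb(-2.0)                             # theta - eps * z
        loss_minus = float(loss_fn(model, batch))
        perturb(+1.0)                             # restore theta

        # Scalar finite-difference estimate of the directional derivative.
        grad_est = (loss_plus - loss_minus) / (2 * eps)

        # SGD-style update along z, regenerating the same z from the seed.
        torch.manual_seed(seed)
        for p in model.parameters():
            z = torch.randn_like(p)
            p.data.add_(z, alpha=-lr * grad_est)

    return loss_plus
```

A back-of-envelope accounting consistent with the abstract's memory claim: Adam-based BP training keeps parameters, gradients, and two moment buffers (roughly 4x parameter memory) plus layer activations, whereas the loop above touches the parameters in place and stores only a seed and two loss scalars, which is why substantially larger models can fit in the same on-chip memory, at the cost of more forward passes and longer wall-clock time.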

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)