
Bayesian Subspace Gradient Estimation for Zeroth-Order Optimization of Large Language Models

Published: January 4, 2026 | arXiv ID: 2601.01452v1

By: Jian Feng, Zhihong Huang

Potential Business Impact:

Enables large language models to be fine-tuned with memory requirements close to inference alone, lowering the hardware cost of model adaptation.

Business Areas:
A/B Testing Data and Analytics

Fine-tuning large language models (LLMs) with zeroth-order (ZO) optimization reduces memory by approximating gradients through function evaluations, but existing methods rely on one-step gradient estimates from random perturbations. We introduce Bayesian Subspace Zeroth-Order optimization (BSZO), a ZO optimizer that applies Kalman filtering to combine finite-difference information across multiple perturbation directions. By treating each finite-difference measurement as a noisy observation, BSZO builds a posterior distribution over the projected gradient and updates it through Bayesian inference, with a residual-based adaptive mechanism to adjust perturbation scales. Theoretical analysis shows that BSZO improves the convergence rate by a factor of $k/\gamma$ compared to standard ZO methods. Experiments on RoBERTa, Mistral, and OPT models show that BSZO outperforms MeZO, MeZO-Adam, and HiZOO across various tasks, achieving up to 6.67% absolute average improvement on OPT-13B while keeping memory usage close to inference-only baselines (1.00$\times$--1.08$\times$ of MeZO).
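To make the idea concrete, here is a minimal sketch of the mechanism the abstract describes: finite-difference directional derivatives are treated as noisy observations and fused with a prior via a scalar Kalman-style (Bayesian) update, with a residual-based adjustment of the perturbation scale. This is an illustrative assumption-laden sketch, not the authors' implementation; names such as `bszo_step`, the prior and noise variances, and the adaptation thresholds are hypothetical.

```python
# Hypothetical sketch of Bayesian-fused zeroth-order gradient estimation.
# Not the authors' code: hyperparameters, names, and the adaptation rule are assumptions.
import numpy as np

def bszo_step(f, x, k=4, eps=1e-3, lr=1e-2, obs_var=1.0, prior_var=1.0, rng=None):
    """One zeroth-order step that fuses k directional finite differences."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    grad_est = np.zeros(d)
    for _ in range(k):
        u = rng.standard_normal(d)              # random perturbation direction
        u /= np.linalg.norm(u)
        # Two-sided finite difference: a noisy observation of the directional derivative u^T grad f(x).
        y = (f(x + eps * u) - f(x - eps * u)) / (2.0 * eps)
        # Scalar Kalman update: zero-mean Gaussian prior on the directional derivative,
        # observation y with variance obs_var -> posterior mean shrinks the raw estimate.
        gain = prior_var / (prior_var + obs_var)
        post_mean = gain * y
        # Residual-based adaptation (sketch): a large residual suggests the perturbation
        # scale is too coarse, so shrink eps; otherwise let it grow, within fixed bounds.
        residual = abs(y - post_mean)
        eps = float(np.clip(eps * (0.9 if residual > 1.0 else 1.1), 1e-5, 1e-2))
        grad_est += post_mean * u               # accumulate the subspace gradient estimate
    grad_est /= k
    return x - lr * grad_est, eps               # SGD-style update on the ZO estimate

# Toy usage: minimize a quadratic using only function evaluations.
if __name__ == "__main__":
    f = lambda x: float(np.sum(x ** 2))
    x, eps = np.ones(10), 1e-3
    for _ in range(200):
        x, eps = bszo_step(f, x, eps=eps)
    print("final loss:", f(x))
```

The memory appeal of ZO methods comes from the fact that only forward evaluations of `f` are needed, so no activation or optimizer state for backpropagation is stored; the sketch keeps that property while using multiple fused directions per step.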

Country of Origin
🇨🇳 China

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)