Visualising Policy-Reward Interplay to Inform Zeroth-Order Preference Optimisation of Large Language Models
By: Alessio Galatolo, Zhenbang Dai, Katie Winkle, and more
Potential Business Impact:
Teaches AI to write better with less computing power.
Fine-tuning Large Language Models (LLMs) with first-order methods like back-propagation is computationally intensive. Zeroth-Order (ZO) optimisation uses function evaluations instead of gradients, reducing memory usage, but suffers from slow convergence in high-dimensional models. As a result, ZO research in LLMs has mostly focused on classification, overlooking more complex generative tasks. In this paper, we introduce ZOPrO, a novel ZO algorithm designed for Preference Optimisation in LLMs. We begin by analysing the interplay between policy and reward models during traditional (first-order) Preference Optimisation, uncovering patterns in their relative updates. Guided by these insights, we adapt Simultaneous Perturbation Stochastic Approximation (SPSA) with a targeted sampling strategy to accelerate convergence. Through experiments on summarisation, machine translation, and conversational assistants, we demonstrate that our method consistently enhances reward signals while achieving convergence times comparable to first-order methods. While it falls short of some state-of-the-art methods, our work is the first to apply Zeroth-Order methods to Preference Optimisation in LLMs, going beyond classification tasks and paving the way for a largely unexplored research direction. Code and visualisations are available at https://github.com/alessioGalatolo/VisZOPrO
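To make the core idea concrete, below is a minimal sketch of Simultaneous Perturbation Stochastic Approximation (SPSA), the zeroth-order estimator the abstract says ZOPrO adapts. It replaces back-propagation with two loss evaluations per step. This is an illustrative toy, not the paper's method: the loss function, dimensions, and hyperparameters are assumptions, and ZOPrO's targeted sampling strategy is not reproduced here.

```python
# Minimal SPSA sketch (vanilla, without ZOPrO's targeted sampling).
# All names and hyperparameters below are illustrative assumptions.
import numpy as np

def spsa_step(theta, loss_fn, lr=1e-2, c=1e-3, rng=np.random.default_rng()):
    """One SPSA update: estimate the gradient from two loss evaluations."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    # Two function evaluations instead of back-propagation; for Rademacher
    # perturbations, dividing by delta equals multiplying by delta.
    g_hat = (loss_fn(theta + c * delta) - loss_fn(theta - c * delta)) / (2 * c) * delta
    return theta - lr * g_hat

# Toy usage: a quadratic standing in for a preference-optimisation loss.
loss = lambda w: float(np.sum((w - 1.0) ** 2))
w = np.zeros(8)
for _ in range(500):
    w = spsa_step(w, loss)
print(w)  # approaches the optimum at all-ones
```

The memory saving the abstract refers to comes from this structure: each step needs only forward passes through the model, so no activations or gradients have to be stored, at the cost of a noisier update and slower convergence in high dimensions.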
Similar Papers
Branch, or Layer? Zeroth-Order Optimization for Continual Learning of Vision-Language Models
CV and Pattern Recognition
Learns new things without forgetting old ones.
Zeroth-Order Optimization is Secretly Single-Step Policy Optimization
Machine Learning (CS)
Makes computers learn faster by guessing answers.
TeZO: Empowering the Low-Rankness on the Temporal Dimension in the Zeroth-Order Optimization for Fine-tuning LLMs
Machine Learning (CS)
Makes AI learn faster with less computer power.