ConMeZO: Adaptive Directional Sampling for Gradient-Free Finetuning of Language Models

Published: 11 Jun 2025 · Last Modified: 10 Jul 2025 · ES-FoMo III · CC BY 4.0
Keywords: Large Language Models, Finetuning, Zeroth-Order, Gradient Free, Adaptive Directional Sampling
TL;DR: Faster zeroth-order algorithm for LLM finetuning through adaptive directional sampling
Abstract: Zeroth-order optimization, as used in MeZO, is an attractive strategy for finetuning large language models (LLMs) because it eliminates the memory overhead of storing the intermediate activations required by backpropagation. However, it converges slowly due to the curse of dimensionality: descent directions sampled uniformly at random in the high-dimensional parameter space of billion-parameter LLMs are rarely well aligned with the true gradient. We propose ConMeZO, a novel zeroth-order optimizer that accelerates convergence through adaptive directional sampling. Instead of drawing the direction uniformly at random, ConMeZO restricts the sampling to a cone centered around a momentum estimate. This concentrates the search on directions where the true gradient is more likely to lie and thus mitigates the effect of high dimensionality. We analytically prove that ConMeZO achieves the same worst-case convergence rate as MeZO. Empirically, when finetuning LLMs on natural language benchmarks, ConMeZO is up to 2x faster than MeZO while retaining the low-memory footprint of zeroth-order methods.
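The abstract only sketches the mechanism, so the following is a minimal NumPy sketch of the idea it describes: sample a direction on a cone around a momentum estimate, then apply a MeZO-style two-point finite-difference update. The function names (`sample_cone_direction`, `conmezo_step`), the fixed cone half-angle, and the exponential-moving-average momentum rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sample_cone_direction(momentum, angle, rng):
    """Sample a unit vector on a cone of half-angle `angle` around the
    current momentum estimate (the paper's exact sampling rule may differ)."""
    m_hat = momentum / np.linalg.norm(momentum)
    z = rng.standard_normal(momentum.shape)
    # Keep only the component of z orthogonal to the momentum direction.
    z_perp = z - np.dot(z, m_hat) * m_hat
    z_perp_hat = z_perp / np.linalg.norm(z_perp)
    # Tilt the momentum direction by `angle` toward the random orthogonal one.
    return np.cos(angle) * m_hat + np.sin(angle) * z_perp_hat

def conmezo_step(params, loss_fn, momentum, lr=0.05, eps=1e-3,
                 angle=np.pi / 4, beta=0.9, rng=None):
    """One hypothetical ConMeZO update: a cone-restricted direction combined
    with a MeZO-style two-point finite-difference derivative estimate."""
    if rng is None:
        rng = np.random.default_rng()
    d = sample_cone_direction(momentum, angle, rng)
    # Two-point zeroth-order estimate of the directional derivative along d.
    g = (loss_fn(params + eps * d) - loss_fn(params - eps * d)) / (2 * eps)
    params = params - lr * g * d
    # Exponential-moving-average momentum (an assumption, not from the paper).
    momentum = beta * momentum + (1 - beta) * g * d
    return params, momentum

# Toy demo on a quadratic loss; a random warm-start momentum avoids a
# zero vector in the first normalization.
rng = np.random.default_rng(0)
dim = 1000
target = rng.standard_normal(dim)
loss = lambda x: 0.5 * np.sum((x - target) ** 2)
x = np.zeros(dim)
m = rng.standard_normal(dim)
print("initial loss:", loss(x))
for _ in range(2000):
    x, m = conmezo_step(x, loss, m, rng=rng)
print("final loss:", loss(x))
```

The cone construction makes the trade-off explicit: a small half-angle concentrates samples near the momentum estimate, while an angle near π/2 recovers nearly isotropic sampling as in plain MeZO.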
Submission Number: 94