Zeroth-order methods for non-smooth stochastic problems under heavy-tailed noise

Published: 20 Sept 2024, Last Modified: 30 Sept 2024. ICOMP Publication. Licence: CC BY 4.0
Keywords: Zeroth-order methods, convex optimisation, saddle point problems, heavy-tailed noise
Abstract: Recently, gradient-free optimisation methods have become a major tool in reinforcement learning and memory-efficient LLM fine-tuning. Under the standard setting of uniformly bounded noise variance, an optimal accelerated algorithm has been derived. However, the bounded-variance assumption is restrictive and often fails in practice. We therefore relax it, allowing the noise distribution to be heavy-tailed and thus broadening the class of problems that can be solved. We propose gradient-free algorithms with a zeroth-order oracle under adversarial noise with unbounded variance, for non-smooth convex and convex-concave optimisation problems. We apply a clipping operator to handle heavy-tailedness and batching to enable efficient computation via parallelisation. Our analysis provides asymptotic bounds on key parameters: iteration complexity, oracle complexity, and the maximal adversarial noise level.
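To make the abstract's two main ingredients concrete, the sketch below combines a batched two-point zeroth-order gradient estimator with norm clipping. This is an illustrative construction, not the paper's algorithm: the function name, the smoothing radius `tau`, the clipping level `lam`, and the batch size are all assumed parameters chosen for the example.

```python
import numpy as np

def zo_clipped_grad(f, x, tau=1e-3, lam=10.0, batch=8, rng=None):
    """Illustrative batched two-point zeroth-order estimator with clipping.

    f     : noisy zeroth-order oracle, f(x) -> float
    tau   : smoothing radius (assumed parameter)
    lam   : clipping level bounding the estimate's norm (assumed parameter)
    batch : number of independent two-point estimates averaged
    """
    rng = rng or np.random.default_rng()
    d = x.size
    g = np.zeros(d)
    for _ in range(batch):
        e = rng.standard_normal(d)
        e /= np.linalg.norm(e)  # random direction on the unit sphere
        # two-point finite-difference estimate along direction e
        g += (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * d * e
    g /= batch  # batching: averaging reduces variance and parallelises
    # clipping: shrink heavy-tailed estimates so the norm is at most lam
    n = np.linalg.norm(g)
    return g if n <= lam else g * (lam / n)
```

Clipping caps the influence of any single heavy-tailed oracle evaluation, while batching averages independent estimates and can be computed in parallel, matching the roles described in the abstract.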
Submission Number: 27