An Invariant Information Geometric Method for High-dimensional Online Optimization

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Online optimization, Invariant optimizer, Information geometry, Evolutionary strategy, Bayesian optimization
TL;DR: We build an invariance-preserving framework for online optimization with zeroth-order feedback, and instantiate it as an invariant, scalable, learning-rate-free optimizer that fully incorporates historical information and shows advantages on synthetic and realistic tasks.
Abstract: Sample efficiency lies at the heart of many optimization problems, especially in black-box settings with costly evaluations and zeroth-order feedback. Typical methods such as Bayesian optimization and evolutionary strategies, which stem from an online formulation that optimizes mostly through the current batch, suffer from either high computational cost or low sample efficiency. One promising way to strengthen sample efficiency at reasonable computational cost is to achieve invariance under smooth bijective transformations of the model parameters. In this paper, we build InvIGO, the first practical invariant optimizer framework, based on information geometric optimization; it can incorporate historical information without violating invariance. We then instantiate InvIGO on the multi-dimensional Gaussian distribution, which yields SynCMA, an invariant and scalable optimizer that fully incorporates historical information and requires no external learning rate to tune. We further analyze the theoretical behavior and advantages of our algorithm relative to other Gaussian-based optimizers. We then benchmark SynCMA against other leading optimizers, including competitive Bayesian optimization methods, on synthetic functions, MuJoCo locomotion tasks, and a rover planning task. In all scenarios, SynCMA matches or exceeds the sample efficiency of the other optimizers.
Supplementary Material: zip
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6183