NgIohTuned, a new Black-Box Optimization Wizard for Real World Machine Learning

TMLR Paper 2274 Authors

21 Feb 2024 (modified: 30 Apr 2024) · Rejected by TMLR
Abstract: Inspired by observations in neuro-control and various reproducibility issues in machine-learning black-box optimization, we analyze the gap between real-world and artificial benchmarks. We (i) compare real-world benchmarks with artificial ones, emphasizing the success of Differential Evolution (DE) and Particle Swarm Optimization (PSO) in the former case; (ii) propose new artificial benchmarks that include properties observed in the real world, in particular in neural reinforcement learning, with a special emphasis on scaling issues, where scale refers to the unknown distance between the optimum and the origin; (iii) observe the good performance of quasi-opposite sampling and of Cobyla on some problems for which the scale is critical; (iv) observe the robust performance of discrete optimization methods built on an optimized decreasing schedule of the mutation scale; (v) design more efficient black-box optimization algorithms that sequentially combine optimization algorithms with good scaling properties in a first phase, robust optimization algorithms in the middle phase, and fast convergence techniques in the final optimization phase. All methods are included in a public optimization wizard: NgIoh4 (which does not take the type of variables into account) and NgIohTuned (which incorporates all conclusions of the paper, including whether a problem is real-world and/or a neurocontrol task).
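To make point (v) concrete, the following is a minimal, self-contained sketch of the three-phase sequential chaining idea described in the abstract. It is an illustrative toy, not the actual NgIoh4/NgIohTuned wizard (which is built on the Nevergrad library); all function names here are hypothetical, and each phase is a deliberately simple stand-in: log-uniform-radius random search for the scale-aware first phase, a (1+1) evolution strategy with a decreasing mutation schedule for the robust middle phase, and greedy coordinate descent as a crude substitute for a fast local converger such as Cobyla.

```python
import math
import random

def chained_minimize(f, dim, budget, seed=0):
    """Hypothetical three-phase chain in the spirit of point (v) of the
    abstract. Not the paper's algorithm: each phase is a simplified
    placeholder for the class of method the abstract describes."""
    rng = random.Random(seed)
    b1 = b2 = budget // 3
    b3 = budget - b1 - b2

    # Phase 1 (scaling): sample candidates at log-uniformly distributed
    # radii, coping with an unknown distance between optimum and origin.
    best = [0.0] * dim
    best_val = f(best)
    for _ in range(b1):
        radius = 10.0 ** rng.uniform(-3.0, 3.0)
        cand = [rng.gauss(0.0, radius) for _ in range(dim)]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val

    # Phase 2 (robust middle): (1+1) evolution strategy whose mutation
    # scale decreases over iterations (cf. point (iv)).
    sigma = max(1e-3, math.sqrt(sum(x * x for x in best)))
    for t in range(b2):
        step = sigma * (1.0 - t / b2) + 1e-6
        cand = [x + rng.gauss(0.0, step) for x in best]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val

    # Phase 3 (fast final convergence): greedy coordinate-wise moves with
    # a shrinking step, a toy stand-in for a local optimizer like Cobyla.
    step = max(1e-6, sigma * 0.05)
    used = 0
    while used < b3:
        improved = False
        for d in range(dim):
            for delta in (step, -step):
                if used >= b3:
                    break
                cand = list(best)
                cand[d] += delta
                val = f(cand)
                used += 1
                if val < best_val:
                    best, best_val = cand, val
                    improved = True
        if not improved:
            step *= 0.5

    return best, best_val

# Example: a shifted sphere whose optimum is away from the origin,
# so the first phase must discover the right scale.
sphere = lambda x: sum((xi - 3.0) ** 2 for xi in x)
x, fx = chained_minimize(sphere, dim=2, budget=3000)
```

The design choice illustrated here is the one the abstract argues for: the budget is split so that scale discovery happens before robust search, and robust search happens before aggressive local convergence.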
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=J2FozfEsIF
Changes Since Last Submission: We have taken into account the additional review, leading to a major revision.
Assigned Action Editor: ~Xi_Lin2
Submission Number: 2274