MASIF: Meta-learned Algorithm Selection using Implicit Fidelity Information

Published: 18 Apr 2023, Last Modified: 17 Sept 2024. Accepted by TMLR.
Event Certifications: automl.cc/AutoML/2023/Journal_Track
Abstract: Selecting a well-performing algorithm for a given task or dataset can be time-consuming and tedious, but is crucial for the successful day-to-day business of developing new AI & ML applications. Algorithm Selection (AS) mitigates this through a meta-model that leverages meta-information about previous tasks. However, most available AS methods are error-prone because they characterize a task either by cheap-to-compute dataset properties or by evaluations of cheap proxy algorithms, called landmarks. In this work, we extend the classical AS data setup to include multi-fidelity information and empirically demonstrate how meta-learning on algorithms' learning behaviour allows us to exploit cheap test-time evidence effectively and significantly combat myopia. We further postulate a budget-regret trade-off with respect to the selection process. By leveraging a transformer-based encoder, our new selector MASIF can jointly interpret online evidence on a task, in the form of varying-length learning curves, without any parametric assumptions. This opens up new possibilities for guided rapid prototyping in data science based on cheaply observed partial learning curves.
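To make the abstract's core mechanism concrete, the sketch below shows one way a transformer encoder can jointly read varying-length partial learning curves (one per candidate algorithm) and emit a score for each algorithm. This is a minimal illustration under assumed shapes and hyperparameters; all names here (LearningCurveSelector, d_model, observed, etc.) are hypothetical and not taken from the official MASIF code at https://github.com/automl/masif.

```python
# Hypothetical sketch, not the authors' implementation: tokens are observed
# (algorithm, fidelity step, score) triples; a padding mask handles curves of
# different observed lengths without any parametric curve model.
import torch
import torch.nn as nn


class LearningCurveSelector(nn.Module):
    """Scores candidate algorithms from partially observed learning curves."""

    def __init__(self, n_algorithms: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, max_steps: int = 512):
        super().__init__()
        self.value_embed = nn.Linear(1, d_model)                # embed a scalar curve value
        self.algo_embed = nn.Embedding(n_algorithms, d_model)   # which curve a token belongs to
        self.step_embed = nn.Embedding(max_steps, d_model)      # position along the fidelity axis
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_algorithms)            # one score per candidate algorithm

    def forward(self, curves: torch.Tensor, observed: torch.Tensor) -> torch.Tensor:
        # curves:   (batch, n_algorithms, steps) partial validation scores
        # observed: same shape, True where a curve value has actually been evaluated
        b, a, t = curves.shape
        tokens = self.value_embed(curves.reshape(b, a * t, 1))
        algo_ids = torch.arange(a, device=curves.device).repeat_interleave(t)
        step_ids = torch.arange(t, device=curves.device).repeat(a)
        tokens = tokens + self.algo_embed(algo_ids) + self.step_embed(step_ids)
        # The key-padding mask lets the encoder digest varying-length evidence.
        pad = ~observed.reshape(b, a * t)
        encoded = self.encoder(tokens, src_key_padding_mask=pad)
        # Mean-pool over observed tokens only, then score all algorithms jointly.
        keep = observed.reshape(b, a * t, 1).float()
        pooled = (encoded * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)
        return self.head(pooled)


# Example: 8 candidate algorithms, only the first 3 of 10 fidelity steps seen so far.
model = LearningCurveSelector(n_algorithms=8)
curves = torch.rand(2, 8, 10)
observed = torch.zeros(2, 8, 10, dtype=torch.bool)
observed[:, :, :3] = True
scores = model(curves * observed, observed)  # (2, 8); argmax = selected algorithm
```

Because every observed point is a token and unobserved points are masked out, the same model consumes curves truncated at any fidelity, which is the property that allows selection from cheap partial evidence.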
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Following reviewer bHsm, we have added additional experiments: 1. using LCNet as a baseline for learning curve prediction on LCBench; 2. evaluating our model and all baselines on a different TaskSet subset.
Video: https://youtu.be/4qXRyRjJPIY
Code: https://github.com/automl/masif
Assigned Action Editor: ~Kevin_Swersky1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 637