Interleaving Multi-Task Neural Architecture Search

15 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Interleaving learning, neural architecture search
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We develop an interleaving learning framework for neural architecture search.
Abstract: Multi-task neural architecture search (MTNAS), which searches for a shared architecture across multiple tasks, has been broadly investigated. In existing methods, multiple tasks are learned simultaneously by minimizing a weighted sum of their losses. Balancing these losses by finding the optimal loss weights requires extensive tuning, which is time-consuming and labor-intensive. To address this problem, we propose an interleaving MTNAS framework that requires no tuning of loss weights. In our method, a set of tasks (e.g., A, B, C) is performed in an interleaving loop (e.g., ABCABCABC...), where each task transfers its knowledge to the next task. Each task is learned by minimizing its own loss alone, without interfering with the losses of other tasks. The loss functions of the individual tasks are organized into a multi-level optimization framework, which enables all tasks to be performed end-to-end. The effectiveness of our method is demonstrated in a variety of experiments.
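To make the interleaving idea concrete, below is a minimal PyTorch-style sketch of the training-loop structure the abstract describes: tasks are visited in a fixed cycle and each step minimizes a single task's loss, so no loss weights are tuned. This is not the authors' implementation; it omits the architecture-search and multi-level-optimization components, and all names (`SharedBackbone`, `interleaved_training`, `random_batches`) and hyperparameters are illustrative assumptions.

```python
# Sketch only: interleaving multi-task training with per-task losses
# (no weighted sum of losses). Knowledge passes between tasks through
# the shared backbone that every task updates in turn.
import itertools
import torch
import torch.nn as nn


class SharedBackbone(nn.Module):
    """Shared representation handed from one task to the next in the cycle."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)


def interleaved_training(tasks, backbone, steps=300, lr=1e-3):
    # One classification head and optimizer per task; each optimizer also
    # updates the shared backbone, but only through that task's own loss.
    heads = {name: nn.Linear(32, out_dim) for name, (_, out_dim) in tasks.items()}
    opts = {
        name: torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=lr)
        for name, head in heads.items()
    }
    cycle = itertools.cycle(tasks.keys())  # e.g. A, B, C, A, B, C, ...
    for _ in range(steps):
        name = next(cycle)
        loader, _ = tasks[name]
        x, y = next(loader)                # one mini-batch for the current task
        loss = nn.functional.cross_entropy(heads[name](backbone(x)), y)
        opts[name].zero_grad()
        loss.backward()                    # gradient of this task's loss alone
        opts[name].step()                  # updated backbone is reused by the next task
    return backbone, heads


def random_batches(out_dim, batch=8):
    # Placeholder data stream standing in for a real task's data loader.
    while True:
        yield torch.randn(batch, 16), torch.randint(0, out_dim, (batch,))


# Usage example with three hypothetical tasks A, B, C:
tasks = {"A": (random_batches(3), 3), "B": (random_batches(5), 5), "C": (random_batches(2), 2)}
backbone, heads = interleaved_training(tasks, SharedBackbone())
```

The key design point the sketch illustrates is that no step ever forms a weighted sum of task losses; coupling between tasks arises only through the shared parameters carried around the interleaving loop.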
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 454