Auxiliary Learning by Implicit Differentiation

Published: 12 Jan 2021, Last Modified: 22 Oct 2023
Venue: ICLR 2021 Poster
Keywords: Auxiliary Learning, Multi-task Learning
Abstract: Training neural networks with auxiliary tasks is a common practice for improving performance on a main task of interest. Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss. Here, we propose a novel framework, AuxiLearn, that addresses both challenges via implicit differentiation. First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function. This network can learn non-linear interactions between tasks. Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task. We evaluate AuxiLearn across a series of tasks and domains, including image segmentation and learning with attributes in the low-data regime, and find that it consistently outperforms competing methods.
One-sentence Summary: Learn to combine auxiliary tasks in a nonlinear fashion and to design them automatically.
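
As a rough illustration of the first idea in the abstract (a network that non-linearly combines per-task losses into one objective), here is a minimal PyTorch-style sketch. The class name `LossCombiner`, its architecture, and the remark about Neumann-series hypergradients are illustrative assumptions, not the paper's exact implementation; see the linked repository for the authors' code.

```python
# Minimal sketch of a learned, non-linear loss combiner (hypothetical names
# and architecture; PyTorch assumed). A small network g_phi maps the vector
# of per-task losses to a single scalar training loss, rather than using a
# fixed weighted sum.
import torch
import torch.nn as nn

class LossCombiner(nn.Module):
    """Maps K per-task losses to one scalar objective (illustrative only)."""
    def __init__(self, num_tasks: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_tasks, hidden),
            nn.Softplus(),          # smooth non-linearity, keeps g_phi differentiable
            nn.Linear(hidden, 1),
        )

    def forward(self, task_losses: torch.Tensor) -> torch.Tensor:
        # task_losses: shape (K,) -- main-task loss first, auxiliaries after.
        return self.net(task_losses).squeeze()

# Schematic bilevel training loop: the main network's parameters w minimize
# the combined loss g_phi(losses(w)), while phi is updated on main-task
# validation loss. The hypergradient dL_val/dphi is obtained via implicit
# differentiation (in practice, e.g., an approximate inverse-Hessian-vector
# product such as a truncated Neumann series -- an assumption here, not a
# detail taken from this abstract).
combiner = LossCombiner(num_tasks=3)
losses = torch.tensor([0.7, 1.2, 0.4])  # toy per-task loss values
total = combiner(losses)                # scalar objective used to update w
total.backward()                        # gradients also flow to phi
```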
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [AvivNavon/AuxiLearn](https://github.com/AvivNavon/AuxiLearn)
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CIFAR-100](https://paperswithcode.com/dataset/cifar-100), [CUB-200-2011](https://paperswithcode.com/dataset/cub-200-2011), [NYUv2](https://paperswithcode.com/dataset/nyuv2), [Oxford-IIIT Pet Dataset](https://paperswithcode.com/dataset/oxford-iiit-pets), [SVHN](https://paperswithcode.com/dataset/svhn), [Stanford Cars](https://paperswithcode.com/dataset/stanford-cars)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2007.02693/code)