Synergies Between Disentanglement and Sparsity: a Multi-Task Learning Perspective

Published: 01 Feb 2023, 19:30 · Last Modified: 13 Feb 2023, 23:27 · Submitted to ICLR 2023
Keywords: Disentanglement, identifiability, multi-task learning, sparsity, transfer learning, meta-learning
TL;DR: We show how disentangled representations combined with sparse base-predictors can improve generalization and how, in a multi-task learning setting, sparsity regularization on the task-specific predictors can induce disentanglement.
Abstract: Although disentangled representations are often said to be beneficial for downstream tasks, current empirical and theoretical understanding is limited. In this work, we provide evidence that disentangled representations coupled with sparse base-predictors improve generalization. In the context of multi-task learning, we prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations. Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem. Finally, we explore a meta-learning version of this algorithm based on group-Lasso multiclass SVM base-predictors, for which we derive a tractable dual formulation. It obtains competitive results on standard few-shot classification benchmarks while using only a fraction of the learned representation for each task.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representation learning