Abstract: Multi-task learning commonly encounters competition for resources among tasks, especially when model capacity is limited. This challenge motivates models that allow control over the relative importance of tasks and the total compute cost at inference time. In this work, we propose such a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints. In contrast to existing dynamic multi-task approaches, which adjust only the weights within a fixed architecture, our approach affords the flexibility to dynamically control the total computational cost and better match the user-preferred task importance. We propose a disentangled training of two hypernetworks, exploiting task affinity and a novel branching-regularized loss, to take input preferences and accordingly predict tree-structured models with adapted weights. Experiments on three multi-task benchmarks, namely PASCAL-Context, NYU-v2, and CIFAR-100, show the efficacy of our approach. The project page is available at https://www.nec-labs.com/-mas/DYMU.
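The abstract describes an interface in which two hypernetworks consume a user's task-preference vector and a compute budget, and output an architecture (tree branching) and its weights. The sketch below is a minimal, hypothetical illustration of that interface in PyTorch, not the authors' implementation: the class names (`BranchingHypernet`, `WeightHypernet`), layer sizes, and the use of a soft branching distribution are all assumptions for clarity.

```python
import torch
import torch.nn as nn

class BranchingHypernet(nn.Module):
    """Maps a task-preference vector and a compute budget to soft
    branching decisions over a tree-structured multi-task backbone.
    Illustrative only; dimensions and structure are assumptions."""
    def __init__(self, num_tasks, num_branch_points, num_choices):
        super().__init__()
        self.num_branch_points = num_branch_points
        self.num_choices = num_choices
        self.mlp = nn.Sequential(
            nn.Linear(num_tasks + 1, 64), nn.ReLU(),
            nn.Linear(64, num_branch_points * num_choices),
        )

    def forward(self, preference, budget):
        # preference: (num_tasks,) relative task importance, sums to 1
        # budget: scalar in [0, 1], fraction of full compute allowed
        x = torch.cat([preference, budget.view(1)])
        logits = self.mlp(x).view(self.num_branch_points, self.num_choices)
        return torch.softmax(logits, dim=-1)  # soft branch selection

class WeightHypernet(nn.Module):
    """Predicts a flat vector of backbone weights conditioned on the
    same preference vector. The paper trains the two hypernetworks in
    a disentangled fashion; this sketch shows only the interface."""
    def __init__(self, num_tasks, target_numel):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_tasks, 128), nn.ReLU(),
            nn.Linear(128, target_numel),
        )

    def forward(self, preference):
        return self.mlp(preference)

# Example: 3 tasks, 4 branch points, each choosing to share or split.
pref = torch.tensor([0.6, 0.3, 0.1])   # user-specified task importance
budget = torch.tensor(0.5)             # use at most 50% of full compute
arch_net = BranchingHypernet(num_tasks=3, num_branch_points=4, num_choices=2)
weight_net = WeightHypernet(num_tasks=3, target_numel=1024)
branch_probs = arch_net(pref, budget)  # (4, 2) soft branching decisions
weights = weight_net(pref)             # flat weight vector for the backbone
```

The point of conditioning both networks on the preference (and the budget, for the architecture) is that a new preference or budget at inference time yields a new tree and new weights in a single forward pass, without retraining.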