Task-Aware Dynamic Model Optimization for Multi-Task Learning

Published: 01 Jan 2023, Last Modified: 13 Nov 2024 · IEEE Access 2023 · CC BY-SA 4.0
Abstract: Multi-task learning (MTL) is a field in which a deep neural network learns knowledge from multiple tasks simultaneously. However, achieving resource-efficient MTL remains challenging due to network parameters that are entangled across tasks and task-specific complexity that varies from task to task. Existing methods apply network compression while maintaining comparable performance, but they typically compress uniformly across all tasks without considering the complexity of each one. This can lead to suboptimal solutions and memory inefficiency, because the parameters allotted to a given task may be insufficient or excessive. To address these challenges, we propose a framework called Dynamic Model Optimization (DMO) that dynamically allocates network parameters to groups of tasks based on task-specific complexity. The framework consists of three key steps: measuring task similarity and task difficulty, grouping tasks, and allocating parameters. It computes both weight and loss similarities across tasks and uses sample-wise loss as the measure of task difficulty. Tasks are grouped by similarity, and parameters are allocated via dynamic pruning according to each task's difficulty within its group. We apply the proposed framework to MTL on various classification datasets. Experimental results demonstrate that the proposed approach achieves high performance while using fewer network parameters than other MTL methods.
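The three steps described in the abstract can be illustrated with a minimal sketch. Note this is an assumption-laden toy version, not the paper's implementation: the combined similarity is taken as the mean of weight and loss cosine similarities, grouping is a simple greedy threshold pass, and the pruning "keep ratio" is a linear function of normalized mean sample-wise loss. Function names (`group_tasks`, `keep_ratios`) and the `threshold`/`floor` parameters are hypothetical.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two flat vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def group_tasks(task_weights, task_losses, threshold=0.8):
    """Greedily group tasks whose combined weight/loss similarity
    exceeds a threshold (illustrative stand-in for DMO's grouping step)."""
    n = len(task_weights)
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [i]
        assigned.add(i)
        for j in range(i + 1, n):
            if j in assigned:
                continue
            # Combined similarity: average of weight and loss similarity.
            sim = 0.5 * (cosine_sim(task_weights[i], task_weights[j])
                         + cosine_sim(task_losses[i], task_losses[j]))
            if sim >= threshold:
                group.append(j)
                assigned.add(j)
        groups.append(group)
    return groups

def keep_ratios(mean_sample_losses, floor=0.3):
    """Allocate a larger fraction of parameters (higher keep ratio)
    to harder tasks, i.e. those with higher mean sample-wise loss."""
    total = sum(mean_sample_losses)
    return [min(1.0, floor + d / total) for d in mean_sample_losses]
```

In this toy setup, tasks with nearly parallel weight and loss vectors end up in the same group, and within a group the task with the larger mean sample-wise loss is pruned less aggressively, mirroring the difficulty-aware allocation the abstract describes.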