Enhancing Task-Related Features Learning with Task Agnostic-to-Specific Attention for Multi-task Dense Prediction

16 Nov 2022 · OpenReview Archive Direct Upload
Abstract: Existing works generally adopt the encoder-decoder structure for Multi-task Learning (MTL), where the encoder extracts task-agnostic features and multiple decoders generate task-specific features for predictions. One key issue of such a structure is that the task-related features, which contain both low-level and beneficial high-level representations, have long been underestimated and under-explored for multi-task predictions. In this work, by learning the task-related features jointly from both task-agnostic and task-specific features, we reveal that, in addition to the task-specific features in the decoders, the task-related features also benefit the final predictions. Inspired by this, we propose a novel Task-Related Feature Learning (TRFL) method that consists of a decoupling stage and a recoupling stage. In the decoupling stage, TRFL introduces multi-scale deep supervision and a Task Agnostic-to-Specific Attention Module (TASAM) to better decouple the task-agnostic features and produce task-specific features. In the recoupling stage, a Task-Related Feature Aggregation Module (TRFAM) is designed to aggregate the task-agnostic and task-specific features and learn task-related features. Extensive experiments are conducted on the NYUD-v2 and PASCAL Context benchmarks, and the results show the superior performance of the proposed architecture in promoting different dense prediction tasks simultaneously.
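The abstract does not specify how TASAM or TRFAM are implemented, so the following is only an illustrative sketch of the general idea: task-specific features attend over task-agnostic features (a cross-attention, standing in for TASAM), and the attended result is fused with the task-specific features (a concatenate-and-project step, standing in for TRFAM). All function names, shapes, and the fusion choice are assumptions, not the authors' actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def agnostic_to_specific_attention(f_specific, f_agnostic):
    """Illustrative cross-attention (TASAM stand-in, an assumption):
    task-specific features [N, C] query the task-agnostic features [N, C]."""
    scale = np.sqrt(f_agnostic.shape[-1])
    attn = softmax(f_specific @ f_agnostic.T / scale, axis=-1)  # [N, N] weights
    return attn @ f_agnostic  # [N, C] agnostic features re-weighted per task token

def aggregate_task_related(f_specific, f_agnostic, rng=None):
    """Illustrative recoupling (TRFAM stand-in, an assumption):
    concatenate specific and attended-agnostic features, project back to C."""
    rng = rng or np.random.default_rng(0)
    attended = agnostic_to_specific_attention(f_specific, f_agnostic)
    fused = np.concatenate([f_specific, attended], axis=-1)     # [N, 2C]
    # Random projection stands in for a learned linear layer.
    W = rng.normal(size=(fused.shape[-1], f_specific.shape[-1]))
    W /= np.sqrt(fused.shape[-1])
    return fused @ W  # [N, C] "task-related" features

# Usage: 8 spatial tokens with 16 channels for one task head.
rng = np.random.default_rng(1)
f_spec = rng.normal(size=(8, 16))
f_agn = rng.normal(size=(8, 16))
f_related = aggregate_task_related(f_spec, f_agn)
```

In the paper's full pipeline, one such recoupled feature map would presumably be produced per task and per scale before the final prediction heads; that multi-scale wiring is omitted here.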