Keywords: Meta Learning, Auxiliary Learning, Sample Weighting, Multi-Task Learning
Abstract: Auxiliary Learning (AL) is a form of Multi-Task Learning in which a model leverages auxiliary tasks to improve performance on a primary task. AL has boosted performance across multiple domains, including navigation, image classification, and natural language processing. One of the main weaknesses of AL is the need for labeled auxiliary tasks, which can require human effort and domain expertise to generate. Furthermore, it has been shown that not all auxiliary tasks are equally beneficial to primary-task performance, so deciding how to weight an auxiliary task or sample during training is itself a hard problem. Recent work addresses the task-creation problem by learning auxiliary labels with Meta Learning approaches, often via bi-level optimization; however, these methods assume uniform weighting across data points. Other works address selecting weights for known, hand-crafted tasks. In this work, we propose Weight-Aware Meta Auxiliary Learning (WAMAL), a novel framework that jointly learns both auxiliary labels and per-sample auxiliary loss weights to better guide the primary task. Our method improves upon existing approaches by allowing more nuanced and adaptive task supervision. Across multiple benchmarks, WAMAL surpasses both handcrafted auxiliaries and prior meta-auxiliary baselines. On CIFAR-100 (20 super-classes, VGG16) it reaches 80.2\% test accuracy (+5.6 pp over human-designed auxiliaries; +2.8 pp over weight-unaware meta-learning). When fine-tuning ViT-B/16 on Oxford-IIIT Pet, WAMAL improves accuracy by 0.62 pp. These results underscore the importance of learning both which auxiliary tasks to use and how strongly to weight them at the sample level. The code repository will be released after submission. Anonymized version: \url{https://anonymous.4open.science/r/wamal-66EF/README.md}.
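For intuition, the joint objective described in the abstract can be sketched as a bi-level problem. The notation below (main network $f_\theta$, auxiliary-label network $g_\phi$, per-sample weight network $h_\psi$, weights $w_i$) is illustrative only and not taken verbatim from the paper:
\begin{equation*}
\min_{\phi,\psi}\; \mathcal{L}_{\text{pri}}\big(f_{\theta^{*}(\phi,\psi)}\big)
\quad\text{s.t.}\quad
\theta^{*}(\phi,\psi) = \arg\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N}
\Big[\mathcal{L}_{\text{pri}}\big(f_\theta(x_i), y_i\big)
+ w_i\,\mathcal{L}_{\text{aux}}\big(f_\theta(x_i), g_\phi(x_i)\big)\Big],
\qquad w_i = h_\psi(x_i).
\end{equation*}
Under this reading, the outer loop updates $\phi$ and $\psi$ based on the primary loss of the inner-loop-adapted model, so both the generated auxiliary labels and the per-sample weights are driven by primary-task performance rather than fixed a priori.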
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 20624