Keywords: Multi-task Learning, In-memory Computing, Mapping, Optimization
TL;DR: The paper introduces a novel framework, Multi-task Heterogeneous Layer Mapping (MHLM), which efficiently maps multi-task learning models onto heterogeneous analog-digital accelerators.
Abstract: Multi-task Learning (MTL) models are increasingly popular for their ability to perform multiple tasks using shared parameters, significantly reducing redundant computations and resource utilization. These models are particularly advantageous for analog-digital heterogeneous systems, where shared parameters can be mapped onto weight-stationary analog cores.
This paper introduces a novel framework, entitled Multi-task Heterogeneous Layer Mapping (MHLM), designed to strategically map MTL models onto an accelerator that integrates analog in-memory computing cores and digital processing units. Our framework incorporates a training process that increases task similarity and accounts for analog non-idealities through hardware-aware training. In the subsequent mapping phase, deployment on the accelerator is optimized for resource allocation and model performance by leveraging feature similarity and importance. Experiments on the COCO, UCI, and BelgiumTS datasets demonstrate that this approach reduces model parameters by up to 3× while maintaining performance within 0.03% of task-specific models.
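For intuition only, the following is a minimal sketch of a similarity-driven layer-assignment heuristic, not the paper's actual MHLM algorithm: the function names `layer_similarity` and `assign_layers`, the cosine-similarity measure over per-task weights, and the 0.9 sharing threshold are all illustrative assumptions. Layers whose per-task weights are highly similar are treated as shareable and kept weight-stationary on the analog core, while the remaining layers stay on digital units.

```python
import numpy as np


def layer_similarity(weights_a: np.ndarray, weights_b: np.ndarray) -> float:
    """Cosine similarity between two tasks' weights for the same layer (illustrative measure)."""
    a, b = weights_a.ravel(), weights_b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def assign_layers(task_weights: dict, threshold: float = 0.9) -> dict:
    """Assign each layer to the analog core (shared) or a digital unit (task-specific).

    task_weights: {layer_name: {task_name: np.ndarray}} -- hypothetical layout.
    A layer is considered shareable if every pair of tasks agrees on its weights
    (pairwise similarity >= threshold); shareable layers are mapped to the
    weight-stationary analog core, the rest to digital processing units.
    """
    mapping = {}
    for layer, per_task in task_weights.items():
        tasks = list(per_task)
        sims = [
            layer_similarity(per_task[t1], per_task[t2])
            for i, t1 in enumerate(tasks)
            for t2 in tasks[i + 1:]
        ]
        shared = bool(sims) and min(sims) >= threshold
        mapping[layer] = "analog_core" if shared else "digital_unit"
    return mapping


# Toy example: two tasks share an early layer but diverge in their heads.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
task_weights = {
    "conv1": {"detection": w, "classification": w + 0.01 * rng.standard_normal((64, 64))},
    "head": {"detection": rng.standard_normal((64, 10)), "classification": rng.standard_normal((64, 10))},
}
print(assign_layers(task_weights))  # e.g. {'conv1': 'analog_core', 'head': 'digital_unit'}
```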
Submission Number: 18