Abstract: In time-series machine learning, the difficulty of obtaining labeled data has spurred interest in using unlabeled data for model training. Current research focuses primarily on deep multi-task learning, emphasizing the hard parameter-sharing approach. However, when correlations between tasks are weak, indiscriminate parameter sharing can cause learning interference. We therefore introduce DPS, a novel framework that separates training into dependency-learning and parameter-sharing phases, allowing the model to manage knowledge sharing between tasks dynamically. Additionally, we design a loss function that aligns neuron functionalities across tasks, further mitigating learning interference. Experiments on real-world datasets demonstrate the superiority of DPS over baselines. Moreover, our results shed light on the effects of the two training phases, validating that DPS consistently maintains a degree of learning stability.
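The abstract mentions a loss that aligns neuron functionalities across tasks but does not define it here. The following is a minimal sketch of one plausible form of such an alignment regularizer, assuming the task branches expose comparable hidden activations; the names `alignment_loss`, `hidden_a`, `hidden_b`, and `lam` are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: one possible cross-task alignment penalty that
# encourages corresponding neurons in two task branches to respond similarly
# to the same inputs. This is an assumption, not the paper's actual loss.
import torch
import torch.nn.functional as F


def alignment_loss(hidden_a: torch.Tensor, hidden_b: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between per-neuron activations of two task branches.

    hidden_a, hidden_b: activations of shape (batch, num_neurons) taken from
    the layers whose functionality should stay consistent across tasks.
    """
    # Cosine similarity per neuron (along the batch dimension); the loss is
    # zero when every neuron responds identically in both branches.
    sim = F.cosine_similarity(hidden_a, hidden_b, dim=0)
    return (1.0 - sim).mean()


# Hypothetical use inside a multi-task training step:
# total_loss = task_loss_a + task_loss_b + lam * alignment_loss(h_a, h_b)
```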