Learning Soft Sparse Shapes for Efficient Time-Series Classification

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
TL;DR: We propose a soft sparse shapes model for efficient time series classification, using soft shapes as inputs to facilitate shapelet learning.
Abstract: Shapelets are discriminative subsequences (or shapes) with high interpretability in time series classification. Due to the time-intensive nature of shapelet discovery, existing shapelet-based methods mainly focus on selecting discriminative shapes while discarding others to achieve candidate subsequence sparsification. However, this approach may exclude beneficial shapes and overlook the varying contributions of shapelets to classification performance. To this end, we propose a Soft sparse Shapes (SoftShape) model for efficient time series classification. Our approach mainly introduces soft shape sparsification and soft shape learning blocks. The former transforms shapes into soft representations based on classification contribution scores, merging lower-scored ones into a single shape to retain and differentiate all subsequence information. The latter facilitates intra- and inter-shape temporal pattern learning, improving model efficiency by using sparsified soft shapes as inputs. Specifically, we employ a learnable router to activate a subset of class-specific expert networks for intra-shape pattern learning. Meanwhile, a shared expert network learns inter-shape patterns by converting sparsified shapes into sequences. Extensive experiments show that SoftShape outperforms state-of-the-art methods and produces interpretable results.
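To make the sparsification step concrete, here is a minimal numpy sketch of the idea described above: shapes are softly weighted by their classification contribution scores, the top-k are kept as individual soft shapes, and all lower-scored shapes are merged into a single weighted-average shape instead of being discarded. The function name, the softmax scoring, and the merge-by-average are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_shape_sparsify(shapes, scores, k):
    """Hypothetical sketch of soft shape sparsification: keep the top-k
    scored shapes (weighted by softmax contribution scores) and merge the
    remaining low-scored shapes into one score-weighted average shape,
    so no subsequence information is discarded outright."""
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax contribution scores
    order = np.argsort(scores)[::-1]
    top, rest = order[:k], order[k:]
    soft_top = [weights[i] * shapes[i] for i in top]  # soft (weighted) shapes
    if len(rest):
        w = weights[rest] / weights[rest].sum()
        merged = np.tensordot(w, shapes[rest], axes=1)  # single merged shape
        return np.stack(soft_top + [merged])
    return np.stack(soft_top)

# usage: six candidate shapes of length 4, keep the two highest-scored
shapes = np.random.randn(6, 4)
scores = np.array([0.9, 0.1, 2.0, -0.5, 0.3, 1.2])
out = soft_shape_sparsify(shapes, scores, k=2)
print(out.shape)  # (3, 4): two soft shapes plus one merged shape
```

The merged shape preserves a trace of every discarded candidate, which is the key difference from hard shapelet selection.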
Lay Summary: Shapelets are discriminative subsequences with high interpretability in time series classification. Traditional shapelet-based methods mainly focus on selecting discriminative shapes while discarding non-essential ones to accelerate shapelet discovery. In this paper, we introduce SoftShape, a soft shapelet learning framework for time series classification. SoftShape consists of two main components: first, a soft shape sparsification mechanism that uses attention-based soft weighting to retain key shapelets rather than discarding the rest outright; second, a dual-pattern learning strategy that combines a mixture-of-experts architecture for intra-shape learning with sequence-aware inter-shape modeling, capturing both local and global temporal patterns. Experiments on 128 UCR time series datasets demonstrate that SoftShape outperforms existing state-of-the-art methods and yields interpretable results.
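The mixture-of-experts component mentioned above can be sketched as follows: a learnable router scores the experts for each soft shape embedding and activates only the top-k of them, mixing their outputs with renormalized gate weights. The linear router, tanh experts, and top-2 gating here are illustrative assumptions standing in for the paper's class-specific expert networks.

```python
import numpy as np

def route_to_experts(shape_emb, router_w, experts, top_k=2):
    """Hypothetical sketch of sparse expert routing for intra-shape
    pattern learning: a learnable linear router scores each expert,
    only the top-k experts are activated, and their outputs are
    combined using renormalized router probabilities."""
    logits = shape_emb @ router_w                 # one logit per expert
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    active = np.argsort(probs)[::-1][:top_k]      # activate a subset of experts
    gate = probs[active] / probs[active].sum()    # renormalized gate weights
    # mix only the activated experts' outputs
    return sum(g * experts[i](shape_emb) for g, i in zip(gate, active))

# usage: four toy linear experts over 8-dim shape embeddings
rng = np.random.default_rng(0)
d, n_exp = 8, 4
expert_ws = [rng.standard_normal((d, d)) for _ in range(n_exp)]
experts = [lambda x, W=W: np.tanh(x @ W) for W in expert_ws]
router_w = rng.standard_normal((d, n_exp))
out = route_to_experts(rng.standard_normal(d), router_w, experts, top_k=2)
print(out.shape)  # (8,)
```

Activating only a subset of experts per shape is what keeps the intra-shape learning efficient relative to running every expert on every input.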
Link To Code: https://github.com/qianlima-lab/SoftShape
Primary Area: Deep Learning->Sequential Models, Time series
Keywords: deep learning, time series classification, shapelets, interpretability
Submission Number: 6432