Presentation Attendance: Yes, we will present in-person
Keywords: Foundation Model, Time Series, Model Compression
TL;DR: TinyCN distills a transformer foundation model into a lightweight CNN, achieving better accuracy than the state-of-the-art ensemble on all 128 UCR datasets while being 10x smaller than the ensemble, over 40x smaller than the teacher, and far more efficient.
Abstract: Time series foundation models provide strong generalization but remain computationally expensive to deploy in resource-constrained settings. We introduce TinyCN, a compact convolutional model trained via knowledge distillation from a transformer-based foundation model (Mantis-8M). Our training procedure transitions from representation alignment to task-specific optimization, enabling effective transfer of foundation representations into a lightweight CNN. Across all 128 UCR datasets, TinyCN achieves statistically significant improvements over Hybrid InceptionTime (HIT), the state-of-the-art ensemble, while being over $40\times$ smaller than Mantis and $10\times$ smaller than HIT. These results demonstrate that foundation representations can be effectively compressed into simple CNNs, yielding superior accuracy and efficiency for time series classification.
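The abstract describes a two-phase distillation procedure: first aligning the student's representations with the frozen teacher's, then switching to task-specific training on labels. The sketch below illustrates that idea in PyTorch under stated assumptions; the student architecture, loss choices, and hyperparameters are hypothetical stand-ins, not the paper's actual design, and a dummy module stands in for the Mantis-8M teacher.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyCNNStudent(nn.Module):
    """Small 1-D CNN student (illustrative only, not the paper's architecture)."""

    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Projection head maps student features into the teacher's embedding space.
        self.project = nn.Linear(64, embed_dim)
        self.classify = nn.Linear(64, num_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.project(h), self.classify(h)


def distill_step(student, teacher, x, y, optimizer, phase: str) -> float:
    """One training step; `phase` switches between the two training stages."""
    with torch.no_grad():
        t_emb = teacher(x)  # frozen foundation-model representations
    s_emb, logits = student(x)
    if phase == "align":
        # Phase 1: representation alignment (MSE is one plausible choice).
        loss = F.mse_loss(s_emb, t_emb)
    else:
        # Phase 2: task-specific optimization on ground-truth labels.
        loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Frozen dummy teacher stands in for Mantis-8M; shapes are illustrative.
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(128, 256)).eval()
    student = TinyCNNStudent(embed_dim=256, num_classes=10)
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(8, 1, 128)          # (batch, channels, series length)
    y = torch.randint(0, 10, (8,))
    print(distill_step(student, teacher, x, y, opt, phase="align"))
    print(distill_step(student, teacher, x, y, opt, phase="task"))
```

How the transition between phases is scheduled (hard switch, gradual interpolation, or a weighted sum of both losses) is not specified in the abstract; the hard `phase` switch here is one simple reading.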
Track: Research Track (max 4 pages)
Submission Number: 91