A Self-Supervised Pre-Training Model for Time Series Classification Based on Data Pre-Processing

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: self-supervised learning, FIR filter, pre-training, contrastive learning, data pre-processing
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Time series data are now widely used in industry, and substantial research progress has been made, including on pre-training models. By pre-training a model on a large amount of data from a related domain and then fine-tuning it on a small number of labeled samples, a high-accuracy model can be obtained, which is of great value in industrial settings. However, current models suffer from two main problems. First, most rely on supervised classification; although accurate, this is impractical for the many real-world datasets with few labeled samples. Second, most recent work has focused on contrastive learning, which places stronger demands on the form and regularity of the data and therefore does not address these issues. To solve these problems, we propose a self-supervised pre-processing classification model for time series classification. First, the data pre-processing strategy is chosen according to the inherent attributes of the time series. Second, we propose a sorting-similarity method for contrastive learning: a coarse similarity is used in the pre-training stage, while our sorting loss function is used in the fine-tuning stage to improve overall performance. Finally, extensive experiments on $8$ real-world datasets from various fields verify the effectiveness and efficiency of the proposed method.
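Since the implementation details are not given on this page, the following is a minimal sketch of the kind of FIR-filter pre-processing named in the keywords, assuming a SciPy-based pipeline. The function name `preprocess_series` and the filter parameters (`numtaps`, `cutoff`) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def preprocess_series(x: np.ndarray, numtaps: int = 31, cutoff: float = 0.1) -> np.ndarray:
    """Smooth a 1-D time series with a low-pass FIR filter before
    self-supervised pre-training.

    `numtaps` and `cutoff` (normalized to the Nyquist frequency) are
    illustrative defaults, not values from the paper.
    """
    taps = firwin(numtaps, cutoff)   # design a linear-phase low-pass FIR filter
    return lfilter(taps, 1.0, x)     # apply the filter along the time axis

# Toy usage: attenuate high-frequency noise on a synthetic signal.
t = np.linspace(0.0, 1.0, 500)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
smoothed = preprocess_series(noisy)
```

In practice, the paper's first step of judging the attributes of each series would presumably select filter parameters (or skip filtering) per dataset; the fixed defaults above stand in for that decision.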
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2559