A²Tformer: Addressing Temporal Bias and Nonstationarity in Transformer-Based IoT Time Series Classification
Abstract: Sensor devices continuously generate large volumes of time series data in Internet of Things (IoT) environments. These voluminous streams require models that scale to massive data while discerning the intricate, multiscale patterns embedded in diverse temporal sequences. Transformer models have been widely used for IoT time series analysis owing to their strong feature representation and global modeling capability. However, existing architectures struggle to explicitly capture temporal structure and to adapt to nonstationary data, which limits classification performance. To address these issues, we propose a novel attention mechanism based on the autocorrelation function, named A²T, which leverages lag characteristics to unify temporal modeling and feature extraction. We further introduce a parameterized wavelet transform module that learns scale and bandwidth end-to-end and uses an attention gate to fuse multiresolution coefficients. Building on these components, we design a dual-channel time–frequency feature extraction module to improve adaptability to distribution shifts, and integrate them into A²Tformer for IoT time series classification. Experimental results on the UCR archive show that A²Tformer achieves an average accuracy of 84.49% and ranks first on 26 of the evaluated datasets, outperforming state-of-the-art Transformer-based models.
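The abstract does not give the internals of the A²T mechanism, but autocorrelation-based attention is commonly realized by computing the series' autocorrelation via the FFT (Wiener–Khinchin theorem), selecting the top-k lags, and aggregating lag-shifted copies of the series weighted by their correlation scores. The sketch below illustrates that general idea on a single 1-D series; the function name, `top_k` parameter, and softmax weighting are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def autocorrelation_attention(x, top_k=3):
    """Hypothetical sketch of lag-based (autocorrelation) attention.

    Computes the autocorrelation of a 1-D series via the power
    spectrum, keeps the top_k nonzero lags, and returns the series
    rolled by those lags, weighted by a softmax over their
    autocorrelation scores. Illustrative only, not the paper's A2T.
    """
    L = len(x)
    # Autocorrelation via the power spectrum (Wiener-Khinchin theorem);
    # zero-pad to 2L to avoid circular wrap-around.
    f = np.fft.rfft(x, n=2 * L)
    acf = np.fft.irfft(f * np.conj(f))[:L]
    acf = acf / acf[0]  # normalize so the lag-0 correlation is 1
    # Select the most correlated nonzero lags
    lags = np.argsort(acf[1:])[-top_k:] + 1
    scores = np.exp(acf[lags])
    weights = scores / scores.sum()  # softmax over selected lags
    # Aggregate the series shifted by each selected lag
    return sum(w * np.roll(x, -lag) for w, lag in zip(weights, lags))
```

For a signal with a strong period, the selected lags cluster at multiples of that period, so the aggregation emphasizes temporally aligned values rather than pointwise similarity as in dot-product attention.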
External IDs: dblp:journals/iotj/LuoLCLSZL25