TL;DR: We propose Channel Normalization (CN), a simple yet effective normalization strategy that enhances channel-identifiability of a time series model by employing distinct affine transformation parameters for each channel.
Abstract: Channel identifiability (CID) refers to the ability to distinguish among individual channels in time series (TS) modeling. Without CID, a model produces identical outputs for channels with identical inputs, disregarding channel-specific characteristics. In this paper, we highlight the importance of CID and propose Channel Normalization (CN), a simple yet effective normalization strategy that enhances CID by assigning distinct affine transformation parameters to each channel. We further extend CN in two ways: 1) Adaptive CN (ACN) dynamically adjusts parameters based on the input TS, improving adaptability in TS models, and 2) Prototypical CN (PCN) introduces a set of learnable prototypes instead of per-channel parameters, enabling applicability to datasets with an unknown or varying number of channels and facilitating use in TS foundation models. We demonstrate the effectiveness of CN and its variants by applying them to various TS models, achieving significant performance gains for both non-CID and CID models. In addition, we analyze the success of our approach from an information theory perspective. Code is available at [https://github.com/seunghan96/CN](https://github.com/seunghan96/CN).
Lay Summary: When analyzing time series (TS) data — like sensor readings or financial trends — many TS models fail to tell individual input channels apart. This can lead to treating very different signals as if they were the same, hurting the model’s performance. Our research tackles this problem by helping AI models better distinguish between input channels, a property we call channel identifiability.
We introduce a simple method called Channel Normalization (CN) that enables TS models to distinguish between different channels using channel-specific parameters. We also develop two extensions: one that adapts to different inputs, and another that can handle situations where the number of input channels changes. These techniques make models more flexible and accurate across a wide range of real-world TS tasks. Our findings highlight how improving a model’s awareness of its inputs can lead to better decisions.
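To make the idea concrete, here is a minimal NumPy sketch of the core CN mechanism described above: each channel is normalized and then scaled and shifted by its own learnable affine parameters, so identical inputs on different channels yield distinct outputs. The function name, shapes, and parameter names below are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def channel_norm(x, gamma, beta, eps=1e-5):
    """Illustrative sketch of Channel Normalization (CN).

    x:     array of shape (channels, timesteps)
    gamma: per-channel scale, shape (channels,)
    beta:  per-channel shift, shape (channels,)

    Shapes and names are assumptions for illustration only.
    """
    mean = x.mean(axis=-1, keepdims=True)   # per-channel mean over time
    std = x.std(axis=-1, keepdims=True)     # per-channel std over time
    x_hat = (x - mean) / (std + eps)        # normalize each channel
    # Channel-specific affine parameters make channels identifiable
    return gamma[:, None] * x_hat + beta[:, None]

# Two channels fed identical inputs now produce distinct outputs:
x = np.ones((2, 4))                 # identical signals on both channels
gamma = np.array([1.0, 2.0])        # distinct per-channel scales
beta = np.array([0.0, 1.0])         # distinct per-channel shifts
out = channel_norm(x, gamma, beta)  # channel 0 -> 0.0, channel 1 -> 1.0
```

Because the affine parameters differ per channel, the two identical input signals map to different outputs, which is exactly the channel-identifiability property CN is designed to provide.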
Link To Code: https://github.com/seunghan96/CN
Primary Area: Deep Learning->Sequential Models, Time series
Keywords: Time Series, Channel Normalization, Time Series Forecasting
Submission Number: 2091