TL;DR: Generalizing time encoding in diverse domains through learnable transformation functions
Abstract: Effectively modeling time information and incorporating it into applications or models involving chronologically occurring events is crucial. Real-world scenarios often involve diverse and complex time patterns, which pose significant challenges for time encoding methods. While previous methods focus on capturing time patterns, many rely on specific inductive biases, such as using trigonometric functions to model periodicity. This narrow focus on single-pattern modeling makes them less effective in handling the diversity and complexity of real-world time patterns. In this paper, we investigate how to improve existing, commonly used time encoding methods and introduce **Learnable Transformation-based Generalized Time Encoding (LeTE)**. We propose using deep function learning techniques to parameterize nonlinear transformations in time encoding, making them learnable and capable of modeling generalized time patterns, including diverse and complex temporal dynamics. By enabling learnable transformations, LeTE encompasses previous methods as specific cases and allows seamless integration into a wide range of tasks. Through extensive experiments across diverse domains, we demonstrate the versatility and effectiveness of LeTE.
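To illustrate the relationship the abstract describes, here is a minimal pure-Python sketch (not the authors' implementation; see the linked repository for that). The names `freqs`, `phases`, and `transform` are illustrative: classical trigonometric time encodings compute `cos(w_i * t + b_i)` with learnable frequencies and phases, and the generalized form simply allows the nonlinearity itself to be a learnable function, so the trigonometric encoding falls out as the special case `transform = cos`.

```python
import math

def trig_time_encoding(t, freqs, phases):
    # Classical trigonometric time encoding: phi_i(t) = cos(w_i * t + b_i).
    # The inductive bias toward periodicity comes from the fixed cosine.
    return [math.cos(w * t + b) for w, b in zip(freqs, phases)]

def generalized_time_encoding(t, freqs, phases, transform):
    # Generalized form: a (in LeTE, learnable) nonlinearity replaces the
    # fixed cosine. In the paper this transform is parameterized with deep
    # function learning and trained end-to-end; here it is just a callable.
    return [transform(w * t + b) for w, b in zip(freqs, phases)]

# With transform=math.cos, the generalized encoding reduces exactly to the
# trigonometric one, showing how fixed-function encodings are special cases.
```

In a real model the `transform` would carry its own trainable parameters (e.g. a small learned univariate function per dimension), letting each dimension adapt to periodic, non-periodic, or mixed temporal patterns rather than being fixed to a cosine.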
Lay Summary: Many AI systems need to understand when things happen—whether it’s predicting user activity, detecting fraud, or modeling social interactions. However, existing methods for representing time often assume simple, repeating patterns such as daily or weekly cycles. These assumptions and inductive biases limit their ability to capture the complex, irregular, and mixed temporal patterns commonly found in real-world data.
Our research introduces a new time encoding method called **LeTE (Learnable Transformation-based Generalized Time Encoding)**. Instead of relying on hand-crafted assumptions or injecting strong inductive biases—such as those imposed by fixed trigonometric functions—LeTE offers a fully learnable framework that encodes time directly from data. It uses deep function learning to automatically discover flexible and complex time patterns.
This allows the time encoding to adapt to periodic, non-periodic, and mixed time patterns, making it not only more versatile but also inherently interpretable, as LeTE encodes time through explicit, structured, and learnable functions that can be directly examined. LeTE unifies and generalizes previous time encoding techniques, and it can be seamlessly integrated into a wide range of machine learning models.
In experiments across several domains, our method consistently improves model performance, demonstrating that learning time embeddings directly from data improves both accuracy and robustness in downstream predictions.
Link To Code: https://github.com/chenxi1228/LeTE
Primary Area: Deep Learning->Everything Else
Keywords: Time Encoding, Dynamic Graph, Time Series, Deep Function Learning
Submission Number: 103