A Decoupled Learning Framework for Neural Marked Temporal Point Process

ICLR 2025 Conference Submission 12957 Authors

28 Sept 2024 (modified: 21 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: temporal point process, interpretability, event sequence modeling
TL;DR: Addresses history vector bias in neural marked temporal point processes by proposing a decoupled learning framework with individual EEHD architectures per event type. Enhances training speed, prediction performance, and model interpretability.
Abstract: The standard neural marked temporal point process employs the Embedding-Encoder-History vector-Decoder (EEHD) architecture, wherein the history vector encapsulates the cumulative effects of past events. However, due to the inherent imbalance of event categories in real-world scenarios, the history vector tends to favor more frequent events, inadvertently overlooking less common yet potentially significant ones, thereby compromising the model's overall performance. To tackle this issue, we introduce a novel decoupled learning framework for neural marked temporal point processes, where each event type is modeled independently to capture its unique characteristics, allowing for a more nuanced and equitable treatment of all event types. Each event type has its own complete EEHD architecture, with scaled-down parameters made possible by the decoupling of temporal dynamics. This decoupled design enables asynchronous parallel training, and the embeddings can reflect the dependencies between event types. Our versatile framework, accommodating various encoder and decoder architectures, demonstrates state-of-the-art performance across diverse datasets, outperforming benchmarks by a significant margin and increasing training speed by up to 12 times. Additionally, it offers interpretability, revealing which event types have similar influences on a particular event type, fostering a deeper understanding of temporal dynamics.
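To make the decoupled design concrete, below is a minimal, illustrative sketch of one EEHD module per event type. This is not the authors' implementation: the class name `TypeEEHD`, the RNN-style encoder, the softplus decoder, and all parameter names are assumptions chosen to show the structure (each type owns its own embedding table, encoder, history vector, and decoder, so the modules could be trained asynchronously in parallel).

```python
import math
import random

class TypeEEHD:
    """One Embedding-Encoder-History vector-Decoder module for a single
    event type. Illustrative only: weights are random and untrained."""

    def __init__(self, num_types, dim, seed):
        rnd = random.Random(seed)
        def mat(rows, cols):
            return [[rnd.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]
        self.emb = mat(num_types, dim)  # embedding table over all event types
        self.W = mat(dim, dim)          # encoder: recurrence on the history vector
        self.U = mat(dim, dim)          # encoder: transform of the incoming embedding
        self.w = [rnd.gauss(0, 0.1) for _ in range(dim)]  # decoder weights
        self.dim = dim

    def history(self, event_types):
        # Simple RNN-style encoder: fold past event marks into a history vector.
        h = [0.0] * self.dim
        for k in event_types:
            x = self.emb[k]
            h = [math.tanh(sum(self.W[i][j] * h[j] for j in range(self.dim))
                           + sum(self.U[i][j] * x[j] for j in range(self.dim)))
                 for i in range(self.dim)]
        return h

    def intensity(self, event_types):
        # Decoder: softplus keeps the conditional intensity strictly positive.
        s = sum(wi * hi for wi, hi in zip(self.w, self.history(event_types)))
        return math.log1p(math.exp(s))

# Decoupled framework: one independent EEHD module per event type.
NUM_TYPES, DIM = 3, 4
modules = [TypeEEHD(NUM_TYPES, DIM, seed=k) for k in range(NUM_TYPES)]
past = [0, 2, 1, 0]  # an observed sequence of event-type marks
intensities = [m.intensity(past) for m in modules]
```

Because no parameters are shared across modules, an imbalanced mark distribution cannot bias one shared history vector toward frequent types, and each module's gradient updates can proceed on its own schedule.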
Primary Area: learning on time series and dynamical systems
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12957