Hawkes process revisited: balancing interpretability and flexibility with contextualized event embeddings and a neural impact kernel

27 Sept 2024 (modified: 18 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Event sequence, Hawkes Process, Interpretability, Embedding Space
Abstract: The Hawkes process (HP) is commonly used to model event sequences with self-reinforcing dynamics, including electronic health records, stock trades, and social media interactions. Traditional HPs capture self-reinforcement via parametric impact functions that can be inspected to understand how each event modulates the intensity of others. Neural network-based HPs offer greater flexibility, resulting in improved fit and prediction performance, but at the cost of interpretability, which can be critical in medicine and other high-stakes settings. In this work, we aim to understand and improve upon this tradeoff. We propose a novel HP formulation in which impact functions are modeled by a flexible impact kernel, instantiated as a neural network and defined in event embedding space, which allows us to model large-scale event sequences with many event types. This approach is more flexible than traditional HPs, because we do not assume a particular parametric form for the impact functions, yet more interpretable than other neural network approaches, because self-reinforcing dynamics are still entirely captured by the impact kernel, which can be inspected. If needed, our approach allows us to trade interpretability for flexibility by contextualizing the event embeddings with transformer encoder layers. Results show that our method accurately recovers impact functions in simulations and achieves competitive performance on real-world datasets even without transformer layers. This suggests that our flexible impact kernel is often sufficient to capture self-reinforcing dynamics effectively, implying that interpretability can be maintained without loss of performance.
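For concreteness, a minimal sketch of the model class the abstract describes; the notation ($\mu_k$, $\phi$, $e_k$, $f_\theta$) is ours and is not taken from the paper. In a classical HP, the intensity of event type $k$ at time $t$ given past events $(t_i, k_i)$ is

$$\lambda_k(t) = \mu_k + \sum_{i:\, t_i < t} \phi_{k_i \to k}(t - t_i),$$

where the impact functions $\phi_{k_i \to k}$ have a fixed parametric form (e.g., exponential decay). The formulation described above would instead evaluate a single shared neural kernel on learned event embeddings,

$$\phi_{k_i \to k}(\Delta t) = f_\theta\big(e_{k_i},\, e_k,\, \Delta t\big),$$

so that pairwise self-reinforcing dynamics remain concentrated in one inspectable object, $f_\theta$, while the embedding space keeps the parameter count manageable for large numbers of event types. Optionally, the embeddings $e_{k_i}$ can be contextualized by transformer encoder layers, trading some of this interpretability for flexibility.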
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8763