In-Context Learning of Temporal Point Processes with Foundation Inference Models

ICLR 2026 Conference Submission 17418 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: temporal point processes, zero-shot inference, in-context learning, zero-shot parameter estimation, inference of point processes, foundation models, foundation inference models
TL;DR: We introduce a framework for in-context (zero-shot) estimation of the intensity functions underlying temporal point processes from empirical data.
Abstract: Modeling event sequences with multiple event types via marked temporal point processes (MTPPs) provides a principled way to uncover governing rules and predict future events. Current neural network approaches to MTPP inference rely on training a separate, specialized model for each target system. We pursue a radically different approach: drawing on amortized inference and in-context learning, we pretrain a deep neural network to infer, *in-context*, the conditional intensity functions of event histories from a context defined by sets of event sequences. Pretraining is performed on a large synthetic dataset of MTPPs sampled from a broad distribution over Hawkes processes. Once pretrained, our Foundation Inference Model for Point Processes (FIM-PP) can estimate MTPPs from real-world data without any additional training, or be rapidly finetuned to target systems. Experiments show that this amortized approach matches the performance of specialized models on next-event prediction across common benchmark datasets. We provide the pretrained model weights with the supplementary material.
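
For orientation: an MTPP is specified by one conditional intensity function per event type. As a concrete, illustrative instance of the process family named above, a multivariate Hawkes process with exponential kernels has, for mark $k$,

$$\lambda_k(t \mid \mathcal{H}_t) = \mu_k + \sum_{(t_i,\, m_i) \in \mathcal{H}_t} \alpha_{k, m_i}\, \beta\, e^{-\beta (t - t_i)},$$

where $\mathcal{H}_t$ is the event history before time $t$, $\mu_k$ is the baseline rate of mark $k$, $\alpha_{k,m}$ controls how strongly an event of mark $m$ excites mark $k$, and $\beta$ sets the decay of the excitation; the submission's exact parameterization of its Hawkes distribution may differ.

Synthetic sequences of this kind are commonly drawn with Ogata's thinning algorithm. The sketch below is a minimal, self-contained illustration under the exponential-kernel assumption above, not the submission's data-generation code; the function names and parameter values are hypothetical.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of every mark at time t, given past events.

    events: list of (time, mark) pairs
    mu:     (K,) baseline rates
    alpha:  (K, K) excitation matrix; alpha[k, m] scales the jump in the
            intensity of mark k caused by an event of mark m
    beta:   decay rate of the exponential kernel
    """
    lam = mu.copy()
    for t_i, m_i in events:
        if t_i < t:
            lam += alpha[:, m_i] * beta * np.exp(-beta * (t - t_i))
    return lam

def sample_hawkes(mu, alpha, beta, T, rng):
    """Ogata's thinning algorithm for a multivariate Hawkes process on [0, T]."""
    events, t = [], 0.0
    while True:
        # With exponential kernels the total intensity only decays between
        # events, so its value just after the current time upper-bounds it
        # until the next event; the tiny offset includes the latest jump.
        lam_bar = hawkes_intensity(t + 1e-10, events, mu, alpha, beta).sum()
        t += rng.exponential(1.0 / lam_bar)       # candidate inter-event time
        if t >= T:
            return events
        lam = hawkes_intensity(t, events, mu, alpha, beta)
        if rng.uniform() * lam_bar < lam.sum():   # thinning: accept candidate
            mark = rng.choice(len(mu), p=lam / lam.sum())
            events.append((t, mark))

rng = np.random.default_rng(0)
mu = np.array([0.2, 0.1])                         # baseline rates
alpha = np.array([[0.3, 0.1],                     # spectral radius < 1
                  [0.2, 0.2]])                    # keeps the process stable
sequence = sample_hawkes(mu, alpha, beta=1.0, T=50.0, rng=rng)
```

A pretraining corpus in the spirit of the abstract would repeat this with baseline rates, excitation matrices, and kernels drawn from a broad prior, yielding many sets of event sequences paired with their ground-truth intensities for the network to amortize over.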
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 17418