Deep learning for continuous-time stochastic control with jumps

Published: 18 Sept 2025, Last Modified: 29 Oct 2025. NeurIPS 2025 poster. License: CC BY 4.0
Keywords: Continuous-time stochastic control; Jump diffusion; Actor-critic; Physics-informed neural networks; Hamilton-Jacobi-Bellman equation; Deep learning; Bellman updating
TL;DR: This paper introduces an iterative neural network-based approach to solve finite-horizon continuous-time stochastic control problems with jumps, assuming the underlying dynamics are fully known.
Abstract: In this paper, we introduce a model-based deep-learning approach to solve finite-horizon continuous-time stochastic control problems with jumps. We iteratively train two neural networks: one to represent the optimal policy and the other to approximate the value function. Leveraging a continuous-time version of the dynamic programming principle, we derive two different training objectives based on the Hamilton--Jacobi--Bellman equation, ensuring that the networks capture the underlying stochastic dynamics. Empirical evaluations on different problems illustrate the accuracy and scalability of our approach, demonstrating its effectiveness in solving complex, high-dimensional stochastic control tasks.
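The abstract describes training a value network against an HJB-based objective while a policy network supplies the control, in the spirit of physics-informed neural networks. The following is a minimal illustrative sketch of such a residual loss on a hypothetical 1-D linear-quadratic jump-diffusion test problem (the dynamics, cost, quadratic value ansatz, and linear-feedback policy ansatz here are all assumptions for illustration, not the paper's architecture; the paper uses neural networks where this sketch uses parametric ansatzes, and derivatives here come from finite differences rather than automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem (illustrative only): minimize running cost x^2 + a^2
# plus a terminal cost, for dX_t = a_t dt + sigma dW_t + dJ_t, where J is a
# compound Poisson process with rate lam and N(0, jump_std^2) jump sizes.
sigma, lam, jump_std, T = 0.3, 0.5, 0.2, 1.0

def value(theta, t, x):
    """Quadratic value ansatz V(t, x) = p(t) x^2 + q(t), a stand-in for the
    value (critic) network; theta holds its trainable coefficients."""
    p = theta[0] + theta[1] * (T - t)
    q = theta[2] * (T - t)
    return p * x**2 + q

def policy(phi, t, x):
    """Linear feedback ansatz a = phi * x, a stand-in for the policy network."""
    return phi * x

def hjb_residual_loss(theta, phi, n_pts=256, n_jump=64, h=1e-4):
    """Mean squared HJB residual at random (t, x) collocation points, evaluated
    at the current policy. Time/space derivatives are central finite differences;
    the nonlocal jump term is estimated by Monte Carlo over jump sizes."""
    t = rng.uniform(0.0, T, n_pts)
    x = rng.normal(0.0, 1.0, n_pts)
    a = policy(phi, t, x)
    V = value(theta, t, x)
    V_t = (value(theta, t + h, x) - value(theta, t - h, x)) / (2 * h)
    V_x = (value(theta, t, x + h) - value(theta, t, x - h)) / (2 * h)
    V_xx = (value(theta, t, x + h) - 2 * V + value(theta, t, x - h)) / h**2
    jumps = rng.normal(0.0, jump_std, (n_jump, 1))
    jump_term = lam * (value(theta, t, x + jumps).mean(axis=0) - V)
    residual = V_t + a * V_x + 0.5 * sigma**2 * V_xx + jump_term + x**2 + a**2
    return float(np.mean(residual**2))

loss = hjb_residual_loss(np.array([1.0, 0.5, 0.1]), -0.5)
print(f"HJB residual loss: {loss:.4f}")
```

In an actor-critic scheme of the kind the abstract outlines, a loss like this would drive the critic update (together with a terminal-condition penalty enforcing V(T, x) = g(x)), while the policy parameters are updated separately to minimize the Hamiltonian.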
Primary Area: Optimization (e.g., convex and non-convex, stochastic, robust)
Submission Number: 9093