Abstract: Backpropagation through (neural) SDE solvers is traditionally approached in two ways: discretise-then-optimise, which offers accurate gradients but incurs prohibitive memory costs; and optimise-then-discretise, which achieves constant memory cost by solving an auxiliary backward SDE, but suffers from slower evaluation and gradient approximation errors. Algebraically reversible solvers promise both memory efficiency and gradient accuracy, yet existing methods such as Reversible Heun are often unstable for complex models and large step sizes, and their non-standard auxiliary-state structure obstructs extension to manifold-valued SDEs. Building on the recently introduced Explicit and Effectively Symmetric (EES) schemes---a class of stable, near-reversible explicit Runge--Kutta methods---we address both limitations. We extend EES schemes from ODEs to SDEs and show that they admit an efficient Williamson 2N-storage realisation. Bazavov's commutator-free construction then lifts these schemes to arbitrary Lie groups and homogeneous spaces; to our knowledge, this yields the first explicit near-reversible integrator in this setting, unlocking the reversible adjoint approach for manifold-valued problems. On Euclidean neural SDE benchmarks, our schemes improve stability under stiff drift and large steps compared with other reversible solvers, while the commutator-free lift reduces memory by up to an order of magnitude on manifold-valued problems relative to existing baselines. These results establish effectively symmetric integration as a unified, geometry-aware foundation for memory-efficient and stable training of neural SDEs.