Does “Do Differentiable Simulators Give Better Policy Gradients?” Give Better Policy Gradients?

ICLR 2026 Conference Submission16460 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Differentiable simulation, Reinforcement learning, Policy gradient, Model-based reinforcement learning, Monte Carlo gradient estimation, Reparameterization gradient, Likelihood ratio gradient, Score function gradient estimator, Inverse variance weighting, Randomized smoothing
TL;DR: Gradient estimators for policy learning with differentiable simulators that handle discontinuities robustly and remain stable in practice with simple variance control.
Abstract: In policy gradient reinforcement learning, access to a differentiable model enables first-order gradient estimation, which accelerates learning relative to relying solely on derivative-free zeroth-order estimators. Discontinuous dynamics, however, bias first-order estimators and undermine their effectiveness. Prior work addressed this bias by constructing a confidence interval around the zeroth-order REINFORCE gradient estimator and using the interval bounds to detect discontinuities. The REINFORCE estimator is notoriously noisy, though, and we find that this method requires task-specific hyperparameter tuning and suffers from low sample efficiency. This paper asks whether such bias is the primary obstacle and what minimal fixes suffice. First, we re-examine standard discontinuous settings from prior work and introduce DDCG, a lightweight test that switches estimators in nonsmooth regions; with a single hyperparameter, DDCG achieves robust performance and remains reliable at small sample sizes. Second, on differentiable robotics control tasks, we present IVW-H, a per-step inverse-variance weighting scheme that stabilizes variance without explicit discontinuity detection and yields strong results. Together, these findings indicate that while estimator switching improves robustness in controlled studies, careful variance control often dominates in practical deployments.
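The per-step inverse-variance weighting idea behind IVW-H can be illustrated with a minimal sketch: given batches of zeroth-order (REINFORCE) and first-order (reparameterization) gradient samples, weight each estimator's mean by the inverse of its empirical variance. The function name `ivw_combine` and all details below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def ivw_combine(g0_samples, g1_samples, eps=1e-8):
    """Inverse-variance weighting of two gradient estimators (sketch).

    g0_samples, g1_samples: arrays of shape (num_samples, num_params)
    holding per-sample zeroth- and first-order gradient estimates.
    This is a hypothetical illustration of the general technique,
    not the paper's IVW-H implementation.
    """
    g0_mean = g0_samples.mean(axis=0)
    g1_mean = g1_samples.mean(axis=0)
    # Empirical variance of each estimator's sample mean (per parameter).
    v0 = g0_samples.var(axis=0, ddof=1) / len(g0_samples) + eps
    v1 = g1_samples.var(axis=0, ddof=1) / len(g1_samples) + eps
    # Weight on the first-order estimate: large when it is the
    # lower-variance one, small when it is noisy (or biased noise shows
    # up as variance across samples).
    w1 = v0 / (v0 + v1)
    return w1 * g1_mean + (1.0 - w1) * g0_mean
```

With a noisy zeroth-order batch and a tight first-order batch, the combined estimate sits close to the first-order mean, which is the stabilizing behavior the abstract describes.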
Primary Area: reinforcement learning
Submission Number: 16460