Keywords: Motion Planning, Visuo-Motor Policy, Reactive Control
Abstract: Generating collision-free motion in dynamic, partially observable environments is a fundamental challenge for robotic manipulators. Classical motion planners can compute globally optimal trajectories but require full environment knowledge and are typically too slow for dynamic scenes. Neural motion policies offer a promising alternative by operating in closed loop directly on raw sensory inputs, but they often struggle to generalize in complex or dynamic settings.
We propose Deep Reactive Policy (DRP), a visuo-motor neural motion policy designed for reactive motion generation in diverse dynamic environments, operating directly on point cloud sensory input. At its core is IMPACT, a transformer-based neural motion policy pretrained on 10 million generated expert trajectories across diverse simulation scenarios. We further improve IMPACT's static obstacle avoidance through iterative student-teacher finetuning. We additionally enhance the policy's dynamic obstacle avoidance at inference time using DCP-RMP, a locally reactive goal-proposal module.
We evaluate DRP on challenging tasks featuring cluttered scenes, dynamic moving obstacles, and goal obstructions. DRP achieves strong generalization, outperforming prior classical and neural methods in success rate across both simulated and real-world settings. We will release the dataset, simulation environments, and trained models upon acceptance. Video results are available at deep-reactive-policy.github.io.
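To make the described composition concrete, below is a minimal, purely illustrative sketch of how the pieces could fit together at inference time: a pretrained visuo-motor policy consumes a point cloud plus a goal, while a locally reactive goal-proposal step adjusts the goal before each policy query. The abstract does not specify an API, so every name here (IMPACTPolicy, dcp_rmp_goal_proposal, drp_control_loop, and their signatures) is a hypothetical placeholder, not the authors' implementation.

```python
# Hypothetical sketch of the DRP closed-loop structure described in the abstract.
# All names and signatures are assumptions for illustration only.
import numpy as np


class IMPACTPolicy:
    """Stand-in for the pretrained transformer visuo-motor policy."""

    def act(self, point_cloud: np.ndarray, goal: np.ndarray) -> np.ndarray:
        # A real policy would encode the point cloud and goal and decode a
        # joint-space action; here we return a zero action as a placeholder.
        return np.zeros(7)


def dcp_rmp_goal_proposal(point_cloud: np.ndarray, goal: np.ndarray) -> np.ndarray:
    """Stand-in for the locally reactive goal-proposal module (DCP-RMP)."""
    # A real module would adjust the commanded goal in response to nearby
    # dynamic obstacles observed in the point cloud; here it passes the goal through.
    return goal


def drp_control_loop(policy, point_cloud_stream, goal, send_action, steps=100):
    """Closed-loop execution: sense, propose a local goal, act, repeat."""
    for _ in range(steps):
        pc = next(point_cloud_stream)                 # raw point cloud observation
        local_goal = dcp_rmp_goal_proposal(pc, goal)  # inference-time reactivity
        action = policy.act(pc, local_goal)           # visuo-motor policy step
        send_action(action)                           # command the manipulator
```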
Submission Number: 5