Abstract: Active sensing and planning in unknown, cluttered
environments is an open challenge for robots intending to provide
home service, search and rescue, narrow-passage inspection, and
medical assistance. Although many active sensing methods exist,
they often consider open spaces, assume known settings, or do not
generalize well to real-world scenarios. We present an active
neural sensing approach that generates kinematically feasible
viewpoint sequences for a robot manipulator with an in-hand
camera, gathering the minimum number of observations needed to
reconstruct the underlying environment. Our framework actively
collects visual RGB-D observations, aggregates them into a scene
representation, and performs object shape inference to avoid
unnecessary robot interactions with the environment. We train
our approach on synthetic data with domain randomization and
demonstrate its successful execution via sim-to-real transfer in
reconstructing narrow, covered, real-world cabinet environments
cluttered with unknown objects. These natural cabinet scenarios
impose significant challenges for robot motion and scene reconstruction due to surrounding obstacles and low ambient lighting
conditions. Despite these unfavorable settings, our method
outperforms its baselines on various environment reconstruction metrics, including planning
speed, the number of viewpoints, and overall scene coverage.
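
For readers who prefer to see the control flow, the sketch below illustrates one plausible shape of the active sensing loop the abstract describes: propose a next viewpoint, check kinematic feasibility, integrate the observation into the scene representation, and stop once coverage is sufficient or a viewpoint budget is exhausted. This is a toy illustration under our own assumptions; all function names and the 1-D coverage grid are placeholders and do not reflect the paper's actual implementation.

```python
# Hypothetical sketch of an active sensing loop (not the authors' API).
# propose_viewpoint, is_kinematically_feasible, and integrate_observation
# are illustrative stand-ins for next-best-view selection, IK/collision
# checking, and RGB-D aggregation into a scene representation.
import numpy as np

rng = np.random.default_rng(0)

def propose_viewpoint(coverage):
    """Placeholder proposal: pick an index biased toward unobserved cells."""
    unobserved = np.flatnonzero(~coverage)
    return rng.choice(unobserved) if unobserved.size else None

def is_kinematically_feasible(viewpoint):
    """Placeholder check; a real system would query IK and a collision checker."""
    return True

def integrate_observation(coverage, viewpoint):
    """Placeholder aggregation: mark a small neighborhood of cells as observed."""
    lo, hi = max(0, viewpoint - 5), min(coverage.size, viewpoint + 5)
    coverage[lo:hi] = True
    return coverage

coverage = np.zeros(200, dtype=bool)   # toy stand-in for a scene representation
for step in range(50):                 # budget on the number of viewpoints
    if coverage.mean() >= 0.95:        # stop once the scene is mostly covered
        break
    vp = propose_viewpoint(coverage)
    if vp is None or not is_kinematically_feasible(vp):
        continue
    coverage = integrate_observation(coverage, vp)

print(f"views used: {step}, coverage: {coverage.mean():.2%}")
```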