A Simple Approach for Visual Room Rearrangement: 3D Mapping and Semantic Search

Published: 01 Feb 2023, 19:23 | Last Modified: 01 Feb 2023, 19:23 | ICLR 2023 poster | Readers: Everyone
Keywords: Embodied AI, Deep Learning, Object Rearrangement
TL;DR: A System For Exploring A Scene, Mapping Objects, and Rearranging Objects To A Visual Goal
Abstract: Physically rearranging objects is an important capability for embodied agents. Visual room rearrangement evaluates an agent's ability to rearrange objects in a room to a desired goal based solely on visual input. We propose a simple yet effective method for this problem: (1) search for and map which objects need to be rearranged, and (2) rearrange each object until the task is complete. Our approach consists of an off-the-shelf semantic segmentation model, a voxel-based semantic map, and a semantic search policy that efficiently finds objects needing rearrangement. On the AI2-THOR Rearrangement Challenge, our method improves on current state-of-the-art end-to-end reinforcement learning methods that learn visual rearrangement policies, raising correct rearrangement from 0.53% to 16.56% while using only 2.7% as many samples from the environment.
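To make the two-phase idea in the abstract concrete, below is a minimal sketch of how a voxel-based semantic map and a map-difference step for finding objects to rearrange might look. This is not the authors' implementation: the class and function names (`VoxelSemanticMap`, `add_observation`, `objects_to_move`), the 0.25 m voxel size, and the IoU threshold are all hypothetical choices for illustration.

```python
# Hypothetical sketch: build one semantic voxel map from the walkthrough (goal)
# phase and one from the unshuffle (current) phase, then diff them per class to
# decide which object categories appear to have moved.
import numpy as np
from collections import defaultdict


class VoxelSemanticMap:
    """Accumulates per-class occupancy over a uniform 3D voxel grid."""

    def __init__(self, voxel_size=0.25):
        self.voxel_size = voxel_size
        # Maps semantic class id -> set of occupied voxel indices (ix, iy, iz).
        self.voxels = defaultdict(set)

    def add_observation(self, points_xyz, labels):
        """points_xyz: (N, 3) 3D points back-projected from a depth frame;
        labels: (N,) semantic class ids from a segmentation model."""
        idx = np.floor(points_xyz / self.voxel_size).astype(int)
        for (ix, iy, iz), label in zip(idx, labels):
            self.voxels[int(label)].add((int(ix), int(iy), int(iz)))


def objects_to_move(goal_map, current_map, min_overlap=0.5):
    """Return class ids whose voxel footprints differ between the goal map
    (walkthrough phase) and the current map (unshuffle phase)."""
    moved = []
    for label, goal_voxels in goal_map.voxels.items():
        cur_voxels = current_map.voxels.get(label, set())
        union = goal_voxels | cur_voxels
        iou = len(goal_voxels & cur_voxels) / max(len(union), 1)
        if iou < min_overlap:
            moved.append(label)
    return moved


# Toy usage: class 7 occupies different voxels in the two phases, so it is flagged.
goal_map, cur_map = VoxelSemanticMap(), VoxelSemanticMap()
goal_map.add_observation(np.array([[1.0, 0.0, 2.0]]), np.array([7]))
cur_map.add_observation(np.array([[3.0, 0.0, 2.0]]), np.array([7]))
print(objects_to_move(goal_map, cur_map))  # -> [7]
```

In the actual method, the flagged objects would then be visited and rearranged one by one until their current and goal poses agree; the sketch only covers the mapping and comparison step.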
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (e.g., decision and control, planning, hierarchical RL, robotics)