Keywords: domain adaptation, indoor 3D object detection
Abstract: As a fundamental task for indoor scene understanding, 3D object detection has been extensively studied, and accuracy on indoor point cloud data has improved substantially. However, existing research has been conducted on limited datasets, where the training and testing sets share the same distribution. In this paper, we consider the task of adapting indoor 3D object detectors from one dataset to another, presenting the first benchmark built on the commonly used ScanNet and SUN RGB-D datasets, as well as our newly proposed large-scale SimRoom and SimHouse datasets generated with a 3D simulator, which contain far more objects and more precise annotations. Since indoor point cloud datasets are collected and constructed in different ways, object detectors are likely to overfit to dataset-specific factors such as point cloud quality, room layout configuration, style, and object size. We conduct cross-dataset experiments on different adaptation scenarios, analyzing the impact of different domain gaps on 3D object detectors. Among the domain gap factors we evaluate, synthetic-to-real adaptation proves to be the most difficult hurdle to overcome. We also introduce several domain adaptation approaches to improve adaptation performance, providing a first baseline for domain adaptive indoor 3D object detection, in the hope that future work will propose detectors with stronger generalization ability across domains.
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9308