Learning from synthetic data generated with GRADE

Published: 07 May 2023, Last Modified: 20 Oct 2024. ICRA-23 Workshop on Pretraining4Robotics (Lightning).
Keywords: synthetic data, training, data generation, human detection, dynamic environments
TL;DR: We use synthetic data of indoor dynamic environments, generated with our GRADE framework, to train YOLO and Mask R-CNN to detect and segment humans. We show that we can achieve compelling results even when using only synthetic data.
Abstract: Recently, synthetic data generation and realistic rendering have advanced tasks like target tracking and human pose estimation. Simulations for most robotics applications are obtained in (semi)static environments, with specific sensors and low visual fidelity. To address this, we present a fully customizable framework for generating realistic animated dynamic environments (GRADE) for robotics research, first introduced in~\cite{GRADE}. GRADE supports full simulation control, ROS integration, and realistic physics, while running in an engine that produces high-visual-fidelity images and ground truth data. We use GRADE to generate a dataset focused on indoor dynamic scenes with people and flying objects. Using this dataset, we evaluate the performance of YOLO and Mask R-CNN on the tasks of detecting and segmenting people. Our results provide evidence that data generated with GRADE can improve model performance when used as a pre-training step. We also show that models trained using only synthetic data can generalize well to real-world images in the same application domain, such as those from the TUM RGB-D dataset. The code, results, trained models, and the generated data are provided as open-source at~\url{https://eliabntt.github.io/grade-rr}.
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/learning-from-synthetic-data-generated-with/code)