EDEN: Multimodal Synthetic Dataset of Enclosed GarDEN Scenes
Abstract: Multimodal large-scale datasets for outdoor scenes are mostly designed for urban driving problems. Such scenes are highly structured and semantically different from nature-centered scenes such as gardens or parks. To promote machine learning methods for nature-oriented applications, such as agriculture and gardening, we propose the multimodal synthetic dataset for Enclosed garDEN scenes (EDEN). The dataset features more than 300K images captured from more than 100 garden models. Each image is annotated with various low/high-level vision modalities, including semantic segmentation, depth, surface normals, intrinsic colors, and optical flow. Experimental results with state-of-the-art methods for semantic segmentation and monocular depth prediction, two important tasks in computer vision, show the positive impact of pre-training deep networks on our dataset for unstructured natural scenes. The dataset and related materials will be available at https://lhoangan.github.io/eden