Toward Semantic Scene Understanding for Fine-Grained 3D Modeling of Plants

Published: 23 May 2023, Last Modified: 23 May 2023 (AIAFS 2022)
Keywords: Agricultural Robotics, Semantic Mapping, 3D Reconstruction, Simultaneous Localization and Mapping
TL;DR: Leveraging semantics to improve high resolution 3D modeling
Abstract: Agricultural robotics is an active research area due to global population growth and anticipated food and labor shortages. Robots can potentially help with tasks such as pruning, harvesting, phenotyping, and plant modeling. However, agricultural automation is hampered by the difficulty of creating high-resolution 3D semantic maps in the field that would allow for safe manipulation and navigation. In this paper, we take steps toward addressing this issue and show how semantics and environmental priors can help in constructing accurate 3D maps for the target application of sorghum. Specifically, we 1) use sorghum seeds as semantic landmarks to build a visual Simultaneous Localization and Mapping (SLAM) system that enables us to map 78% of a sorghum range on average, compared to 38% with ORB-SLAM2; and 2) use seeds as semantic features to improve 3D reconstruction of a full sorghum panicle from images taken by a robotic in-hand camera.
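The first contribution above treats detected seeds as semantic landmarks for SLAM. A core subproblem in any landmark-based SLAM front end is data association: deciding which detections in the current frame correspond to landmarks seen previously. The sketch below is purely illustrative of that idea, not the paper's actual pipeline; the function name, the greedy nearest-neighbour strategy, and the pixel gating threshold are all assumptions for the example.

```python
import numpy as np

def associate_landmarks(prev, curr, gate=10.0):
    """Greedily match 2D seed-centroid detections between two frames.

    prev, curr: (N, 2) and (M, 2) arrays of pixel coordinates.
    Returns a list of (prev_index, curr_index) pairs; detections
    farther apart than `gate` pixels are left unmatched (they would
    be treated as new landmarks in a full SLAM system).
    """
    matches = []
    used = set()
    for i, p in enumerate(prev):
        d = np.linalg.norm(curr - p, axis=1)  # distance to every current detection
        j = int(np.argmin(d))
        if d[j] <= gate and j not in used:
            matches.append((i, j))
            used.add(j)
    return matches

# Toy example: two seeds tracked into a frame with one new detection.
prev = np.array([[10.0, 10.0], [50.0, 40.0]])
curr = np.array([[12.0, 11.0], [90.0, 90.0], [49.0, 42.0]])
print(associate_landmarks(prev, curr))  # → [(0, 0), (1, 2)]
```

In practice a system like the one described would also use appearance descriptors and motion priors rather than pixel distance alone, but the gating step is the same in spirit.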