Keywords: Deep Learning, Point Cloud Reconstruction, Implicit Representations, Plant Geometry
TL;DR: Comparing and identifying the best deep learning framework for reconstructing the 3D geometry of single plants from point cloud data
Abstract: Reconstructing the geometry of crops from 3D point cloud data is useful for a variety of plant phenotyping applications. Due to their very thin and slender segments, obtaining accurate surface representations of plants from 3D point cloud data is challenging. Further, defects (noise) and holes (sparsity or occlusion) in the point cloud data can lead to errors in the reconstructed plant structures. While surface reconstruction from an input point cloud has been studied for decades, recent deep learning frameworks that learn neural implicit representations have shown significant promise in accurately reconstructing 3D data, especially under noisy and sparse sampling conditions. However, these approaches have not yet been deployed on geometries with slender members. In this work, we explore neural implicit representations to reconstruct the surfaces of fully developed maize plants using data acquired from Terrestrial Laser Scanners (TLS). We compare several neural implicit approaches with more traditional surface reconstruction methods. We also analyze the robustness of these neural implicit methods for 3D plant data reconstruction. Finally, we utilize the predicted surfaces to infer structural features from the data. This approach paves the way for detailed flow/transport simulations of agricultural domains from 3D point cloud data.