Point2FFD: Learning Shape Representations of Simulation-Ready 3D Models for Engineering Design Optimization
Abstract: Methods for learning on 3D point clouds have become ubiquitous due to the popularization of 3D scanning technology and advances in machine learning techniques. Among these methods, point-based deep neural networks have been utilized to explore 3D designs in optimization tasks. However, engineering computer simulations require high-quality meshed models, which are challenging to automatically generate from unordered point clouds. In this work, we propose Point2FFD: a novel deep neural network for learning compact geometric representations and generating simulation-ready meshed models. Built upon an autoencoder architecture, Point2FFD learns to compress 3D point clouds into a latent design space, from which the network generates 3D polygonal meshes by selecting and deforming simulation-ready mesh templates. Through benchmark experiments, we show that our proposed network achieves shape-generative performance comparable to existing state-of-the-art point-based generative models. In real-world-inspired vehicle aerodynamic optimizations, we demonstrate that Point2FFD generates simulation-ready meshes of realistic car shapes and leads to better optimized designs than the benchmarked networks.
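To make the described pipeline concrete, the following is a minimal PyTorch-style sketch of the idea outlined in the abstract: a point-cloud encoder produces a latent code, a selector picks a simulation-ready template mesh, and a head predicts free-form deformation (FFD) control-point offsets that deform the template's vertices. All module names, layer sizes, and the FFD-weight convention here are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """Shared per-point MLP + max pooling -> global latent code (PointNet-style)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )

    def forward(self, pts):                    # pts: (B, N, 3)
        feat = self.mlp(pts.transpose(1, 2))   # (B, latent_dim, N)
        return feat.max(dim=2).values          # (B, latent_dim)


class Point2FFDSketch(nn.Module):
    """Hypothetical sketch: latent code -> template selection + FFD offsets."""
    def __init__(self, latent_dim=128, n_templates=4, ffd_grid=(4, 4, 4)):
        super().__init__()
        self.encoder = PointEncoder(latent_dim)
        self.selector = nn.Linear(latent_dim, n_templates)    # template logits
        self.n_ctrl = ffd_grid[0] * ffd_grid[1] * ffd_grid[2]
        self.ffd_head = nn.Sequential(                         # control-point offsets
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, self.n_ctrl * 3),
        )

    def forward(self, pts, template_vertices, template_ffd_weights):
        """
        template_vertices:    (B, V, 3) vertices of the selected template mesh
        template_ffd_weights: (B, V, n_ctrl) precomputed Bernstein (FFD) weights
        """
        z = self.encoder(pts)
        logits = self.selector(z)                              # which template to use
        offsets = self.ffd_head(z).view(-1, self.n_ctrl, 3)    # control-point deltas
        # Deformed vertices = original + FFD-weighted control-point offsets.
        # The template's connectivity is untouched, so the output remains a
        # valid polygonal mesh suitable for simulation.
        deformed = template_vertices + template_ffd_weights @ offsets
        return logits, deformed
```

Because only vertex positions are displaced while the template's face connectivity is preserved, a decoder of this form can output meshes that stay usable by downstream simulation tools, which is the property the abstract emphasizes.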