A Deep Learning Pipeline for Optimizing Large-scale Phase Field Simulations

Published: 2023, Last Modified: 04 Nov 2025, IEEE Big Data 2023, License: CC BY-SA 4.0
Abstract: Phase field (PF) simulations are computationally expensive but remain a key analysis tool for understanding the complex mechanisms of additive manufacturing (AM) processes. Each PF simulation-aided analysis requires thousands of node-hours on leadership-class supercomputers. One of the main goals of these analyses is the study of microstructure evolution during the build process, which begins with the onset of nucleation. Nucleation occurs under certain thermo-mechanical conditions that are not known a priori, so many PF simulations are required to identify the ranges of input thermo-mechanical parameters that can lead to the onset of nucleation. Because many of these simulations never reach nucleation, an analysis campaign often wastes tremendous amounts of precious computing resources executing nucleation-absent simulations. The goal of this work is to design and train deep learning models that inform a running PF simulation about the likelihood of nucleation occurring in a future simulation time-step, based on the state summary over a finite number of past time-steps. If the prediction determines that the running simulation is unlikely to reach nucleation in the allotted time, its execution is stopped immediately, yielding a vast reduction in wasted computation when accrued over all the PF simulations typically performed across one or more analysis campaigns. The paper presents the performance of a machine learning pipeline that uses a convolutional neural network (CNN) to learn an embedding, which is then fed to a self-attention network to build a multi-task deep learning model that predicts the likelihood of nucleation. The model also predicts the input parameters used in a simulation. Performance is compared with a baseline pipeline that uses an off-the-shelf LeNet-5 model to learn the initial embedding. Despite their smaller size, the proposed models show a significant improvement in accuracy over the larger baseline models.
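
The abstract describes the pipeline only at a high level, so the following is a minimal illustrative sketch, not the authors' implementation: a small per-time-step CNN produces embeddings of the state summaries, a self-attention layer mixes the embeddings across the window of past steps, and two task heads output the nucleation likelihood and estimates of the input thermo-mechanical parameters. All layer sizes, the number of parameters to regress (n_params), the class name, and the choice of PyTorch are assumptions made for illustration.

# Hypothetical sketch (assumed architecture and sizes), written in PyTorch.
import torch
import torch.nn as nn

class NucleationPredictor(nn.Module):
    def __init__(self, in_channels=1, embed_dim=64, n_params=4, n_heads=4):
        super().__init__()
        # Per-time-step CNN encoder: stands in for the learned embedding network.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Self-attention over the sequence of per-step embeddings.
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        # Multi-task heads: nucleation likelihood and input-parameter regression.
        self.nucleation_head = nn.Linear(embed_dim, 1)
        self.param_head = nn.Linear(embed_dim, n_params)

    def forward(self, x):
        # x: (batch, time, channels, height, width) window of past state summaries.
        b, t = x.shape[:2]
        emb = self.encoder(x.flatten(0, 1)).view(b, t, -1)   # (b, t, embed_dim)
        ctx, _ = self.attn(emb, emb, emb)                     # attend across the window
        pooled = ctx.mean(dim=1)                              # summary of the window
        return torch.sigmoid(self.nucleation_head(pooled)), self.param_head(pooled)

# Example: a batch of 8 windows, each with 10 past steps of 64x64 state summaries.
if __name__ == "__main__":
    model = NucleationPredictor()
    p_nucleation, params = model(torch.randn(8, 10, 1, 64, 64))
    print(p_nucleation.shape, params.shape)  # torch.Size([8, 1]) torch.Size([8, 4])

In such a setup, the nucleation head would be trained with a binary cross-entropy loss and the parameter head with a regression loss, with the two losses combined for multi-task training; the paper's actual loss weighting and layer configuration are not given in the abstract.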