Abstract: Computational fluid dynamics (CFD) is widely used in research and industry. Mesh quality, particularly in the boundary layer, strongly influences CFD accuracy. Despite its importance, mesh generation remains a manual, time-intensive process that can introduce errors and inconsistencies. These limitations of traditional methods have prompted recent exploration of deep reinforcement learning (DRL) for mesh generation. Although several studies have demonstrated the applicability of DRL to mesh generation, they are limited in their use of existing tools and thereby fall short of fully leveraging the potential of DRL. This study proposes a new boundary mesh generation method based on DRL, namely an agent-based mesh generator. Nodes on the surface act as agents that optimize their paths into the domain to create high-quality meshes. Mesh generation is naturally suited to DRL owing to its computational nature and deterministic execution. However, challenges also arise, including training numerous agents simultaneously and managing their interdependencies in a vast state space. This study addresses these challenges and investigates the optimal learning conditions after formulating grid generation as a DRL task: defining the states, agents, actions, and rewards. The derived optimal conditions are applied to generate two-dimensional airfoil grids, validating the feasibility of the proposed approach.
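To make the formulation concrete, the sketch below illustrates one way the abstract's setup could be cast as a multi-agent environment: surface nodes are agents, the state is the current marching front, each action is a direction into the domain, and the reward scores the quality of the resulting layer. All class and variable names, and the smoothness-based reward, are illustrative assumptions rather than the paper's actual definitions.

```python
import numpy as np

# Hypothetical sketch of the DRL formulation described in the abstract.
# Each boundary node is an agent marching outward layer by layer; the
# reward term is an assumed proxy for mesh quality, not the paper's own.
class BoundaryMeshEnv:
    def __init__(self, surface_nodes, n_layers=5, step_size=0.1):
        self.surface = np.asarray(surface_nodes, dtype=float)  # (N, 2) boundary nodes
        self.n_layers = n_layers
        self.step_size = step_size
        self.reset()

    def reset(self):
        # State: the current front of agent positions, starting on the surface.
        self.front = self.surface.copy()
        self.layer = 0
        return self.front

    def step(self, actions):
        # Action per agent: a marching direction (normalized) into the domain.
        directions = np.asarray(actions, dtype=float)
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        new_front = self.front + self.step_size * directions

        # Reward (illustrative): penalize uneven spacing between neighboring
        # agents in the new layer, a simple proxy for mesh smoothness.
        spacing = np.linalg.norm(np.diff(new_front, axis=0), axis=1)
        reward = -float(np.std(spacing))

        self.front = new_front
        self.layer += 1
        done = self.layer >= self.n_layers
        return self.front, reward, done


# Usage: three surface nodes marching in the +y (wall-normal) direction.
env = BoundaryMeshEnv([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]], n_layers=2)
state = env.reset()
state, reward, done = env.step([[0.0, 1.0]] * 3)
```

In this toy setting the agents keep their spacing uniform, so the smoothness penalty is zero; a trained policy would instead choose per-agent directions that trade off orthogonality, spacing, and neighbor interactions.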