Keywords: Physics-Informed Neural Networks; Multi-Task Learning; Dynamic Weight Allocation; Cross-Task Attention
Abstract: In recent years, Physics-Informed Neural Networks (PINNs) have been applied to simulating a variety of incompressible Navier–Stokes flows, but single-task PINNs have inherent limitations: low data efficiency under sparse supervision, weak generalization across flow patterns, and high computational cost from repeated training. Moreover, negative transfer between tasks constrains their performance in complex flow simulations. To address these challenges, this study proposes a multi-task PINN framework that combines cross-task attention with a dynamic weight allocation (DWA) strategy. The aim is to verify the framework's efficiency on several canonical flows, focusing on alleviating negative transfer, enhancing training stability across viscosities, and delineating the regimes in which the framework is advantageous. Extensive experiments demonstrate the effectiveness of our approach. In particular, the cross-task attention module significantly mitigates inter-task negative transfer, allowing the tasks that benefit from shared representations to maintain stable training curves; the dynamic weight allocation further reduces loss oscillations during training and notably accelerates convergence on certain flow tasks.
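The abstract does not specify the exact DWA formulation; a common choice in multi-task learning is Dynamic Weight Average (Liu et al., 2019), which upweights tasks whose losses are decreasing more slowly. The sketch below illustrates that scheme; the function name, the two-epoch history layout, and the default temperature are illustrative assumptions, not the paper's implementation.

```python
import math

def dwa_weights(loss_history, temperature=2.0):
    """Sketch of Dynamic Weight Average-style task weighting (assumption:
    the paper's DWA may differ in detail).

    loss_history: list of per-epoch lists of task losses, oldest first,
    with at least two epochs recorded.
    Returns K weights summing to K; tasks whose loss decays more slowly
    receive larger weights for the next epoch.
    """
    prev, prev2 = loss_history[-1], loss_history[-2]
    K = len(prev)
    # Relative descent rate of each task's loss over the last two epochs.
    ratios = [p / q for p, q in zip(prev, prev2)]
    # Softmax over descent rates, scaled by a temperature, then
    # renormalized so the weights sum to the number of tasks K.
    exps = [math.exp(r / temperature) for r in ratios]
    total = sum(exps)
    return [K * e / total for e in exps]
```

When both tasks' losses halve at the same rate, the weights stay uniform; a task whose loss stagnates relative to the others would receive a weight above 1, which is the mechanism that damps loss oscillations across tasks.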
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 11707