NITO: Neural Implicit Fields for Resolution-free and Domain-Adaptable Topology Optimization

Published: 13 May 2025. Last Modified: 13 May 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Structural topology optimization plays a crucial role in engineering by determining the optimal material layout within a design space to maximize performance under given constraints. We introduce Neural Implicit Topology Optimization (NITO), a deep learning regression approach that accelerates topology optimization tasks. Compared to state-of-the-art diffusion models, NITO generates structures with less than 15% of their structural sub-optimality, and does so ten times faster. Furthermore, NITO is entirely resolution-free and domain-agnostic, offering a more scalable solution than current fixed-resolution, domain-specific diffusion models. To achieve this state-of-the-art performance, NITO combines three key innovations. First, we introduce the Boundary Point Order-Invariant MLP (BPOM), which represents loads and supports in a sparse and domain-agnostic manner, allowing NITO to train across variable conditioning, domain shapes, and mesh resolutions. Second, we adopt a neural implicit field representation, which allows NITO to synthesize topologies of any shape or resolution. Finally, we propose an inference-time refinement step using a few iterations of gradient-based optimization, enabling NITO to achieve results comparable to direct optimization methods. Together, these three innovations give NITO a precision and versatility that is currently unparalleled among competing deep learning approaches for topology optimization. Code & Data: https://github.com/ahnobari/NITO_Public
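The abstract's first innovation, BPOM, encodes a variable-size set of boundary points (loads and supports) into a fixed-length, order-invariant embedding. The paper does not give the architecture here, but the key property — invariance to point ordering — is typically obtained with a shared per-point MLP followed by symmetric pooling. The sketch below is a minimal illustration of that property, not the authors' implementation; all weights, dimensions, and the `encode_boundary_points` name are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical BPOM-style encoder: a shared MLP applied to each point,
# followed by mean pooling. Pooling with a symmetric function makes the
# embedding invariant to the ordering of the input points.
W1 = rng.standard_normal((3, 16))   # per-point features, e.g. (x, y, load magnitude)
W2 = rng.standard_normal((16, 8))   # output embedding dimension (illustrative)

def encode_boundary_points(points):
    """Embed a variable-size set of boundary points into a fixed vector."""
    h = np.tanh(points @ W1)        # shared MLP, applied point-wise
    h = np.tanh(h @ W2)
    return h.mean(axis=0)           # symmetric (order-invariant) pooling

points = rng.standard_normal((5, 3))          # 5 boundary points
shuffled = points[rng.permutation(5)]         # same set, different order
emb_a = encode_boundary_points(points)
emb_b = encode_boundary_points(shuffled)
assert np.allclose(emb_a, emb_b)              # embedding is order-invariant
```

Because the pooled embedding has a fixed size regardless of how many points are supplied, a downstream implicit field network can be conditioned on it for any domain shape or mesh resolution.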
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Author list and public code added to the paper.
Code: https://github.com/ahnobari/NITO_Public
Assigned Action Editor: ~Mingsheng_Long2
Submission Number: 3899