Abstract: Structural topology optimization plays a crucial role in engineering by determining the optimal material layout within a design space to maximize performance under given constraints. We introduce Neural Implicit Topology Optimization (NITO), a deep learning regression approach to accelerate topology optimization tasks.
We demonstrate that, compared to state-of-the-art diffusion models, NITO generates structures with less than 15% of their structural sub-optimality, and does so ten times faster. Furthermore, we show that NITO is entirely resolution-free and domain-agnostic, offering a more scalable solution than current fixed-resolution and domain-specific diffusion models.
To achieve this state-of-the-art performance, NITO combines three key innovations. First, we introduce the Boundary Point Order-Invariant MLP (BPOM), which represents boundary conditions in a sparse and domain-agnostic manner, allowing NITO to train on variable conditioning, domain shapes, and mesh resolutions. Second, we adopt a neural implicit field representation, which allows NITO to synthesize topologies of any shape or resolution. Finally, we propose an inference-time refinement step using a few steps of gradient-based optimization, enabling NITO to achieve results comparable to direct optimization methods. These three innovations empower NITO with a precision and versatility that are currently unparalleled among competing deep learning approaches for topology optimization.
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We sincerely appreciate the time and effort that the reviewers have put into evaluating our work. Their constructive feedback has been invaluable in helping us improve the clarity and presentation of our paper. Below, we summarize the key revisions we have made based on the provided suggestions:
1. **Clarification of Figures and Notation:** We have improved figure annotations, added legends where necessary, and clarified mathematical symbols and notation, particularly around Equation 1 and the definition of design variables.
2. **Expanded Mathematical Exposition:** We have revised the explanation of our optimization formulation to eliminate potential misunderstandings, ensuring clear definitions of all terms, constraints, and objectives.
3. **Additional Background:** We have extended Section 2.2 to include a broader discussion of existing learning-based TO methods and neural operators, positioning our work more clearly in the broader context. We have also expanded Section 2.1 to include a brief discussion of high-resolution conventional solvers and how NITO is positioned in this regard.
4. **Clarified Conditioning and Constraint Satisfaction:** We have explicitly discussed how our method implicitly satisfies volume fraction constraints and suggested a simple post-processing step (binary search) to ensure strict adherence.
5. **Discussion on Scalability and High-Resolution Performance:** We have addressed concerns regarding problem size by running additional experiments at higher resolutions (e.g., 5000×5000). These results demonstrate NITO's ability to scale effectively, though we acknowledge limitations in capturing fine-grained details beyond its training resolution.
6. **Stronger Positioning Against Classic TO Methods:** We have refined our discussion on how neural implicit representations compare to classic TO solvers, particularly regarding computational complexity and scaling behavior.
7. **Clarifications on BPOM and Network Design Choices:** We have improved our explanation of BPOM’s pooling strategy and its distinction from PointMLP, supported by an updated figure for clarity.
8. **Grammar and Typographical Corrections:** We have fixed minor language issues throughout the manuscript to improve readability.
9. **Diff:** To make it easier to review the changes, we have included a latexdiff output in the supplementary materials for the reviewers. See diff.pdf in the supplementary material.
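For concreteness, the binary-search post-processing mentioned in item 4 can be sketched as follows. This is an illustrative implementation only, not the exact code used in the paper: it assumes the generated topology is a dense density field in [0, 1] and searches for a binarization threshold whose resulting solid fraction matches the target volume fraction.

```python
import numpy as np

def threshold_to_volume_fraction(density, target_vf, tol=1e-4, max_iter=50):
    """Binary-search a threshold t so that the binarized field
    (density > t) has a mean close to the target volume fraction.

    Illustrative sketch; assumes density values lie in [0, 1] and that
    the solid fraction decreases monotonically as t increases.
    """
    lo, hi = 0.0, 1.0
    t = 0.5
    for _ in range(max_iter):
        t = 0.5 * (lo + hi)
        vf = (density > t).mean()
        if abs(vf - target_vf) < tol:
            break
        if vf > target_vf:
            lo = t  # too much material: raise the threshold
        else:
            hi = t  # too little material: lower the threshold
    return (density > t).astype(density.dtype)

# Hypothetical usage on a random density field standing in for NITO's output.
rng = np.random.default_rng(0)
rho = rng.random((64, 64))
solid = threshold_to_volume_fraction(rho, target_vf=0.4)
```

Because the binarized volume fraction is monotone in the threshold, the search converges to within the resolution of one cell, which is why a handful of iterations suffices in practice.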
We apologize for the delay in submitting this response and greatly appreciate the reviewers' thoughtful feedback. We believe these revisions have significantly strengthened the manuscript and look forward to further discussions and the decision of the reviewers.
Thank you all again,
Authors
A further response to reviewer uPWv has also been added. The diff.pdf has not been updated to reflect these changes.
Assigned Action Editor: ~Mingsheng_Long2
Submission Number: 3899