Abstract: We present a generative model that amortises computation of the field and potential around e.g.~gravitational or electromagnetic sources. Exact numerical calculation has either computational complexity $\mathcal{O}(M\times{}N)$ in the number of sources $M$ and evaluation points $N$, or requires a fixed evaluation grid to exploit fast Fourier transforms. Using an architecture in which a hypernetwork produces an implicit representation of the field or potential around a source collection, our model instead scales as $\mathcal{O}(M + N)$, achieves relative errors of $\sim\!4$--$6\%$, and allows evaluation at arbitrary locations for arbitrary numbers of sources, greatly increasing the speed of e.g.~physics simulations. We compare with existing models and develop two-dimensional examples, including cases where sources overlap or have more complex geometries, to demonstrate its application.
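To make the architecture concrete, here is a minimal sketch in PyTorch (class and variable names such as `HyperField` are hypothetical; this is an illustration of the general hypernetwork-plus-implicit-field pattern described above, not the paper's implementation). It assumes 2-D evaluation points and sources described by a position plus a strength:

```python
# Minimal sketch of the hypernetwork + implicit field network idea
# (hypothetical names; assumes 2-D points and sources given as
# position (x, y) plus a strength, i.e. src_dim = 3).
import torch
import torch.nn as nn

class HyperField(nn.Module):
    def __init__(self, src_dim=3, hidden=64, out_dim=2):
        super().__init__()
        # Per-source encoder; its outputs are pooled over sources,
        # so this stage costs O(M).
        self.encoder = nn.Sequential(
            nn.Linear(src_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Hypernetwork heads: map the pooled code to the weights and
        # biases of a tiny two-layer field network.
        self.w1 = nn.Linear(hidden, hidden * 2)        # layer-1 weights (input dim 2)
        self.b1 = nn.Linear(hidden, hidden)
        self.w2 = nn.Linear(hidden, out_dim * hidden)  # layer-2 weights
        self.b2 = nn.Linear(hidden, out_dim)
        self.hidden, self.out_dim = hidden, out_dim

    def forward(self, sources, points):
        # sources: (M, src_dim); points: (N, 2)
        code = self.encoder(sources).sum(dim=0)         # O(M): pool over sources
        W1 = self.w1(code).view(self.hidden, 2)         # generated parameters
        W2 = self.w2(code).view(self.out_dim, self.hidden)
        h = torch.relu(points @ W1.T + self.b1(code))   # O(N): per-point evaluation,
        return h @ W2.T + self.b2(code)                 # cost independent of M

field = HyperField()(torch.randn(5, 3), torch.rand(100, 2))  # -> (100, 2)
```

Because the pooled code is computed once per source collection, evaluating additional points never revisits the sources, which is where the $\mathcal{O}(M+N)$ scaling comes from.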
Submission Type: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=aw7otauUZ3
Changes Since Last Submission: Since the previous TMLR submission, we have substantially revised and expanded the paper based on the NeurIPS reviews and editorial feedback. The new version addresses all major concerns regarding baselines, experimental scope, clarity, and theoretical justification.
Major revisions:
- Expanded baselines: We now include a Fast Multipole Method (FMM) implementation, alongside the previously reported direct computation, providing a more complete comparison against classical fast algorithms.
- Broader and more challenging experiments: We added experiments with heterogeneous and overlapping source configurations, more complex geometries, and varying magnetisation patterns. We also include new error heatmaps, robustness tests, and an extended scaling analysis.
- Improved clarity and notation: We corrected all inconsistencies in the mathematical formulation (including the M/N mismatch), rewrote Section 3 for clarity, and redesigned Figure 1 to make the roles of the hypernetwork and field network explicit.
- Unified evaluation metrics: All results are now reported in terms of relative error, enabling fair comparison across methods, datasets, and numbers of sources.
- Theoretical justification of scalability: We added a new section that formally explains why the proposed architecture reduces inference complexity from O(M×N) to O(M+N), clarifying the role of source-wise parameter generation and decoupled field evaluation; a back-of-the-envelope version of the argument follows this list.
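In our notation here (not necessarily the paper's), with $c_{\mathrm{pair}}$, $c_{\mathrm{enc}}$ and $c_{\mathrm{eval}}$ denoting per-operation constants, the complexity argument amounts to

$$T_{\mathrm{direct}} = M N \, c_{\mathrm{pair}} = \mathcal{O}(M \times N), \qquad T_{\mathrm{hyper}} = M \, c_{\mathrm{enc}} + N \, c_{\mathrm{eval}} = \mathcal{O}(M + N),$$

since the hypernetwork visits each of the $M$ sources once to generate the field-network parameters, after which the field network evaluates each of the $N$ points at a cost independent of $M$.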
These additions represent a significant extension over the earlier TMLR submission: the paper now contains new baselines, broader experiments, clearer exposition, and a more rigorous justification of the method. We believe this revised version fully addresses the issues that led to the initial desk rejection and the NeurIPS borderline decision.
Assigned Action Editor: ~Markus_Lange-Hegermann1
Submission Number: 6426