SH-SAS: An Implicit Neural Representation of Complex Spherical-Harmonic Scattering Fields for 3D Synthetic Aperture Sonar
Keywords: 3D Reconstruction, Implicit Neural Representation, Computer Vision, Computer Graphics, Synthetic Aperture Sonar, Acoustic Scattering
TL;DR: SH-SAS: an INR using spherical harmonics for view-dependent SAS scattering. Trains from raw 1-D ToF (no beamforming) and beats backprojection and Reed et al.’s isotropic INR on synthetic and real AirSAS/SVSS.
Abstract: Synthetic aperture sonar (SAS) reconstruction requires recovering both the spatial distribution of acoustic scatterers and their direction-dependent response. Time-domain backprojection is the most common 3D SAS reconstruction algorithm, but it does not model directionality and can suffer from sampling limitations, aliasing, and occlusion. Prior neural volumetric methods applied to synthetic aperture sonar, e.g., Reed et al., treat each voxel as an isotropic scattering density and therefore cannot model anisotropic returns. We introduce SH-SAS, an implicit neural representation that expresses the complex acoustic scattering field as a set of spherical harmonic (SH) coefficients. A multi-resolution hash encoder feeds a lightweight MLP that outputs complex SH coefficients up to a specified degree L. The zeroth-order coefficient acts as an isotropic scattering field, which also serves as the density term, while higher orders compactly capture directional scattering with minimal parameter overhead. Because the model predicts the complex amplitude for any transmit–receive baseline, training is performed directly from 1-D time-of-flight (ToF) signals, with no need to beamform intermediate images for supervision. Across synthetic and real SAS benchmarks (both in-air and underwater), SH-SAS outperforms prior methods, including time-domain backprojection and the isotropic INR of Reed et al., in 3D reconstruction quality and geometric metrics.
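To make the representation concrete, the following is a minimal sketch (not the authors' implementation; all names, the toy stand-in for the hash-encoded MLP, and the choice of degree L=1 are illustrative assumptions) of how a per-point set of complex SH coefficients can be evaluated along a query direction to produce a direction-dependent complex scattering amplitude, with the degree-0 coefficient supplying the isotropic/density term:

```python
import numpy as np

# Illustrative sketch only: a toy "network" maps a 3D point to complex SH
# coefficients up to degree L=1 (4 coefficients). The degree-0 coefficient
# plays the role of the isotropic scattering density; degree-1 terms add
# direction dependence. A real model would use a hash encoder + MLP.

rng = np.random.default_rng(0)
L = 1
n_coeffs = (L + 1) ** 2  # 4 coefficients for L = 1

# Fixed random linear map as a stand-in for the trained MLP (hypothetical).
W = rng.standard_normal((n_coeffs, 3)) + 1j * rng.standard_normal((n_coeffs, 3))

def toy_mlp(x):
    """Stand-in for the hash-encoded MLP: 3D point -> complex SH coefficients."""
    return W @ x  # shape (n_coeffs,), complex

def real_sh_basis(d):
    """Real spherical harmonics up to degree 1 for a unit direction d = (x, y, z)."""
    x, y, z = d
    c0 = 0.5 * np.sqrt(1.0 / np.pi)   # Y_0^0 (constant, isotropic term)
    c1 = 0.5 * np.sqrt(3.0 / np.pi)   # degree-1 normalization
    return np.array([c0, c1 * y, c1 * z, c1 * x])

def scattered_amplitude(x, d):
    """Complex scattering amplitude at point x evaluated toward unit direction d."""
    coeffs = toy_mlp(x)
    return coeffs @ real_sh_basis(d)

p = np.array([0.1, -0.2, 0.3])      # query point
d = np.array([0.0, 0.0, 1.0])       # query direction (unit vector)
print(scattered_amplitude(p, d))    # a single complex amplitude
```

Because the basis is evaluated per direction, the same stored coefficients yield a different complex amplitude for each transmit–receive geometry, which is what allows supervision directly against raw 1-D ToF signals rather than beamformed images.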
Supplementary Material: zip
Submission Number: 271