Abstract: In this paper, we propose an effective and efficient pyramid multi-view stereo (MVS) network with self-adaptive view aggregation for accurate and complete dense point cloud reconstruction. Unlike previous deep-learning-based MVS methods, which use the mean square variance of multi-view features to generate the cost volume, our VA-MVSNet incorporates the cost variances of different views with little extra memory consumption by introducing two novel self-adaptive view aggregations: pixel-wise view aggregation and voxel-wise view aggregation.
To further boost the robustness and completeness of 3D point cloud reconstruction, we extend VA-MVSNet to PVA-MVSNet with pyramid multi-scale image inputs, where multi-metric constraints are leveraged to aggregate reliable depth estimates at the coarser scale to fill in mismatched regions at the finer scale.
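One plausible reading of this coarse-to-fine fill-in is sketched below under assumed inputs: upsample the coarse depth and confidence maps, flag fine-scale pixels that are both low-confidence and inconsistent with the coarse prediction, and fall back to the coarse estimate there. The function name and thresholds (`fill_with_coarse`, `rel_thresh`, `conf_thresh`) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the pyramid fill-in step: where the fine-scale depth
# disagrees with the (upsampled) coarse-scale depth and the fine-scale
# confidence is low, fall back to the more reliable coarse estimate.

def fill_with_coarse(depth_fine, conf_fine, depth_coarse, conf_coarse,
                     rel_thresh=0.01, conf_thresh=0.5):
    # depth_fine/conf_fine:     (B, 1, H, W); conf_* in [0, 1]
    # depth_coarse/conf_coarse: (B, 1, H/2, W/2)
    up_depth = F.interpolate(depth_coarse, size=depth_fine.shape[-2:],
                             mode="nearest")
    up_conf = F.interpolate(conf_coarse, size=depth_fine.shape[-2:],
                            mode="nearest")
    # "Mismatched" fine pixels: low confidence and large relative deviation
    # from a confident coarse prediction.
    rel_diff = (depth_fine - up_depth).abs() / up_depth.clamp(min=1e-6)
    mismatched = (conf_fine < conf_thresh) & (rel_diff > rel_thresh) \
                 & (up_conf >= conf_thresh)
    return torch.where(mismatched, up_depth, depth_fine)
```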
Experimental results show that our approach establishes a new state-of-the-art on the DTU dataset, with significant improvements in completeness and overall quality, and generalizes well, achieving performance comparable to state-of-the-art methods on the Tanks and Temples benchmark. Our codebase is available at
https://github.com/yhw-yhw/PVAMVSNet