Keywords: sparse-view novel view synthesis, 3D Gaussian Splatting
TL;DR: We propose TAS-GS, a topology, appearance, and semantic prior guided extension of 3D Gaussian Splatting that enables robust sparse-view reconstruction with sharper structures and more faithful textures.
Abstract: We present TAS-GS, a framework that extends 3D Gaussian Splatting (3DGS) to sparse-view reconstruction by integrating topology, appearance, and semantic priors. TAS-GS addresses key challenges of sparse-view 3DGS, including structural fragility, texture incoherency, and loss of fine details, through three modules: (i) a topology-aware graph regularizer that prunes floaters and bridges structural gaps, (ii) a GNN-based appearance propagation module that refines textures in weakly supervised regions, and (iii) a semantic-rarity and boundary-aware modulator that preserves fine details and underrepresented categories. All modules are applied only during training, and the final representation remains fully compatible with the standard 3DGS rasterizer. Extensive experiments on LLFF and Mip-NeRF 360 show that TAS-GS consistently outperforms state-of-the-art NeRF- and Gaussian-based methods across a wide range of sparsity levels. Ablation studies further confirm the effectiveness of each component in improving both quantitative metrics and perceptual quality. Our code is available at https://anonymous.4open.science/r/56165123.
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 5566