Visibility-Aware Language Aggregation for Open-Vocabulary Segmentation in 3D Gaussian Splatting

Published: 14 Sept 2025, Last Modified: 13 Oct 2025
Venue: ICCV 2025 Wild3D
License: CC BY 4.0
Keywords: 3D Gaussian Splatting, 3D Scene Understanding, Text-Language in 3D
TL;DR: We fuse noisy, view-dependent 2D language features into 3D Gaussians via visibility-aware gating and a streaming, weighted geometric median, yielding sharper boundaries and cross-view-consistent open-vocabulary 3D semantics.
Abstract: Recently, distilling open-vocabulary language features from 2D images into 3D Gaussians has attracted significant attention. Although existing methods achieve impressive language-based interaction with 3D scenes, we observe two fundamental issues: background Gaussians that contribute negligibly to a rendered pixel receive the same feature as the dominant foreground ones, and view-specific noise in the language embeddings leads to multi-view inconsistencies. We introduce Visibility-Aware Language Aggregation (VALA), a lightweight yet effective method that computes marginal contributions for each ray and applies a visibility-aware gate to retain only visible Gaussians. Moreover, we propose a streaming weighted geometric median in cosine space to merge noisy multi-view features. Our method yields robust, view-consistent language feature embeddings in a fast and memory-efficient manner. VALA improves open-vocabulary localization and segmentation across reference datasets, consistently surpassing existing works. Code and models will be shared upon acceptance.
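
The two components described in the abstract (a visibility-aware gate over per-ray marginal contributions and a weighted geometric median of per-view features in cosine space) can be sketched roughly as follows. This is a minimal illustration, assuming front-to-back alpha compositing weights and L2-normalized language features; the function names, the threshold value, and the batch Weiszfeld iteration used here in place of the paper's streaming formulation are assumptions rather than the authors' released code.

import numpy as np

def visibility_gate(alpha, tau=0.05):
    # Marginal contribution of each Gaussian along one ray under front-to-back
    # alpha compositing: w_i = alpha_i * prod_{j<i} (1 - alpha_j).
    # tau is an illustrative threshold; only Gaussians above it are kept.
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - alpha[:-1])))
    w = alpha * transmittance
    return w > tau, w

def weighted_geometric_median_cosine(feats, weights, iters=20, eps=1e-8):
    # Weiszfeld-style weighted geometric median of unit-norm feature vectors,
    # with distances measured as 1 - cosine similarity. A batch stand-in for
    # the streaming update described in the abstract.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + eps)
    m = f[np.argmax(weights)].copy()            # start from the strongest view
    for _ in range(iters):
        d = 1.0 - f @ m                         # cosine distance to current estimate
        inv = weights / (d + eps)               # Weiszfeld reweighting
        m = (inv[:, None] * f).sum(axis=0)
        m /= np.linalg.norm(m) + eps            # project back onto the unit sphere
    return m

# Toy usage: fuse noisy per-view language features for a single Gaussian,
# treating each entry as that Gaussian's contribution weight in one view.
rng = np.random.default_rng(0)
alpha = rng.uniform(0.0, 0.6, size=8)           # opacities of the Gaussian per view
keep, w = visibility_gate(alpha)
feats = rng.normal(size=(8, 512))               # one noisy language feature per view
fused = weighted_geometric_median_cosine(feats[keep], w[keep])

A geometric median (rather than a weighted mean) is the natural choice here because it is robust to outlier views, which matches the abstract's goal of suppressing view-specific noise in the 2D language features.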
Submission Number: 31