Abstract: This review thoroughly examines the role of semantically-aware Neural Radiance Fields (NeRFs)
in visual scene understanding, drawing on an analysis of more than 250 scholarly papers. It explores how
NeRFs adeptly infer 3D representations for both stationary and dynamic objects in a scene. This
capability is pivotal for novel view synthesis (rendering the scene from new viewpoints), completing missing
scene details (inpainting), conducting comprehensive scene segmentation (panoptic segmentation), predicting 3D
bounding boxes, editing 3D scenes, and extracting object-centric 3D models. A central theme of this study
is the treatment of semantics as a view-invariant function that maps 3D spatial coordinates to a distribution
over semantic labels, thereby enabling the recognition of distinct
objects within the scene. Overall, this survey highlights the progression and diverse applications of
semantically-aware neural radiance fields in the context of visual scene interpretation.
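To make the view-invariant semantic field concrete, the sketch below shows a minimal NeRF-style MLP with an added semantic head, assuming a standard PyTorch setup. The class name, layer sizes, and encoding dimensions (SemanticNeRF, pos_dim, num_classes, etc.) are illustrative assumptions rather than the architecture of any specific surveyed method. The key point is that colour depends on both position and viewing direction, whereas the semantic branch reads only the position features, so the predicted label distribution is the same from every viewpoint.

```python
import torch
import torch.nn as nn


class SemanticNeRF(nn.Module):
    """Illustrative sketch of a NeRF MLP with a view-independent semantic head.

    Names and dimensions are hypothetical; inputs are assumed to be
    positionally encoded 3D coordinates and viewing directions.
    """

    def __init__(self, pos_dim=63, dir_dim=27, hidden=256, num_classes=20):
        super().__init__()
        # Trunk: maps encoded 3D position to a shared feature vector.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Volume density depends on position only.
        self.sigma_head = nn.Linear(hidden, 1)
        # Colour is view-dependent: it also consumes the encoded direction.
        self.rgb_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )
        # Semantic logits depend on position only, making the semantic field
        # invariant to the viewing direction.
        self.semantic_head = nn.Linear(hidden, num_classes)

    def forward(self, encoded_pos, encoded_dir):
        h = self.trunk(encoded_pos)
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, encoded_dir], dim=-1))
        semantic_logits = self.semantic_head(h)
        return sigma, rgb, semantic_logits
```

In practice, the per-sample semantic logits would be composited along each ray with the same volume-rendering weights used for colour, yielding a per-pixel label distribution that remains consistent across views.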