Abstract: Novel view synthesis is an essential functionality for enabling immersive experiences in various Augmented- and Virtual-Reality (AR/VR) applications, for which Neural Radiance Fields (NeRFs) have emerged as the state-of-the-art (SOTA) technique. In particular, generalizable NeRFs have gained increasing popularity thanks to their cross-scene generalization capability, which enables NeRFs to be instantly serviceable for new scenes without per-scene training. Despite their promise, generalizable NeRFs aggravate the already prohibitive complexity of NeRFs due to the extra memory accesses they require to acquire scene features, causing NeRFs' ray marching process to become memory-bound. Existing sparsity-exploitation techniques for NeRFs fall short of resolving this dilemma, because they require knowledge of the sparsity distribution of the target 3D scene, which is unknown when generalizing NeRFs to a new scene. To this end, we propose Gen-NeRF, an algorithm-hardware co-design framework dedicated to generalizable NeRF acceleration, which aims to achieve both rendering efficiency and generalization capability in NeRFs. To the best of our knowledge, Gen-NeRF is the first to enable real-time generalizable NeRFs, demonstrating a promising NeRF solution for next-generation AR/VR devices. On the algorithm side, Gen-NeRF integrates a coarse-then-focus sampling strategy, which leverages the fact that different regions of a 3D scene contribute differently to the rendered pixels, depending on where the objects are located in the scene, to enable sparse yet effective sampling. In addition, Gen-NeRF replaces the ray transformer, which is commonly included in SOTA generalizable NeRFs to enhance density estimation, with a novel Ray-Mixer module to reduce workload heterogeneity.
On the hardware side, Gen-NeRF features an accelerator micro-architecture dedicated to accelerating the model workloads produced by our Gen-NeRF algorithm, which maximizes data reuse opportunities among different rays by exploiting their epipolar geometric relationships. Furthermore, the Gen-NeRF accelerator adopts a customized dataflow to enhance data locality during point-to-hardware mapping and an optimized scene-feature storage strategy to minimize memory bank conflicts across camera rays. Extensive experiments validate the effectiveness of our proposed Gen-NeRF framework in enabling real-time and generalizable novel view synthesis.