Keywords: quantum machine learning, novel-view synthesis
Abstract: Recently, Quantum Visual Fields (QVFs) have shown promising improvements in model compactness and convergence speed for learning 2D images. Meanwhile, novel-view synthesis has seen major advances with Neural Radiance Fields (NeRFs), where models learn a continuous scene representation from 2D images to render novel 3D views, albeit at the cost of large models and intensive training.
In this work, we extend the approach of QVFs by introducing QNeRF, the first hybrid quantum-classical model designed for novel-view synthesis from 2D images. QNeRF leverages parameterized quantum circuits to encode spatial and view-dependent information via quantum superposition and entanglement, resulting in more compact models.
We present two architectural variants. Full QNeRF maximally exploits all quantum amplitudes to enhance representational capabilities. In contrast, Dual-Branch QNeRF introduces a task-informed inductive bias by branching spatial and view-dependent quantum state preparations, drastically reducing the complexity of this operation and ensuring scalability and potential hardware compatibility.
Our experiments demonstrate that---when trained on images of reduced resolution---QNeRF matches or outperforms classical NeRF baselines while using less than half the number of parameters. These results suggest that Quantum Machine Learning can serve as a competitive alternative for continuous signal representation in high-level Computer Vision tasks such as 3D representation learning.
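To make the core idea concrete, the following is a minimal classically simulated sketch of the *kind* of parameterized quantum circuit the abstract describes: input coordinates are encoded as rotation angles, qubits are entangled, trainable rotations are applied, and a scalar is read out as an expectation value. This is an illustrative two-qubit toy in NumPy, not the paper's actual QNeRF architecture; the gate layout and parameter count are assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit):
    """Apply a 1-qubit gate to one qubit of a 2-qubit state vector
    (qubit 0 is the leftmost tensor factor)."""
    ops = [np.eye(2), np.eye(2)]
    ops[qubit] = gate
    return np.kron(ops[0], ops[1]) @ state

# CNOT with control qubit 0, target qubit 1 (basis order |00>,|01>,|10>,|11>)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit(x, y, params):
    """Encode a 2D coordinate (x, y) as RY angles, entangle the qubits,
    apply trainable RY rotations, and return <Z> on qubit 0."""
    state = np.zeros(4); state[0] = 1.0        # |00>
    state = apply_single(state, ry(x), 0)      # data-encoding layer
    state = apply_single(state, ry(y), 1)
    state = CNOT @ state                       # entangling layer
    state = apply_single(state, ry(params[0]), 0)  # trainable layer
    state = apply_single(state, ry(params[1]), 1)
    probs = np.abs(state) ** 2
    # <Z x I> = P(qubit 0 = 0) - P(qubit 0 = 1)
    return probs[0] + probs[1] - probs[2] - probs[3]
```

In a hybrid quantum-classical model, this expectation value would feed a small classical head, and `params` would be optimized by gradient descent alongside the classical weights; the circuit's amplitudes grow exponentially with qubit count, which is the source of the compactness argument.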
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 9420