Abstract: The novel view synthesis task takes a source image, its pose, and a target pose as input and renders the corresponding target image. However, synthesizing a sharp novel view from only a set of blurred images and their corresponding poses is a challenging problem. To address it, we leverage the strong performance of 3D Gaussian Splatting (3DGS) in 3D scene reconstruction and the remarkable effectiveness of event cameras for deblurring. Inspired by the Event-Enhanced Neural Radiance Fields (E2NeRF) model, which is likewise based on event enhancement, we propose Event-Enhanced 3DGS (Ev3DGS), a new 3DGS-based 3D reconstruction framework that combines data from event cameras and standard RGB cameras. We introduce the event stream into the 3D Gaussian optimization process by constructing a blur rendering loss and an event rendering loss, which guide optimization by modeling the blurred-image and event generation processes. Compared with E2NeRF, the proposed Ev3DGS framework improves rendering performance and reduces training time. Ev3DGS not only achieves image deblurring but also delivers high-quality novel view synthesis. Extensive experiments on both synthetic and real-world datasets show that Ev3DGS can effectively learn a sharp 3DGS representation from blurred image inputs, making 3DGS more robust. Our code and the datasets used are publicly available at https://github.com/HuuuangJW/Ev3DGS.
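To give a rough sense of how a blur rendering loss and an event rendering loss could be constructed, the PyTorch-style sketch below averages sharp sub-frame renders to predict the blurred image and matches log-intensity differences between consecutive sub-frames against accumulated events. This is only an illustrative sketch under common assumptions from event-based deblurring (e.g., E2NeRF-style supervision); the function name, tensor shapes, and the `event_threshold` parameter are hypothetical and not taken from the paper's released code.

```python
import torch

def blur_and_event_losses(rendered_subframes, observed_blur, event_integrals,
                          event_threshold=0.2, eps=1e-6):
    """Illustrative sketch (not the paper's actual implementation).

    rendered_subframes: (N, 3, H, W) sharp views rendered by 3DGS at N poses
                        sampled along the camera trajectory during exposure.
    observed_blur:      (3, H, W) captured blurry image.
    event_integrals:    (N-1, H, W) per-pixel signed event counts accumulated
                        between consecutive sub-frame timestamps.
    """
    # Blur rendering loss: approximate the physical blur by averaging the
    # sharp sub-frames over the exposure time, then compare to the observation.
    synthesized_blur = rendered_subframes.mean(dim=0)
    blur_loss = torch.abs(synthesized_blur - observed_blur).mean()

    # Event rendering loss: an event camera fires when log intensity changes by
    # the contrast threshold, so consecutive rendered sub-frames should differ
    # (in log grayscale) by roughly threshold * accumulated event count.
    gray = rendered_subframes.mean(dim=1)                 # (N, H, W) luminance proxy
    log_diff = torch.log(gray[1:] + eps) - torch.log(gray[:-1] + eps)
    event_loss = torch.abs(log_diff - event_threshold * event_integrals).mean()

    return blur_loss, event_loss
```

In such a formulation, both losses would be added to the usual 3DGS photometric objective so that the Gaussians are optimized to explain the blurred frames and the event stream simultaneously.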