Abstract: Existing methods for event stream super-resolution (SR) either require high-quality and high-resolution frames or underperform for
large-factor SR. To address these problems, we propose a recurrent neural
network for event SR that operates without frames. First, we design a temporal propagation net to incorporate neighboring and long-range event-aware contexts that facilitate event SR. Second, we build a spatiotemporal fusion net to reliably aggregate the spatiotemporal clues of the event stream. These two components work in close synergy, achieving satisfactory event SR results even at 16× SR. Synthetic and real-world
experimental results demonstrate the clear superiority of our method.
Furthermore, we evaluate our method on two downstream event-driven
applications, i.e., object recognition and video reconstruction, achieving
a remarkable performance boost over existing methods.
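
To make the described design concrete, here is a minimal, hypothetical sketch of a recurrent event-SR model with the two stated components, assuming events are binned into a voxel-grid sequence of shape (T, B, 1, H, W). All module names (TemporalPropagationNet, SpatiotemporalFusionNet, EventSR) and layer choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalPropagationNet(nn.Module):
    """Recurrently updates a hidden state so that neighboring and
    long-range event-aware context accumulates over time (illustrative)."""
    def __init__(self, ch):
        super().__init__()
        self.update = nn.Conv2d(2 * ch, ch, 3, padding=1)  # fuse input + state

    def forward(self, x, h):
        return torch.tanh(self.update(torch.cat([x, h], dim=1)))

class SpatiotemporalFusionNet(nn.Module):
    """Fuses the propagated temporal context with the current event
    representation and upsamples by the SR factor (illustrative)."""
    def __init__(self, ch, scale):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.up = nn.Sequential(
            nn.Conv2d(ch, ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),          # large factors (e.g. 16x) could
            nn.Conv2d(ch, 1, 3, padding=1),  # also be realized in stages
        )

    def forward(self, x, h):
        f = F.relu(self.fuse(torch.cat([x, h], dim=1)))
        return self.up(f)

class EventSR(nn.Module):
    def __init__(self, ch=32, scale=4):
        super().__init__()
        self.embed = nn.Conv2d(1, ch, 3, padding=1)
        self.prop = TemporalPropagationNet(ch)
        self.fusion = SpatiotemporalFusionNet(ch, scale)
        self.ch = ch

    def forward(self, events):  # events: (T, B, 1, H, W)
        T, B, _, H, W = events.shape
        h = events.new_zeros(B, self.ch, H, W)
        outs = []
        for t in range(T):
            x = F.relu(self.embed(events[t]))
            h = self.prop(x, h)                 # temporal propagation
            outs.append(self.fusion(x, h))      # spatiotemporal fusion + SR
        return torch.stack(outs)                # (T, B, 1, scale*H, scale*W)
```

For example, `EventSR(ch=32, scale=4)(torch.rand(8, 2, 1, 32, 32))` produces a (8, 2, 1, 128, 128) super-resolved event sequence under these assumptions.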