Abstract: The recently invented retina-inspired spike camera produces asynchronous binary spike streams that record the dynamic variation of light intensity. This paper develops a novel image reconstruction method, called SpikeFormer, which reconstructs dynamic scenes from binary spike streams using a supervised learning strategy. We construct a training dataset consisting of spike streams and corresponding ground truth images by simulating the working mechanism of the spike camera; spike noise is also modeled in the simulator. First, the input spike stream is encoded as an enlarged binary image by interlacing temporal and spatial information. The binary image is then fed into SpikeFormer to recover the dynamic scene. SpikeFormer adopts a Transformer architecture consisting of an encoder and a decoder. In particular, we propose a hierarchical encoder that progressively exploits multi-scale temporal and spatial features. The decoder aggregates information from different stages to incorporate both local and global attention. A multi-task loss combining reconstruction, perceptual, edge, and temporal consistency terms is used to constrain the model. Extensive experimental results demonstrate that the proposed framework achieves encouraging results in detail reconstruction and noise alleviation.
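As a rough illustration of the temporal-spatial interlacing step described above, the sketch below rearranges a binary spike stream of shape (T, H, W) into a single enlarged binary image. The exact layout is not specified in the abstract, so the block arrangement, the assumption that T is a perfect square, and the function name `interlace_spikes` are hypothetical choices for illustration only.

```python
import numpy as np

def interlace_spikes(spikes: np.ndarray) -> np.ndarray:
    """Interlace a binary spike stream (T, H, W) into one enlarged 2D binary image.

    Hypothetical layout: T is assumed to be a perfect square r*r; each spatial
    location (y, x) expands into an r x r block holding its T temporal samples,
    so the output mixes temporal and spatial information in a single image.
    """
    t, h, w = spikes.shape
    r = int(round(t ** 0.5))
    assert r * r == t, "this sketch assumes T is a perfect square"
    # (r, r, H, W) -> (H, r, W, r) -> (H*r, W*r)
    blocks = spikes.reshape(r, r, h, w)
    return blocks.transpose(2, 0, 3, 1).reshape(h * r, w * r)

# Example: a 64-frame stream over a 250x400 sensor becomes a 2000x3200 binary image.
stream = (np.random.rand(64, 250, 400) > 0.5).astype(np.uint8)
print(interlace_spikes(stream).shape)  # (2000, 3200)
```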