Keywords: event-cameras, event-based representations, velocity-invariant
Abstract: Event-cameras promise low-latency, high-temporal-resolution perception for a variety of computer vision tasks, especially in resource-constrained, highly dynamic scenarios such as robotics. The novel sensor circuitry (i.e. asynchronous, independent pixels) that enables these advantages also introduces new challenges for algorithm development. The temporal synchronisation of a global shutter can no longer be relied upon; instead, the temporal association of individually timestamped events becomes a non-trivial problem that must be addressed, especially when different parts of the scene move with different velocities. The proposed Set of Centre Active Receptive Fields (SCARF) addresses temporal association by maintaining an active set of events that represents the contrast changes across the scene at every precise moment in time, integrating information both spatially and temporally, while inherently accounting for variation in velocity across the image plane. There are no temporal parameters that need to be tuned to the motion present in a dataset. Experiments in this paper demonstrate that SCARF produces a representation more similar to Sobel filter output (i.e. a representation of intensity change similar to the data produced by event-cameras) than other, velocity-variant representations, while also achieving the lowest computational cost. The output can be sampled at sub-millisecond resolution as an edge image or as a set of sparse events, during fast motion, in real-time, and without motion blur.
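To make the abstract's central idea concrete, the following is a minimal sketch of a velocity-invariant active event set. It assumes the common approach of bounding each cell's active set by an event count rather than a time window, which is one way to avoid tuning temporal parameters to scene motion; the class name, structure, and capacity parameter are illustrative assumptions, not the authors' SCARF implementation.

```python
from collections import deque


class ActiveEventSet:
    """A per-pixel set of recent events, bounded by count (not time),
    so no temporal parameter needs tuning to the motion in a dataset."""

    def __init__(self, width, height, capacity=4):
        self.width, self.height = width, height
        # One fixed-capacity deque per pixel: each new event
        # automatically evicts the oldest once capacity is reached.
        self.cells = [[deque(maxlen=capacity) for _ in range(width)]
                      for _ in range(height)]

    def add_event(self, x, y, t, polarity):
        """Insert an event (timestamp, polarity) at pixel (x, y)."""
        self.cells[y][x].append((t, polarity))

    def sample_edge_image(self):
        """Sample the current active set as a binary edge image; valid
        at any instant, so it can be read at sub-millisecond resolution."""
        return [[1 if self.cells[y][x] else 0 for x in range(self.width)]
                for y in range(self.height)]
```

Because fast-moving edges generate more events, a count-bounded set retires them sooner, while slow edges persist longer, which is the intuition behind velocity invariance without a global time window.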
Submission Number: 1