Abstract: Event cameras are bio-inspired vision sensors that encode visual information with high dynamic range, high temporal resolution, and low latency. Current state-of-the-art event stream processing methods rely on end-to-end deep learning techniques. However, these models are heavily dependent on data structures, limiting their stability and generalization capabilities across tasks, thereby hindering their deployment in real-world scenarios. To address this issue, we propose a chaotic dynamics event signal processing framework inspired by the dorsal visual pathway of the brain. Specifically, we utilize a Continuous-Coupled Neural Network (CCNN) to encode the event stream. The CCNN encodes polarity-invariant event sequences as periodic signals and polarity-changing event sequences as chaotic signals. We then use continuous wavelet transforms to analyze the dynamical states of CCNN neurons and establish high-order mappings of the event stream. The effectiveness of our method is validated through integration with conventional classification networks, achieving state-of-the-art classification accuracy on the N-Caltech101 and N-CARS datasets, with results of 84.3% and 99.9%, respectively. Our method improves the accuracy of event camera-based object classification while significantly enhancing the generalization and stability of event representation.
Lay Summary: Event cameras offer high temporal resolution, low latency, and high dynamic range, making them well-suited for capturing fast-changing scenes. However, existing processing methods rely heavily on data-specific deep learning models, which often suffer from limited generalization and robustness in real-world scenarios. Inspired by the brain's dorsal visual pathway, we propose a biologically plausible framework for event signal processing based on chaotic dynamics. A Continuous-Coupled Neural Network (CCNN) is designed to encode polarity-invariant event sequences as periodic signals and polarity-changing ones as chaotic signals. These dynamics are then analyzed using continuous wavelet transforms to extract high-order, task-independent representations. Integrated with standard classification networks, our approach achieves state-of-the-art accuracy on the N-Caltech101 (84.3%) and N-CARS (99.9%) datasets. The results demonstrate that our method not only enhances classification performance but also significantly improves the stability and generalization of event-based representations, offering a promising direction for real-world deployment.
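For readers unfamiliar with coupled-neuron models of this family, the sketch below shows how a single neuron with a continuous sigmoid output and a decaying dynamic threshold responds to an input sequence. This is a minimal, simplified illustration in pure Python: the constants, the absence of lateral linking between neurons, and the function names are all assumptions for exposition, not the authors' CCNN implementation.

```python
import math

def ccnn_neuron(stimulus, steps=100):
    """Single simplified neuron sketch (no lateral linking between pixels).

    stimulus: function n -> external input S[n] (e.g. an event polarity signal).
    Returns the sequence of continuous outputs Y[n] in (0, 1).
    All constants below are illustrative, not the paper's parameters.
    """
    f, g, v_e = 0.9, 0.8, 1.0    # decay/amplification constants (assumed values)
    F = 0.0                      # feeding input (leaky integrator state)
    E = 0.0                      # dynamic threshold
    outputs = []
    for n in range(steps):
        F = f * F + stimulus(n)               # leaky integration of the input
        U = F                                 # internal activity (no linking term here)
        Y = 1.0 / (1.0 + math.exp(-(U - E)))  # continuous sigmoid output, not a hard step
        E = g * E + v_e * Y                   # threshold rises after firing, then decays
        outputs.append(Y)
    return outputs
```

Under a constant (polarity-invariant) stimulus, the interplay between the rising threshold and the sigmoid output produces a regular oscillation, while a sign-alternating stimulus perturbs that rhythm; the full model in the paper couples neurons across pixels and analyzes the resulting trajectories with continuous wavelet transforms.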
Primary Area: Deep Learning->Other Representation Learning
Keywords: Event Camera, Object Classification, Continuous-Coupled Neural Network
Submission Number: 742