Brain-inspired Robust Vision using Convolutional Neural Networks with Feedback

Published: 02 Oct 2019, Last Modified: 05 May 2023
Real Neurons & Hidden Units @ NeurIPS 2019 Poster
TL;DR: CNN-F extends CNN with a feedback generative network for robust vision.
Keywords: generative models, brain-inspired, robust vision
Abstract: Humans have the remarkable ability to correctly classify images despite possible degradation. Many studies have suggested that this hallmark of human vision results from the interaction between feedforward signals from bottom-up pathways of the visual cortex and feedback signals provided by top-down pathways. Motivated by such interaction, we propose a new neuro-inspired model, namely Convolutional Neural Networks with Feedback (CNN-F). CNN-F extends a CNN with a feedback generative network, combining bottom-up and top-down inference to perform approximate loopy belief propagation. We show that CNN-F's iterative inference allows for disentanglement of latent variables across layers. We validate the advantages of CNN-F over the baseline CNN. Our experimental results suggest that CNN-F is more robust to image degradations such as pixel noise, occlusion, and blur. Furthermore, we show that CNN-F can restore original images from degraded ones with high reconstruction accuracy while introducing negligible artifacts.
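
The abstract describes a feedforward CNN paired with a feedback generative network that iteratively refines inference on a degraded input. The sketch below is only a minimal illustration of that alternating bottom-up/top-down loop, not the authors' model: the `ConvEncoder`, `DeconvDecoder`, `iterative_inference`, `num_iters`, and `mix` names, the layer sizes, and the simple convex mixing of reconstruction with the observed input are all assumptions made for illustration.

```python
# Minimal sketch of iterative feedforward/feedback inference in the spirit of CNN-F.
# All architectural details here are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    """Bottom-up (feedforward) path: image -> class logits and latent features."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1)), h


class DeconvDecoder(nn.Module):
    """Top-down (feedback) path: latent feature map -> reconstructed image."""
    def __init__(self):
        super().__init__()
        self.generate = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, h):
        return self.generate(h)


def iterative_inference(encoder, decoder, x, num_iters: int = 3, mix: float = 0.5):
    """Alternate bottom-up and top-down passes: each iteration classifies the
    current image estimate, reconstructs it from the latent features, and mixes
    the reconstruction back with the observed (possibly degraded) input."""
    x_hat = x
    logits = None
    for _ in range(num_iters):
        logits, h = encoder(x_hat)
        recon = decoder(h)
        x_hat = mix * x + (1.0 - mix) * recon  # feedback-corrected input estimate
    return logits, x_hat


if __name__ == "__main__":
    encoder, decoder = ConvEncoder(), DeconvDecoder()
    noisy = torch.rand(4, 1, 28, 28)  # stand-in for degraded MNIST-sized images
    logits, restored = iterative_inference(encoder, decoder, noisy)
    print(logits.shape, restored.shape)  # torch.Size([4, 10]) torch.Size([4, 1, 28, 28])
```

In this toy loop the final `restored` tensor plays the role of the restored image the abstract mentions, while `logits` gives the classification after feedback refinement; the actual CNN-F couples the two paths layer-wise via approximate loopy belief propagation rather than through a single image-level mixing step.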