Reinforcement Based Learning on Classification Task Could Yield Better Generalization and Adversarial Accuracy

Published: 03 Nov 2020, Last Modified: 05 May 2023
SVRHM@NeurIPS Poster
Keywords: Adversarial Learning & Robustness, Adversarial Examples, Image Classification, Reinforcement Learning
TL;DR: Learning to classify the way humans learn can yield better generalization and adversarial accuracy
Abstract: Deep learning has become immensely popular in computer vision, attaining near or above human-level performance on various vision tasks. But recent work has also demonstrated that these deep neural networks are very vulnerable to adversarial examples (inputs that are perceptually similar to the original data but fool the model into predicting a wrong class). Humans are very robust against such perturbations; one possible reason could be that humans do not learn to classify based on an error between a "target label" and a "predicted label", but rather from the reinforcement they receive on their predictions. In this work, we propose a novel method to train deep learning models on an image classification task. We use a reward-based optimization objective, similar to the vanilla policy gradient method used in reinforcement learning, to train our model instead of the conventional cross-entropy loss. An empirical evaluation on the CIFAR-10 dataset shows that our method learns a more robust classifier than the same model architecture trained with the cross-entropy loss (under adversarial training). At the same time, our method generalizes better: the gap between test accuracy and train accuracy stays $< 2\%$ most of the time, whereas for the cross-entropy model it mostly remains $> 2\%$.
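As a rough illustration of the training objective described in the abstract, below is a minimal sketch of a REINFORCE-style (vanilla policy gradient) classification update in PyTorch. The reward definition (+1 for a correct sampled prediction, -1 otherwise), the helper name `policy_gradient_step`, and all hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def policy_gradient_step(model, optimizer, images, labels):
    """One REINFORCE-style update: treat the class prediction as an action
    sampled from the model's softmax output and reward correct predictions."""
    logits = model(images)                   # (batch, num_classes)
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()                  # sampled class predictions
    # Hypothetical reward: +1 if the sampled class is correct, -1 otherwise.
    rewards = (actions == labels).float() * 2.0 - 1.0
    # REINFORCE objective: minimize -E[reward * log pi(action | image)].
    loss = -(rewards * dist.log_prob(actions)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage on CIFAR-10 batches:
# model = torchvision.models.resnet18(num_classes=10)
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# loss = policy_gradient_step(model, optimizer, images, labels)
```

Sampling the prediction (rather than taking the argmax) is what makes the log-probability gradient well defined; in practice, REINFORCE-style objectives like this are often stabilized with a reward baseline to reduce gradient variance.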