Abstract: The reliability of a learning model is key to the
successful deployment of machine learning in various applications. Creating a robust model, particularly one unaffected by
adversarial attacks, requires a comprehensive understanding of
the adversarial examples phenomenon. However, it is difficult to
describe the phenomenon due to the complicated nature of the
problems in machine learning. It has been shown that adversarial
training can improve the robustness of the hypothesis. However,
this improvement comes at the cost of decreased performance on
natural samples. Hence, it has been suggested that the robustness
and accuracy of a hypothesis are at odds with each other.
In this paper, we put forth the alternative proposal that it
is the continuity of a hypothesis that is incompatible with its
robustness and accuracy. In other words, a continuous hypothesis
cannot effectively approximate the optimal robust hypothesis. To this
end, we introduce a framework for the rigorous study of
harmonic and holomorphic hypotheses in learning-theoretic terms,
and we provide empirical evidence that continuous hypotheses do
not perform as well as discontinuous hypotheses on some common
machine learning tasks. From a practical point of view, our
results suggest that a robust and accurate learning rule would
train different continuous hypotheses for different regions of the
domain. From a theoretical perspective, our analysis explains
the adversarial examples phenomenon as a conflict between the
continuity of a sequence of functions and its uniform convergence
to a discontinuous function.
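For context, this last claim rests on the classical uniform limit theorem; the statement below is a standard formulation added here for illustration, with the symbols $h_n$, $h^\ast$, and $X$ chosen by us rather than taken from the paper.

Let $h_n \colon X \to \mathbb{R}$ be a sequence of continuous functions on a metric space $X$. If $h_n \to h^\ast$ uniformly, i.e. $\sup_{x \in X} \lvert h_n(x) - h^\ast(x) \rvert \to 0$, then $h^\ast$ is continuous. Contrapositively, if the optimal robust hypothesis $h^\ast$ is discontinuous, then no sequence of continuous hypotheses can converge to it uniformly.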