Can Deep Networks be Highly Performant, Efficient and Robust simultaneously?

Published: 11 Nov 2023 · Last Modified: 05 Mar 2025 · British Machine Vision Conference 2023 · CC BY 4.0
Abstract: Performance alone is not enough for deep neural networks (DNNs); in real-world settings, the computational cost of training and robustness to adversarial attacks are just as important, or even more so. Prioritizing any one of these goals typically forces critical trade-offs against the others. Instead, we propose to target Performance, Efficiency, and Robustness concurrently, and ask how far we can push the envelope on achieving all three simultaneously. Our algorithm, CAPER, follows the intuition that samples that are highly susceptible to noise strongly affect the decision boundaries learned by DNNs, which in turn degrades both their performance and their adversarial robustness. By identifying and removing such samples, we demonstrate increased performance and adversarial robustness while training on only a subset of the data, thereby also improving training efficiency. Through our experiments, we highlight CAPER’s strong performance across multiple dataset-DNN combinations, and provide insights into its complementary behavior alongside existing adversarial training approaches, increasing robustness by over 11.6% while using up to 4% fewer FLOPs during training.
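To make the core intuition concrete, the sketch below illustrates one plausible way to identify and prune noise-susceptible training samples. The abstract does not specify CAPER’s exact criterion, so everything here is an assumption: the functions `noise_susceptibility` and `prune_dataset`, the Gaussian noise scale `sigma`, the trial count `n_trials`, and the use of a KL-divergence score are all hypothetical illustrations of the general idea, not the authors’ method.

```python
# Hypothetical sketch of noise-susceptibility pruning (not the paper's code).
# Idea: samples whose predictions shift strongly under small input noise are
# presumed to sit near the decision boundary; removing them yields a smaller,
# cleaner training subset.
import torch
import torch.nn.functional as F


@torch.no_grad()
def noise_susceptibility(model, x, sigma=0.05, n_trials=8):
    """Mean KL divergence between clean and noise-perturbed predictions.

    Higher scores mark samples whose predictive distribution is easily
    disturbed by Gaussian input noise. `sigma` and `n_trials` are assumed
    hyperparameters, not values from the paper.
    """
    model.eval()
    clean_log_p = F.log_softmax(model(x), dim=1)
    score = torch.zeros(x.size(0), device=x.device)
    for _ in range(n_trials):
        noisy = x + sigma * torch.randn_like(x)
        noisy_log_p = F.log_softmax(model(noisy), dim=1)
        # Per-sample KL(clean || noisy), accumulated over trials.
        score += F.kl_div(noisy_log_p, clean_log_p,
                          log_target=True, reduction="none").sum(dim=1)
    return score / n_trials


def prune_dataset(model, dataset, remove_frac=0.1, batch_size=256):
    """Drop the `remove_frac` most noise-susceptible samples (hypothetical)."""
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)
    scores = torch.cat([noise_susceptibility(model, x) for x, _ in loader])
    n_keep = int(len(dataset) * (1.0 - remove_frac))
    keep_idx = scores.argsort()[:n_keep]  # keep the least susceptible samples
    return torch.utils.data.Subset(dataset, keep_idx.tolist())
```

In this reading, the resulting `Subset` would then be fed to a standard (or adversarial) training loop, which is how a pruned subset could reduce training FLOPs while still improving robustness, as the abstract reports.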