Unifying Robust Activation Functions for Reduced Adversarial Vulnerability with the Parametric Generalized Gamma Function
Abstract: Adversaries minimally perturb deep learning input data to degrade a model's ability to produce domain-specific, data-driven recommendations for specialized tasks. This vulnerability to adversarial perturbations has been argued to stem from a learning model's nonlocal generalization over complex input data. Given the incomplete information in a complex dataset, a learning model captures nonlinear patterns between data points, producing volatility in the loss surface and exploitable regions of low-confidence knowledge. Because activation functions are responsible for capturing this nonlinearity in the data, they have inspired disjointed research efforts to create robust activation functions. This work unifies the properties of activation functions that contribute to robust generalization through the generalized gamma distribution function. We show that combining the disjointed characteristics presented in the literature with our parametric generalized gamma activation function provides more effective robustness than the individual characteristics alone.1

1 The source code for this research effort: https://github.com/sheilaalemany/generalized-gamma-activation.git
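The abstract does not give the closed form of the proposed activation, but it is built on the generalized gamma distribution, whose standard density is f(x; a, d, p) = (p / a^d) x^{d-1} e^{-(x/a)^p} / Γ(d/p) for x > 0, with scale a and shape parameters d and p. The sketch below implements only this standard density; it is a point of reference, not the paper's activation function, and the parameter names are the conventional ones rather than anything taken from the paper.

```python
import math

def generalized_gamma_pdf(x: float, a: float = 1.0, d: float = 1.0, p: float = 1.0) -> float:
    """Standard generalized gamma density for x > 0:

        f(x; a, d, p) = (p / a**d) * x**(d-1) * exp(-(x/a)**p) / Gamma(d/p)

    a is a scale parameter; d and p are shape parameters. Setting
    d = p = 1 recovers the exponential density, and p = 1 recovers
    the ordinary gamma density.
    """
    if x <= 0.0:
        return 0.0
    return (p / a**d) * x ** (d - 1) * math.exp(-((x / a) ** p)) / math.gamma(d / p)
```

With a = d = p = 1 the density reduces to e^{-x}, which gives a quick sanity check: `generalized_gamma_pdf(1.0)` should equal `math.exp(-1)`.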
External IDs: dblp:conf/icmla/AlemanyWDGP24