SAU: Smooth activation function using convolution with approximate identities

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Deep Learning, Neural Networks, Parametric activation function.
Abstract: Well-known activation functions like ReLU or Leaky ReLU are non-differentiable at the origin. Over the years, many smooth approximations of ReLU have been proposed using various smoothing techniques. We propose new smooth approximations of a non-differentiable activation function by convolving it with approximate identities. In particular, we present smooth approximations of Leaky ReLU and show that they outperform several well-known activation functions on various datasets and models. We call this function the Smooth Activation Unit (SAU). Replacing ReLU with SAU, we obtain a 5.12% improvement with the ShuffleNet V2 (2.0x) model on the CIFAR100 dataset.
One-sentence Summary: In this paper, we propose a smooth activation function, called SAU, and show that it outperforms widely used activation functions on a range of deep learning problems.
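
The sketch below is a rough illustration of the idea described in the abstract: smoothing Leaky ReLU by convolving it with an approximate identity. It assumes a Gaussian kernel (for which the convolution has a closed form); the class name `SAUSketch`, the `alpha`/`sigma` parameters, and the choice of kernel are illustrative assumptions and not necessarily the paper's exact formulation.

```python
import math
import torch
import torch.nn as nn


class SAUSketch(nn.Module):
    """Illustrative sketch of a Smooth Activation Unit (SAU).

    Smooths Leaky ReLU (negative slope `alpha`) by convolving it with a
    Gaussian approximate identity of width `sigma`. Both parameter names
    and the Gaussian kernel are assumptions made for this sketch.
    """

    def __init__(self, alpha: float = 0.01, sigma: float = 1.0, learnable: bool = True):
        super().__init__()
        self.alpha = alpha
        sigma_t = torch.tensor(float(sigma))
        # Optionally learn the smoothing width along with the network weights.
        self.sigma = nn.Parameter(sigma_t) if learnable else sigma_t

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Leaky ReLU decomposes as alpha*x + (1 - alpha)*ReLU(x).
        # Convolving ReLU with a Gaussian density of std sigma gives the
        # closed form x*Phi(x/sigma) + sigma*phi(x/sigma), where Phi and phi
        # are the standard normal CDF and PDF.
        z = x / self.sigma
        cdf = 0.5 * (1.0 + torch.erf(z / math.sqrt(2.0)))
        pdf = torch.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        smooth_relu = x * cdf + self.sigma * pdf
        return self.alpha * x + (1.0 - self.alpha) * smooth_relu


if __name__ == "__main__":
    act = SAUSketch(alpha=0.1, sigma=0.5)
    x = torch.linspace(-3, 3, 7)
    print(act(x))  # smooth everywhere, close to Leaky ReLU away from the origin
```

As `sigma` shrinks, the Gaussian approaches a Dirac delta and the sketch recovers Leaky ReLU; larger `sigma` yields a smoother curve around the origin.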