Abstract: It has increasingly been recognised that verification can contribute to the validation and debugging of neural networks before deployment, particularly in safety-critical areas. While progress has been made in neural network verification, present techniques still do not scale to the large ReLU-based networks used in many applications. In this paper we show that considerable progress can be made by employing Parametric ReLU (PReLU) activation functions in lieu of plain ReLU functions. We give training procedures that produce networks achieving an order-of-magnitude reduction in verification overheads and 30-100% fewer timeouts with VeriNet, a state-of-the-art Symbolic Interval Propagation-based verification toolkit, without compromising accuracy. Furthermore, we show that adversarial training combined with our approach improves certified robustness by up to 36% compared to adversarial training performed on baseline ReLU networks.
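The substitution at the heart of the approach can be illustrated in a few lines. The sketch below is our own illustration, not code from the paper: it assumes PyTorch, and the layer sizes, the `make_mlp` helper, and the PReLU slope initialisation are all illustrative. It shows a plain-ReLU baseline alongside a PReLU variant of the same architecture, where each PReLU layer carries a learnable negative slope.

```python
import torch.nn as nn

def make_mlp(act_factory):
    """Small fully connected classifier. A fresh activation module is built
    per layer so that PReLU's learnable negative slope is not shared."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 256), act_factory(),
        nn.Linear(256, 256), act_factory(),
        nn.Linear(256, 10),
    )

relu_net = make_mlp(nn.ReLU)                       # plain-ReLU baseline
prelu_net = make_mlp(lambda: nn.PReLU(init=0.25))  # PReLU variant
```

Both networks can be trained identically; only the activation differs, which is what allows the comparison in the abstract to isolate the effect of PReLU on verification times and certified robustness.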