Keywords: Robustness Validation, Hölder Optimisation, Geometric robustness, Hilbert curve dimensionality-reduction
TL;DR: NN robustness validation against geometric perturbations using Hölder Optimisation
Abstract: Neural Network (NN) verification methods provide local robustness
guarantees for an NN in the dense perturbation space around an input.
In this paper we introduce H$^2$V, a method for validating the local
robustness of NNs against geometric perturbations. H$^2$V uniquely combines a
Hilbert space-filling curve construction, which recasts the multi-dimensional
problem into a one-dimensional one, with Hölder optimisation, which iteratively
refines the estimate of the Hölder constant used to construct the lower bound.
In common with other optimisation-based methods, Hölder optimisation may in
theory converge to a local minimum, thereby yielding an incorrect robustness
result. However, we identify conditions under which H$^2$V is provably sound,
and show experimentally that even outside these conditions the risk of
incorrect results can be minimised by introducing appropriate heuristics into
the global optimisation procedure. Indeed, H$^2$V returned no incorrect
results on a large set of benchmarks from SoundnessBench and VNN-COMP.
To assess the scalability of the approach, we report results obtained on large
NNs ranging from ResNet34 to ResNet152, as well as vision transformers. These
point to state-of-the-art scalability when validating the local robustness of
large NNs against geometric perturbations on the ImageNet dataset. Beyond image
tasks, we show that the method's scalability enables, for the first time, the
robustness validation of large-scale 3D NNs in video classification tasks
against geometric perturbations over long input frame sequences on the
Kinetics and UCF101 datasets.
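
The minimal sketch below illustrates the general flavour of Hölder (Piyavskii/Shubert-style) one-dimensional global minimisation with an iteratively re-estimated Hölder constant, as referenced in the abstract. It is not the authors' H$^2$V implementation: the margin function, the unit-interval domain standing in for the Hilbert-curve parameter, and the constant-estimation heuristic are all placeholder assumptions for illustration only.

```python
import numpy as np

def holder_minimise(f, alpha=0.5, budget=200, safety=2.0, tol=1e-4):
    """Estimate the global minimum of f on [0, 1] assuming a Hoelder condition
    |f(x) - f(y)| <= H * |x - y|**alpha, with H re-estimated from samples."""
    xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]
    for _ in range(budget):
        order = np.argsort(xs)
        x = np.array(xs)[order]
        y = np.array(ys)[order]
        gaps = np.diff(x)
        # Re-estimate the Hoelder constant from observed divided differences,
        # inflated by a safety factor (a heuristic, not a soundness guarantee).
        H = safety * np.max(np.abs(np.diff(y)) / np.maximum(gaps ** alpha, 1e-12))
        # Valid per-interval lower bound: every point lies within gap/2 of an endpoint.
        lb = np.minimum(y[:-1], y[1:]) - H * (gaps / 2.0) ** alpha
        i = int(np.argmin(lb))
        if min(ys) - lb[i] < tol:        # best sample already close to the global bound
            break
        x_new = 0.5 * (x[i] + x[i + 1])  # split the most promising interval
        xs.append(x_new)
        ys.append(f(x_new))
    return min(ys)

if __name__ == "__main__":
    # Toy stand-in for "classification margin along the Hilbert-curve parameter".
    margin = lambda t: np.sin(12.0 * t) + 0.3 * t
    print(holder_minimise(margin))  # a positive minimum would suggest robustness along this slice
```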
Primary Area: Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 24436