Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 spotlight poster · CC BY 4.0
TL;DR: We propose a new orthogonal convolution and a novel loss function to enhance certified robustness.
Abstract: Lipschitz neural networks are well-known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the ability of orthogonal layers to construct more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points. This enables Lipschitz models to provide better certified robustness. By employing our BRO layer and loss function, we design BRONet — a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available at [GitHub Link](https://github.com/ntuaislab/BRONet).
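To illustrate why an orthogonal layer yields a Lipschitz guarantee, here is a minimal NumPy sketch of a block-reflector parameterization. The form Q = I − 2V(VᵀV)⁻¹Vᵀ is the standard block Householder reflector from linear algebra and is an assumption inferred from the layer's name; the paper's actual BRO construction may differ, so consult the linked repository for the real implementation. The key property is that Q is exactly orthogonal for any full-column-rank V, so the linear map x ↦ Qx is 1-Lipschitz by construction.

```python
import numpy as np

def block_reflector(V):
    """Build Q = I - 2 V (V^T V)^{-1} V^T from an unconstrained
    parameter matrix V (n x k, full column rank).

    Hypothetical sketch based on the standard block Householder
    reflector; not the paper's verified implementation.
    """
    n = V.shape[0]
    gram_inv = np.linalg.inv(V.T @ V)       # (V^T V)^{-1}, k x k
    return np.eye(n) - 2.0 * V @ gram_inv @ V.T

rng = np.random.default_rng(0)
V = rng.standard_normal((8, 3))             # n = 8, block rank k = 3
Q = block_reflector(V)

# Q^T Q = I: the layer preserves norms, hence is 1-Lipschitz.
print(np.allclose(Q.T @ Q, np.eye(8)))      # True
```

Because Q is orthogonal, ‖Qx − Qy‖₂ = ‖x − y‖₂ for all inputs, which is the property that lets certified-robustness bounds be composed layer by layer.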
Lay Summary: Today, artificial intelligence (AI) plays a role in many aspects of our lives, from image recognition to medical diagnostics. However, these systems can still be easily fooled by small changes to their input—such as a slightly modified photo—that mislead the AI but not a human. This vulnerability makes it difficult to fully trust AI in critical applications like self-driving cars or healthcare. To address this, we developed a new building block for AI models called the **Block Reflector Orthogonal (BRO) layer**. This component helps construct stronger **Lipschitz models**—a class of neural networks that respond more consistently to slight changes in input. Since these models can be harder to train effectively, we also introduced a new training objective called the **Logit Annealing Loss**, which encourages the model to learn better decision boundaries across a wider range of data, not just a subset. Our work moves us closer to creating AI systems that are not only intelligent but also robust, trustworthy, and safer to use in real-world settings.
Link To Code: https://github.com/ntuaislab/BRONet
Primary Area: Deep Learning->Robustness
Keywords: Certified robustness, Adversarial
Submission Number: 559