TL;DR: Certified robustness sets expectations among users that current techniques do not meet. We articulate these challenges and pose open questions to the community to make certification useful for improving model security.
Abstract: While certified robustness is widely promoted as a solution to adversarial examples in Artificial Intelligence systems, significant challenges remain before these techniques can be meaningfully deployed in real-world applications. We identify critical gaps in current research, including the paradox of detection without distinction, the lack of clear criteria for practitioners to evaluate certification schemes, and the potential security risks arising from users' expectations surrounding ``guaranteed'' robustness claims. These gaps create a misalignment between how certifications are presented and perceived and what they actually deliver. This position paper is a call to arms for the certification research community, proposing concrete steps to address these fundamental challenges and advance the field toward practical applicability.
Lay Summary: Certified robustness is a form of guaranteed defense of AI and machine learning models against manipulation. However, these techniques are a long way from being deployable to help protect real-world systems. Complicating matters is how these schemes are presented and named: it is entirely reasonable to assume that being certifiably robust means no additional security provisions are required.
This work presents a case for how AI security can be moved forward: by drawing more direct inspiration from the needs of real-world users, and by creating more honest, task-appropriate measures of success. In doing so, we hope to lay the foundations for the next era of developments in secure AI.
Verify Author Names: My co-authors have confirmed that their names are spelled correctly both on OpenReview and in the camera-ready PDF. (If needed, please update ‘Preferred Name’ in OpenReview to match the PDF.)
No Additional Revisions: I understand that after the May 29 deadline, the camera-ready submission cannot be revised before the conference. I have verified with all authors that they approve of this version.
Pdf Appendices: My camera-ready PDF file contains both the main text (not exceeding the page limits) and all appendices that I wish to include. I understand that any other supplementary material (e.g., separate files previously uploaded to OpenReview) will not be visible in the PMLR proceedings.
Latest Style File: I have compiled the camera ready paper with the latest ICML2025 style files <https://media.icml.cc/Conferences/ICML2025/Styles/icml2025.zip> and the compiled PDF includes an unnumbered Impact Statement section.
Paper Verification Code: YzY0N
Permissions Form: pdf
Primary Area: System Risks, Safety, and Government Policy
Keywords: Certified Robustness, Randomised Smoothing, Security
Submission Number: 96