GUARDIAN: Guarding Against Uncertainty and Adversarial Risks in Robot-Assisted Surgeries

03 Aug 2024 (modified: 01 Sept 2024) · MICCAI 2024 Workshop UNSURE Submission · CC BY 4.0
Keywords: Robotic surgery, adversarial attacks, adversarial training, uncertainty estimation, trustworthy robotic surgery
Abstract: In robotic-assisted surgeries such as laparoscopic cholecystectomy, the integration of deep learning (DL) models marks a significant advance toward surgical precision and minimal invasiveness, which in turn improves patient outcomes and shortens recovery times. However, the vulnerability of these DL models to adversarial attacks introduces a critical risk, underscoring the need for greater model robustness. Our study addresses this challenge with a comprehensive framework that not only fortifies surgical action recognition models against adversarial threats through adversarial training and pre-processing strategies, but also incorporates uncertainty estimation to improve prediction confidence and trustworthiness. Our framework demonstrates resilience against a wide spectrum of adversarial attacks and improved reliability in surgical tool detection under adversarial conditions, raising (instrument, verb, target) triplet prediction accuracy from 8\% to 23.58\%. These contributions strengthen the security and reliability of deep learning applications in the critical domain of robotic surgery, offering an approach that safeguards advanced surgical technologies against malicious threats and thereby promises better patient care and surgical precision. Code is available at https://github.com/umair1221/guardian.
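The adversarial training the abstract refers to hinges on crafting worst-case input perturbations and training on them. As a minimal, illustrative sketch only (not the authors' implementation; the logistic-regression "model", FGSM attack, and epsilon value are all assumptions for demonstration), a fast-gradient-sign perturbation can be generated like this:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """Craft an FGSM adversarial example for a toy logistic-regression model.

    Perturbs the input x by eps * sign(dL/dx), where L is the binary
    cross-entropy loss. This is the classic one-step attack; adversarial
    training would mix such examples into the training batches.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # dL/dx for BCE with a sigmoid output
    return x + eps * np.sign(grad_x)

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of the toy model on a single example."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Toy data: a single example with label 1 (all values are illustrative).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.05)
```

For a linear model the FGSM step provably increases the loss while keeping the perturbation within the eps-ball, which is why such examples are a useful stress test for the recognition models discussed above.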
Supplementary Material: pdf
Submission Number: 6