Towards Robust Foundation Models: Adversarial Contrastive Learning

Published: 16 Feb 2024, Last Modified: 28 Mar 2024, BT@ICLR2024, License: CC BY 4.0
Keywords: robust foundation model, adversarial robustness, adversarial contrastive learning
Blogpost Url: https://iclr-blogposts.github.io/2024/blog/robust-foundation-model/
Abstract: Foundation models pre-trained on large-scale unlabelled datasets using self-supervision generalize well to a wide range of downstream tasks. However, existing work has shown that adversarial attacks can effectively fool any downstream model obtained by fine-tuning a foundation model. The existence of such attacks necessitates robust foundation models that yield both standard generalization and adversarial robustness in safety-critical downstream tasks. Currently, adversarial contrastive learning (ACL) is one of the most effective methods for building robust foundation models. ACL combines contrastive learning with adversarial data to learn robust representations without requiring costly annotations. In this blog, based on two NeurIPS 2023 publications, we introduce two techniques for enhancing ACL's effectiveness and efficiency, respectively. (1) We introduce Adversarial Invariant Regularization (AIR), a state-of-the-art ACL algorithm. A causal theoretical framework is built to interpret ACL, and AIR is derived from this framework to regularize and improve ACL. (2) We introduce a Robustness-aware Coreset Selection (RCS) method to speed up ACL. RCS requires no label information and searches for an informative training subset that helps maintain the adversarial robustness of the learned representation. RCS enables, for the first time, the application of ACL to the large-scale ImageNet-1K dataset.
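To make the shared backbone of both papers concrete, below is a minimal PyTorch sketch of one ACL training step: adversarial views are crafted by maximizing a SimCLR-style NT-Xent contrastive loss with PGD, and the encoder is then updated to minimize that loss on the adversarial views. This is an illustrative sketch under stated assumptions, not the authors' released implementation: the toy encoder, the function names, and the hyperparameters (`epsilon`, `alpha`, `steps`, `temperature`) are all assumptions, and the AIR regularizer and RCS selection step are omitted.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two batches of projections."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, d)
    sim = z @ z.t() / temperature                        # pairwise similarities
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # exclude self-pairs
    n = z1.size(0)
    # the positive of view i is view i+n, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def pgd_contrastive(encoder, x1, x2, epsilon=8/255, alpha=2/255, steps=5):
    """Craft adversarial views by maximizing the contrastive loss with PGD."""
    d1 = torch.zeros_like(x1).uniform_(-epsilon, epsilon).requires_grad_(True)
    d2 = torch.zeros_like(x2).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(steps):
        loss = nt_xent_loss(encoder(x1 + d1), encoder(x2 + d2))
        g1, g2 = torch.autograd.grad(loss, [d1, d2])
        with torch.no_grad():                            # signed gradient ascent
            d1 += alpha * g1.sign()
            d2 += alpha * g2.sign()
            d1.clamp_(-epsilon, epsilon)                 # stay in the L-inf ball
            d2.clamp_(-epsilon, epsilon)
    # (a real pipeline would also clamp x + d to the valid pixel range)
    return (x1 + d1).detach(), (x2 + d2).detach()

def acl_step(encoder, optimizer, x1, x2):
    """One ACL step: attack the contrastive loss, then minimize it."""
    x1_adv, x2_adv = pgd_contrastive(encoder, x1, x2)
    loss = nt_xent_loss(encoder(x1_adv), encoder(x2_adv))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a toy encoder and random "augmented views" (illustrative only).
encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(16, 32),
)
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.1)
x1, x2 = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)
print(acl_step(encoder, optimizer, x1, x2))
```

In the full methods, AIR would add a causality-motivated invariance regularizer to this loss, and RCS would select which subset of the unlabelled data is fed into `acl_step` in the first place.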
Ref Papers: https://openreview.net/forum?id=zuXyQsXVLF, https://openreview.net/forum?id=fpzA8uRA95
Id Of The Authors Of The Papers: ~Xilie_Xu1, ~Jingfeng_Zhang1, ~Feng_Liu2, ~Masashi_Sugiyama1, ~Mohan_Kankanhalli1
Conflict Of Interest: N/A
Submission Number: 4