Attention Shift: Steering AI Away from Unsafe Content

Published: 10 Oct 2024, Last Modified: 08 Dec 2024
Venue: NeurIPS 2024 Workshop RBFM Poster
License: CC BY-NC 4.0
Keywords: AI Ethics, Diffusion Models, Unlearning, SafeGenerativeAI
TL;DR: Analysis of unsafe content removal techniques in outputs of generative models
Abstract: This study analyses the generation of unsafe or harmful content in state-of-the-art generative models, with a focus on techniques for restricting such generations. We introduce a training-free approach that uses attention reweighing to remove unsafe concepts at inference time, without additional training. We compare model performance after applying ablation techniques under both direct and jailbreak prompt attacks, hypothesize potential reasons for the observed results, and discuss the limitations and broader implications of the approaches.
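To make the idea of attention reweighing concrete, the sketch below shows one common way such a mechanism can work in the cross-attention layers of a diffusion model: attention weights assigned to prompt tokens of an unsafe concept are scaled down and the rows are renormalised. This is a minimal, NumPy-only illustration of the general technique, not the authors' exact method; the function name, signature, and the choice to reweigh after the softmax are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reweighted_cross_attention(q, k, v, unsafe_token_ids, weight=0.0):
    """Cross-attention with per-token reweighing (illustrative sketch).

    q: (n_query, d) image-patch queries
    k, v: (n_tokens, d) keys/values from the text-prompt embedding
    unsafe_token_ids: indices of prompt tokens tied to the unsafe concept
    weight: multiplier on those tokens' attention (0.0 removes them)
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)            # (n_query, n_tokens)
    attn = softmax(logits, axis=-1)
    attn[:, unsafe_token_ids] *= weight      # suppress unsafe-concept tokens
    attn /= attn.sum(axis=-1, keepdims=True) # renormalise each row
    return attn @ v, attn
```

Because the reweighing only rescales existing attention maps at inference time, it requires no gradient updates, which is what makes such approaches training-free.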
Submission Number: 22