Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning

ACL ARR 2024 June Submission 621 Authors

12 Jun 2024 (modified: 24 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: The open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress. This includes both base models, which are pre-trained on extensive datasets without alignment, and aligned models, deliberately designed to align with ethical standards and human values. Contrary to the prevalent assumption that the inherent instruction-following limitations of base LLMs serve as a safeguard against misuse, our investigation shows this assumption to be a critical oversight. By deploying carefully designed demonstrations, our research demonstrates that base LLMs can effectively interpret and execute malicious instructions. To systematically assess these risks, we introduce a novel set of risk evaluation metrics. Empirical results reveal that the outputs from base LLMs can exhibit risk levels on par with those of models fine-tuned for malicious purposes. This vulnerability, requiring neither specialized knowledge nor training, can be exploited by almost anyone, highlighting the substantial risk and the critical need for immediate attention to base LLMs' security protocols.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Large Language Models, Misuse Vulnerability, Risk Evaluation Metrics, Security Protocols
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English, Chinese, German, French
Submission Number: 621