Assessing biomedical knowledge robustness in large language models by query-efficient sampling attacks

TMLR Paper 3370 Authors

21 Sept 2024 (modified: 23 Nov 2024) · Decision pending for TMLR · CC BY 4.0
Abstract: The increasing depth of parametric domain knowledge in large language models (LLMs) is fueling their rapid deployment in real-world applications. Understanding model vulnerabilities in high-stakes and knowledge-intensive tasks is essential for quantifying the trustworthiness of model predictions and regulating their use. The recent discovery of named entities as adversarial examples (i.e., adversarial entities) in natural language processing tasks raises questions about their potential impact on the knowledge robustness of pre-trained and fine-tuned LLMs in high-stakes and specialized domains. We examined the use of type-consistent entity substitution as a template for collecting adversarial entities for medium-sized, billion-parameter LLMs with biomedical knowledge. To this end, we developed an embedding-space, gradient-free attack based on power-scaled distance-weighted sampling to assess the robustness of their biomedical knowledge with a low query budget and controllable coverage. Our method has favorable query efficiency and scaling compared with alternative approaches based on black-box gradient-guided search, which we demonstrated for adversarial distractor generation in biomedical question answering. Subsequent failure mode analysis uncovered two regimes of adversarial entities on the attack surface with distinct characteristics. We showed that entity substitution attacks can manipulate token-wise Shapley value explanations, which become deceptive in this setting. Our approach complements standard evaluations for high-capacity models, and the results highlight the brittleness of domain knowledge in LLMs.
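The abstract describes power-scaled distance-weighted sampling in embedding space only at a high level, so the sketch below is an illustrative reading rather than the authors' actual algorithm. It assumes cosine distance, a single power exponent controlling how sharply sampling concentrates on near neighbors of the anchor entity, and the function name `power_scaled_distance_weighted_sample`; all of these are assumptions not stated in the abstract.

```python
import numpy as np

def power_scaled_distance_weighted_sample(
    anchor_vec, candidate_vecs, power=4.0, n_samples=8, rng=None
):
    """Illustrative sketch (not the paper's algorithm): sample candidate
    entities with probability proportional to a power of their inverse
    embedding distance from an anchor entity, so that a small query batch
    covers type-consistent substitution candidates near the anchor."""
    rng = np.random.default_rng() if rng is None else rng
    # Cosine distance between the anchor entity and each candidate entity.
    a = anchor_vec / np.linalg.norm(anchor_vec)
    C = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    dist = 1.0 - C @ a
    # Power-scaling: larger exponents concentrate the sampling distribution
    # on the nearest candidates (the weighting direction is an assumption).
    weights = (1.0 / (dist + 1e-8)) ** power
    probs = weights / weights.sum()
    # Query efficiency comes from drawing a small batch of candidates to
    # query the target model, rather than scanning the full entity set.
    return rng.choice(len(candidate_vecs), size=n_samples, replace=False, p=probs)
```

In this reading, the power exponent trades off coverage of the candidate entity set against concentration near the anchor, which matches the abstract's claim of "a low query budget and controllable coverage"; the exact weighting and coverage controls used in the paper may differ.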
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=mNFcCGLYv1
Changes Since Last Submission:
1. Recovered the original fonts.
2. Reduced the length of the work by deleting and rewriting parts of the manuscript.
[10/23/24] Updated the manuscript to address the issues raised by reviewer D6wa.
[11/08/24] Updated the manuscript to address points 1, 3, and 4 raised by reviewer HHFG.
[11/12/24] Updated the manuscript to address point 2 raised by reviewer HHFG.
[11/13/24] Updated the manuscript to address the issues raised by reviewer rYjV.
[11/21/24] Minor fixes to math symbols and text in Appendix B.
Assigned Action Editor: ~Grigorios_Chrysos1
Submission Number: 3370