Keywords: RAG, SNLI, LLM, text generation, adversarial training, data augmentation, few‑shot learning, natural language inference
TL;DR: VAULT automates RAG for NLI: for each premise-label pair, it retrieves balanced contexts via semantic and lexical similarity, generates adversarial hypotheses, ensemble‑validates them, and injects them back into training to harden the model.
Abstract: We introduce VAULT, a fully automated adversarial RAG pipeline that systematically uncovers and remedies weaknesses in NLI models through three stages: retrieval, adversarial generation, and iterative retraining. First, we perform balanced few‑shot retrieval by embedding premises with both semantic (BGE) and lexical (BM25) similarity. Next, we assemble these contexts into LLM prompts to generate adversarial hypotheses, which are then validated by an LLM ensemble for label fidelity. Finally, the validated adversarial examples are injected back into the training set at increasing mixing ratios, progressively fortifying a zero‑shot target NLI model. On standard benchmarks, VAULT elevates RoBERTa‑base accuracy from 88.48% to 92.60% on SNLI (+4.12%), from 75.04% to 80.95% on ANLI (+5.91%), and from 54.67% to 71.99% on MultiNLI (+17.32%). It also consistently outperforms prior in‑context adversarial methods by up to 2.0% across datasets. By automating high‑quality adversarial data curation at scale, VAULT enables rapid, human‑independent robustness improvements in NLI tasks.
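The retrieval stage blends semantic and lexical similarity to select balanced few‑shot contexts. A minimal sketch of that hybrid scoring is below; it is illustrative only: the toy `embed` function and `lexical_score` stand in for the BGE encoder and BM25 scorer the abstract names, and the names `hybrid_retrieve` and `alpha` are assumptions, not the paper's API.

```python
import math

def embed(text, dim=16):
    # Toy stand-in for a BGE sentence embedding: deterministic
    # bag-of-words hashing into a fixed-size count vector.
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[sum(map(ord, tok)) % dim] += 1.0
    return vec

def lexical_score(query, doc):
    # Toy stand-in for BM25: normalized token overlap.
    q, d = set(query.lower().split()), set(doc.lower().split())
    if not q or not d:
        return 0.0
    return len(q & d) / math.sqrt(len(q) * len(d))

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def hybrid_retrieve(premise, corpus, k=2, alpha=0.5):
    """Rank candidate contexts by a weighted blend of semantic
    (embedding cosine) and lexical (overlap) similarity."""
    q_emb = embed(premise)
    scored = [
        (alpha * cosine(q_emb, embed(doc))
         + (1 - alpha) * lexical_score(premise, doc), doc)
        for doc in corpus
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [doc for _, doc in scored[:k]]

corpus = [
    "a dog runs in the park",
    "stock prices fell sharply today",
    "a cat sleeps on the couch",
]
top = hybrid_retrieve("a dog sleeps in the park", corpus, k=2)
```

In the full pipeline these retrieved contexts would be packed into the LLM prompt for adversarial hypothesis generation; the blend weight (here `alpha`) is the natural knob for balancing the two retrievers.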
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 10778