Keywords: synthetic data, crowdsourcing data, adversarial arena, data diversity
TL;DR: A framework for building high-quality conversational datasets by framing data generation as an adversarial task: attackers create prompts, and defenders generate responses.
Abstract: Post-training Large Language Models requires diverse, high-quality data, which is rare and costly to obtain, especially in low-resource domains and for multi-turn conversations. Common solutions are crowdsourcing or synthetic generation, but both often yield low-quality or low-diversity data. We introduce Adversarial Arena, a framework for building high-quality conversational datasets by framing data generation as an adversarial task: attackers create prompts, and defenders generate responses. This interactive competition between multiple teams naturally produces diverse and complex data. We validated this approach by conducting a competition with 10 academic teams from top US and European universities, each building attacker or defender bots. The competition, focused on safety alignment of LLMs in cybersecurity, generated 19,683 multi-turn conversations. Fine-tuning an open-source model on this dataset produced an 18.47\% improvement in secure code generation on CyberSecEval-Instruct and a 29.42\% improvement on CyberSecEval-MITRE.
Primary Area: datasets and benchmarks
Submission Number: 20739