On the Consideration of AI Openness: Can Good Intent Be Abused?

Published: 24 Jun 2024 · Last Modified: 01 May 2025 · IJCAI TIDMwFM 2024 Oral · License: CC BY-NC 4.0
Keywords: Openness, LLM, legal-AI, open-source, unethical
TL;DR: Openness in AI development, while key to progress, risks misuse. We show how an open-source LLM can be manipulated with a custom dataset (EVE), built from Korean legal precedents, into producing unethical responses.
Abstract: Openness is critical for the advancement of science. In particular, recent rapid progress in AI has been made possible only by various open-source models, datasets, and libraries. However, this openness also means that these technologies can be freely used for socially harmful purposes. Can open-source models or datasets be used for malicious purposes? If so, how easily can the technology be adapted to such goals? Here, we conduct a case study in the legal domain, a realm where individual decisions can have profound social consequences. To this end, we build EVE, a dataset of 200 question-answer pairs about criminal activities, each derived from one of 200 Korean precedents. We found that a widely used open-source LLM, which initially refuses to answer unethical questions, can be easily tuned with EVE to provide unethical and informative answers about criminal activities. This implies that although open-source technologies contribute to scientific progress, care must be taken to mitigate possible malicious use cases. Warning: This paper contains content that some may find unethical.
Submission Number: 1
