Secure and Trustworthy Large Language Models

Published: 08 Mar 2024, Last Modified: 08 Mar 2024
Venue: ICLR 2024 Workshops
Workshop Type: In-person
Keywords: large language models, trustworthy, reliability assessment
Abstract: Large Language Models (LLMs) have emerged as transformative tools in natural language processing, redefining benchmarks across tasks from machine translation to dialogue systems. However, these advancements bring intricate challenges concerning the security, transparency, and ethical dimensions of LLMs. These challenges, ranging from biases and the dissemination of misinformation to vulnerability to sophisticated attacks, have garnered considerable research attention. Our proposed workshop seeks to shine a spotlight on these pivotal issues, covering topics including, but not limited to, LLM reliability, interpretability, backdoor defenses, and emerging learning paradigms. The workshop aims to bridge the gap between academia and industry, offering a platform for rigorous discussion, collaborative brainstorming, and a showcase of the latest research breakthroughs. Through this endeavor, we aspire to pave the way toward more secure, transparent, and ethically grounded development of LLMs, underlining the importance of collaborative, cross-disciplinary efforts.
Submission Number: 6