Proposal: ICLR 2025 Workshop on Building Trust in Language Models and Applications

Published: 03 Dec 2024, Last Modified: 03 Dec 2024 · ICLR 2025 Workshop Proposals · CC BY 4.0
Keywords: Trustworthiness in LLMs, Explainability and interpretability of LLMs, Robustness of LLMs, Benchmarks and evaluation of trustworthy LLMs, Fairness in LLMs, Guardrails and regulations for LLMs
TL;DR: This workshop explores solutions for improving the trustworthiness of Large Language Models in real-world applications, focusing on ethics, safety, and regulation.
Abstract: As Large Language Models (LLMs) are rapidly adopted across diverse industries, concerns around their trustworthiness, safety, and ethical implications increasingly motivate academic research, industrial development, and legal innovation. LLMs are now integrated into complex applications, where they must navigate challenges related to data privacy, regulatory compliance, and dynamic user interactions. These complex applications amplify the potential for LLMs to violate users' trust. Ensuring the trustworthiness of LLMs is paramount as they transition from standalone tools to integral components of real-world applications used by millions. This workshop addresses the unique challenges posed by the deployment of LLMs, ranging from guardrails to explainability to regulation and beyond. It will bring together researchers and practitioners from academia and industry to explore cutting-edge solutions for improving the trustworthiness of LLMs and LLM-driven applications. The program will feature invited talks, a panel discussion, interactive breakout discussion sessions, and poster presentations, fostering rich dialogue and knowledge exchange. We aim to bridge the gap between foundational research and the practical challenges of deploying LLMs in trustworthy, user-centric systems.
Submission Number: 110