Keywords: Model understanding, explainable AI, attribution methods, model alignment, model behavior shaping, ML for science, human in the loop, trustworthy AI, physics-informed ML
TL;DR: We aim to bring together XAI and ML4Science researchers to understand and shape model behavior for the advancement of human knowledge.
Abstract: Machine learning (ML) models are impressive when they work, but they can also exhibit unreliable, untrustworthy, and harmful behavior. Such behavior is even more common in the era of large models, such as ChatGPT, which are being adopted rapidly even though we do not understand why they work so well and why they fail miserably at times. Unfortunately, such rapid dissemination encourages irresponsible use, for example, to spread misinformation or create deepfakes, while hindering efforts to use these models to solve pressing societal problems and advance human knowledge. Ideally, we want models with a human-like capacity to learn by observing, theorizing, and validating theories to improve our understanding of the world. At the very least, we want them to aid human knowledge and help us further enrich it. Our goal in this workshop is to bring together researchers working on understanding model behavior and to show how this key aspect can lead to the discovery of new human knowledge. The workshop will include theoretical topics on understanding model behavior, namely interpretability and explainability (XAI), as well as three distinct scientific application areas: weather and climate, healthcare, and materials science (ML4Science).
Submission Number: 100