Explainable Deep Multi-View Methodology for HEDP Foam Quality Assessment

Published: 08 Jul 2024, Last Modified: 23 Jul 2024, AI4Mat-Vienna-2024 Oral, CC BY 4.0
Submission Track: Full Paper
Submission Category: AI-Guided Design + Automated Material Characterization
Keywords: HEDP, Foam Quality, Multi-View Classification, Multi-View Explainability, Deep Learning, LIME, SHAP
Abstract: Physical experiments often involve multiple imaging representations, such as X-ray scans, microscopic or spectroscopic images, and diffraction patterns. Deep learning models are widely used for supervised analysis of these experiments, and combining the different image representations is frequently required to reach a proper decision. This gives rise to multi-view data: datasets in which each sample is described by several feature representations, or views, drawn from different angles, sources, or modalities. Such problems are addressed by multi-view learning. Understanding the decision-making process of deep learning models is essential for reliable and credible analysis, and many explainability methods have been devised in recent years. Nonetheless, multi-view models still lack proper explainability, as their architectures make them challenging to explain. In this paper, we propose four multi-view architectures for the vision domain, each suited to a different problem with different relations among its views, and present a methodology for explaining these models. To demonstrate the effectiveness of our methodology, we focus on the domain of High Energy Density Physics (HEDP) experiments, where multiple imaging representations are used to assess the quality of foam samples. We expand the existing dataset and apply our methodology to classify the foam samples using the suggested multi-view architectures. Through experimental results, we show that choosing the right architecture improves both accuracy (from 78\% to 84\%) and AUC (from 83\% to 93\%), while exposing a trade-off between performance and explainability. Specifically, we demonstrate that our approach enables the explanation of each individual one-view model through model-agnostic techniques, providing insights into the decision-making process of each view. This comprehensive understanding enhances the interpretability of the overall multi-view model. The sources of this work are available at: https://github.com/Scientific-Computing-Lab-NRCN/Multi-View-Explainability.git
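As an illustration of the multi-view setting described in the abstract, the sketch below shows a generic late-fusion classifier in PyTorch: one small encoder per imaging view and a shared classification head. It is an assumed, minimal design for illustration only; the encoder layout, view count, and class count are placeholders, and it is not a reconstruction of any of the paper's four architectures.

```python
import torch
import torch.nn as nn

class LateFusionMultiView(nn.Module):
    """Minimal late-fusion multi-view classifier sketch: one tiny CNN
    encoder per view, feature concatenation, and a shared linear head.
    All sizes here are illustrative placeholders."""

    def __init__(self, num_views: int, num_classes: int = 2):
        super().__init__()

        def make_encoder() -> nn.Sequential:
            # Tiny per-view CNN: (B, 3, H, W) -> (B, 32) feature vector.
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.encoders = nn.ModuleList(make_encoder() for _ in range(num_views))
        self.head = nn.Linear(32 * num_views, num_classes)

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # views: one (B, 3, H, W) tensor per imaging representation.
        feats = [enc(v) for enc, v in zip(self.encoders, views)]
        return self.head(torch.cat(feats, dim=1))

# Example: three imaging views of the same foam sample, batch of 4.
model = LateFusionMultiView(num_views=3)
views = [torch.randn(4, 3, 64, 64) for _ in range(3)]
logits = model(views)  # shape: (4, 2)
```

Because each view keeps its own encoder, any single-view branch can be probed independently, which is what makes the per-view, model-agnostic explanation described in the abstract possible.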
Submission Number: 2
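The per-view explanation step, applying a model-agnostic technique such as LIME to each one-view model separately, can be sketched as follows. `view_predict_fn` and the random `sample` image are hypothetical placeholders for a trained single-view model and a real foam scan; the parameter values are illustrative, not the paper's settings.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Hypothetical stand-in for one trained single-view model: maps a batch
# of H x W x 3 images (floats in [0, 1]) to probabilities over two
# classes, e.g. (normal foam, defective foam).
def view_predict_fn(images: np.ndarray) -> np.ndarray:
    p_defect = images.mean(axis=(1, 2, 3)).reshape(-1, 1)
    return np.hstack([1.0 - p_defect, p_defect])

# Placeholder image for this view; a real foam scan would go here.
sample = np.random.rand(128, 128, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    sample,
    view_predict_fn,
    top_labels=1,
    num_samples=500,  # perturbed samples; more gives stabler explanations
)

# Highlight the superpixels that most support the top class for this view.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)  # RGB image with region boundaries
```

Running this once per view yields a separate saliency overlay for each imaging representation, which is the kind of per-view insight the abstract refers to.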