Abstract: Deep neural networks (DNNs) have recently delivered impressive results across a variety of tasks, yet understanding how they make decisions remains challenging. Feature importance has emerged as a key technique for interpreting both shallow and deep neural networks. In this paper, we introduce XDATE, a framework that extends the Garson algorithm to Deep Belief Networks (DBNs) based on autoencoders (DBNA). Unlike traditional DNNs, DBNs offer a layered learning approach that captures hierarchical features more effectively, making them better suited to complex tasks. XDATE assesses feature importance across multiple hidden layers, providing a clearer view of the decision-making process. The XDATE framework demonstrates robustness by maintaining consistent feature importance metrics across different network depths in classification tasks. This stability, paired with efficient feature extraction, makes XDATE a valuable tool for interpreting DBNA. Our method's effectiveness is validated on various classification and regression datasets, highlighting the importance of feature selection and the potential of XDATE to improve model interpretability in real-world applications.
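To make the core idea concrete, below is a minimal sketch of how a Garson-style importance score can be propagated through multiple hidden layers of a network. The function name, the per-layer normalization, and the averaging over outputs are illustrative assumptions for exposition; they are not the paper's exact XDATE procedure.

```python
import numpy as np

def layerwise_garson_importance(weights):
    """Propagate absolute weight contributions through successive layers
    (a hypothetical multi-layer extension of Garson's algorithm).

    weights: list of weight matrices, weights[l] has shape
             (units in layer l, units in layer l+1)."""
    # Start from the first layer: absolute weights, normalized per hidden unit
    contrib = np.abs(weights[0])                          # (n_inputs, n_hidden1)
    contrib = contrib / contrib.sum(axis=0, keepdims=True)
    for W in weights[1:]:
        # Chain input contributions through the next layer's absolute weights
        contrib = contrib @ np.abs(W)
        contrib = contrib / contrib.sum(axis=0, keepdims=True)
    # Average contributions over output units and renormalize to sum to 1
    importance = contrib.mean(axis=1)
    return importance / importance.sum()
```

The resulting vector assigns each input feature a nonnegative score summing to one, so scores can be compared across networks of different depths, which is the kind of depth-consistency the abstract refers to.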