Neural Additive Tensor Decomposition for Sparse Tensors

Published: 01 Jan 2024 · Last Modified: 04 Mar 2025 · CIKM 2024 · CC BY-SA 4.0
Abstract: Given a sparse tensor, how can we accurately capture the complex latent structures inherent in the tensor while maintaining the interpretability of those structures? Tensor decomposition is a fundamental technique for analyzing tensors. Classical tensor models provide multi-linear structures that are easy to interpret, but they have limitations in capturing the complex structures present in real-world sparse tensors. Recent neural tensor models have extended the capabilities of classical tensor models in capturing complex structures within the data. However, this has come at the cost of interpretability: neural tensor models entangle interactions across and within latent structures in a black-box manner, making it difficult to readily understand the discovered structures. Understanding these structures, however, is crucial in applications such as healthcare, which requires transparency in critical decision-making processes. To overcome this major limitation and bridge the gap between classical multi-linear models and neural tensor models, we propose Neural Additive Tensor Decomposition (NeAT), an accurate and interpretable neural tensor model for sparse tensors. The main idea of NeAT is to apply a neural network to each latent component in an additive fashion. This not only captures diverse patterns and complex structures in sparse tensors, but also provides a direct and intuitive interpretation of those structures by staying close to the multi-linear tensor model. We conduct extensive experiments on six large real-world sparse tensors. NeAT outperforms state-of-the-art neural tensor models in link prediction, surpassing a linear tensor model by 10% and the second-best neural tensor model by 4% in accuracy. Through ablation studies, we explore various model designs for NeAT to identify the key factors that affect generalization. Finally, we qualitatively and quantitatively evaluate the latent patterns discovered by NeAT, demonstrating how the latent patterns it finds in real data can be analyzed.
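
To make the additive, per-component idea concrete, the following is a minimal, hypothetical sketch of a NeAT-style model for a third-order sparse tensor. The class name `AdditiveNeuralCP`, the embedding layout, the layer sizes, and the exact way each component's factors are fed to its network are assumptions made for illustration, not the authors' implementation; the only property taken from the abstract is that each latent component has its own small neural network and the components' outputs are summed, preserving a CP-like additive structure in which individual components can be inspected.

```python
# Hypothetical sketch of an additive per-component neural tensor model
# (NeAT-style), assuming a 3rd-order sparse tensor observed as (index, value) pairs.
import torch
import torch.nn as nn

class AdditiveNeuralCP(nn.Module):
    def __init__(self, dims, rank, comp_dim=8, hidden=32):
        super().__init__()
        self.rank, self.comp_dim = rank, comp_dim
        # One embedding table per tensor mode; each entity gets `rank` blocks of size `comp_dim`.
        self.factors = nn.ModuleList([nn.Embedding(d, rank * comp_dim) for d in dims])
        # One small MLP per latent component, applied to that component's blocks from all modes.
        self.component_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(len(dims) * comp_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(rank)
        ])

    def forward(self, idx):
        # idx: (batch, n_modes) integer indices of observed tensor entries.
        blocks = [f(idx[:, m]).view(-1, self.rank, self.comp_dim)
                  for m, f in enumerate(self.factors)]        # each: (batch, rank, comp_dim)
        per_comp = torch.cat(blocks, dim=-1)                  # (batch, rank, n_modes * comp_dim)
        # Each component r has its own network; summing their scalar outputs keeps the
        # additive, CP-like structure that allows per-component interpretation.
        out = sum(net(per_comp[:, r, :]) for r, net in enumerate(self.component_nets))
        return out.squeeze(-1)

# Usage sketch: score a batch of observed entries of a 100 x 80 x 60 sparse tensor.
model = AdditiveNeuralCP(dims=(100, 80, 60), rank=8)
entries = torch.randint(0, 60, (16, 3))   # fake indices, for illustration only
predictions = model(entries)              # (16,) predicted entry values / link scores
```

Because each component contributes an additive scalar term, a component can be examined in isolation (e.g., by inspecting its factor blocks or its contribution to a prediction), which is the interpretability property the abstract attributes to staying close to the multi-linear model.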