Abstract: Tensors (multiway arrays) and tensor decompositions (TDs) have recently received tremendous attention in the data analytics community, due to their ability to mitigate the curse of dimensionality associated with modern large-dimensional big data [1], [2]. Indeed, TDs allow for data volume (e.g., the parameter complexity) to be reduced from scaling exponentially to scaling linearly in the tensor dimensions, which facilitates applications in areas including the compression and interpretability of neural networks [1], [3], multimodal learning [1], and completion of knowledge graphs [4], [5]. At the heart of TD techniques is the tensor contraction product (TCP), an operator used for representing even the most unmanageable higher-order tensors through a set of small-scale core tensors that are interconnected via TCP operations [2].
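As a rough illustration of the TCP and the exponential-to-linear reduction in parameter complexity described above, the following NumPy sketch (not from the paper; the order, dimensions, and ranks are arbitrary assumptions) chains contractions of small core tensors in a tensor-train-style format to reconstruct a full order-4 tensor:

```python
import numpy as np

# Illustrative sketch only: a tensor-train-style representation of an
# order-4 tensor with mode dimension I = 10 and internal ranks R = 3,
# stored as four small core tensors (shapes are assumptions).
I, R = 10, 3
cores = [
    np.random.randn(1, I, R),  # G1: boundary core (rank 1 on the left)
    np.random.randn(R, I, R),  # G2
    np.random.randn(R, I, R),  # G3
    np.random.randn(R, I, 1),  # G4: boundary core (rank 1 on the right)
]

# TCP: contract adjacent cores over their shared rank index.
# Chaining the contractions reconstructs the full order-4 tensor.
full = cores[0]
for core in cores[1:]:
    full = np.tensordot(full, core, axes=([-1], [0]))
full = full[0, ..., 0]  # drop the boundary rank-1 modes

print(full.shape)                  # (10, 10, 10, 10)
print(I**4)                        # dense storage: 10_000 entries (exponential in the order)
print(sum(c.size for c in cores))  # core storage: 240 entries (linear in the order)
```

Storing the cores requires a number of parameters that grows linearly with the tensor order (one small core per mode), whereas the dense tensor grows exponentially, which is the complexity reduction the abstract refers to.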