Abstract: In recent years, tensor computation has become a promising tool for solving problems in big data analysis, machine learning, medical imaging, and EDA. To ease the memory and computation intensity of tensor processing, decomposition techniques, especially tensor-train decomposition (TTD), are widely adopted to compress extremely high-dimensional tensor data. Despite TTD's potential to break the curse of dimensionality, researchers have not yet leveraged its full computational potential, mainly for two reasons: 1) executing TTD itself is time- and energy-consuming due to the singular value decomposition (SVD) operation inside each of TTD's iterations and 2) additional software/hardware optimizations are often required to process the obtained TT-format data in certain applications such as deep learning inference. In this article, we address these challenges with two approaches. First, we propose an algorithm-hardware co-design with a customized architecture, namely, TTD Engine, to accelerate TTD. We use MRI image compression as a demo application to illustrate the efficacy of the proposed accelerator. Second, we present a case study demonstrating the benefit of TT-format data processing and the efficacy of using TTD Engine. In the case study, we use the TT approach to realize the convolution operation, which is difficult and nontrivial for TT-format data. Experimental results show that TTD Engine achieves, on average, $14.9\times$–$36.9\times$ speedup over CPU implementations and $4.1\times$–$9.9\times$ speedup compared to the GPU baseline. The energy efficiency is also improved by at least $14.4\times$ and $5.4\times$ over CPU and GPU, respectively. Moreover, our hardware-enabled TT-format data processing further leads to more efficient implementations of complicated operations and applications.
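The abstract attributes TTD's runtime cost to the SVD performed in every iteration of the decomposition. For context, below is a minimal NumPy sketch of the standard TT-SVD procedure (Oseledets-style sequential truncated SVDs over unfoldings), not the paper's accelerated hardware implementation; the function name `tt_svd` and the uniform `max_rank` truncation are illustrative assumptions.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into TT cores via repeated truncated SVD.

    One truncated SVD is performed per tensor mode; this per-iteration
    SVD is the bottleneck the paper proposes to accelerate in hardware.
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    # Unfold the tensor along the first mode.
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        # Truncated SVD of the current unfolding (the costly step).
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(S))
        # The left factor becomes the k-th TT core.
        cores.append(U[:, :r_new].reshape(rank, shape[k], r_new))
        # Carry the remainder forward and unfold along the next mode.
        mat = (np.diag(S[:r_new]) @ Vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    # The final remainder is the last core.
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

# Example: compress a random 4-way tensor with TT-ranks capped at 4.
cores = tt_svd(np.random.rand(8, 8, 8, 8), max_rank=4)
print([c.shape for c in cores])
```

Since each iteration's SVD operates on a progressively reshaped unfolding whose size depends on the tensor dimensions and TT-ranks, the sequential chain of SVDs dominates both time and energy, which is consistent with the motivation stated above.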