Real-Valued Backpropagation is Unsuitable for Complex-Valued Neural Networks

Published: 31 Oct 2022, Last Modified: 01 Jan 2023 · NeurIPS 2022 Accept · Readers: Everyone
Keywords: complex-valued neural network, complex backpropagation, neural tangent kernel
TL;DR: We theoretically show that real-valued backpropagation reduces the training dynamics of complex networks to that of ordinary real networks as the widths grow.
Abstract: Recently, complex-valued neural networks have received increasing attention due to their successful applications in various tasks, as well as their potential advantages of better theoretical properties and richer representational capacity. However, how the training dynamics of complex networks compare to those of real networks remains an open problem. In this paper, we investigate the dynamics of deep complex networks under real-valued backpropagation in the infinite-width limit via the neural tangent kernel (NTK). We first extend the Tensor Program to the complex domain to show that the dynamics of any basic complex network architecture are governed by its NTK under real-valued backpropagation. Then we propose a way to compare the training dynamics of complex and real networks by studying their NTKs. As a result, we prove the surprising fact that for most complex activation functions, the commonly used real-valued backpropagation reduces the training dynamics of complex networks to that of ordinary real networks as the widths tend to infinity, thus eliminating the characteristics of complex-valued neural networks. Finally, experiments validate our theoretical findings numerically.
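As a rough numerical illustration of the central claim, the sketch below compares the empirical NTK of a one-hidden-layer complex network under real-valued backpropagation (real and imaginary parts of each complex weight treated as separate real parameters) against that of a width-matched real network. This is not the paper's code: the architecture, the split activation σ(z) = ReLU(Re z) + i·ReLU(Im z), the 1/√n scaling, and the factor-2 normalization are all illustrative assumptions.

```python
# A minimal sketch (not the paper's code): compare the empirical NTK of a
# one-hidden-layer complex network under real-valued backpropagation with
# that of a width-matched real network, as the width n grows.
import torch

torch.manual_seed(0)

def empirical_ntk(f, params, xs):
    # Gram matrix K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>.
    grads = []
    for x in xs:
        g = torch.autograd.grad(f(x), params)
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    G = torch.stack(grads)
    return G @ G.T

def make_real_net(d, n):
    # f(x) = v . relu(W x) / sqrt(n), an ordinary real network.
    W = torch.randn(n, d, requires_grad=True)
    v = torch.randn(n, requires_grad=True)
    def f(x):
        return v @ (torch.relu(W @ x) / n ** 0.5)
    return f, [W, v]

def make_complex_net(d, n):
    # Real-valued backprop treats the real and imaginary parts of each
    # complex weight as separate real parameters (assumed setup).
    Wr = torch.randn(n, d, requires_grad=True)
    Wi = torch.randn(n, d, requires_grad=True)
    vr = torch.randn(n, requires_grad=True)
    vi = torch.randn(n, requires_grad=True)
    def f(x):
        # Split activation: sigma(z) = relu(Re z) + i * relu(Im z).
        hr = torch.relu(Wr @ x) / n ** 0.5
        hi = torch.relu(Wi @ x) / n ** 0.5
        # Scalar prediction: Re[(vr + i vi) . (hr + i hi)] = vr.hr - vi.hi.
        return vr @ hr - vi @ hi
    return f, [Wr, Wi, vr, vi]

d = 4
xs = [torch.randn(d) for _ in range(5)]
for n in (64, 256, 1024, 4096):
    fr, pr = make_real_net(d, n)
    fc, pc = make_complex_net(d, n)
    Kr = empirical_ntk(fr, pr, xs)
    # In this split construction the complex network has two independent
    # parameter blocks, so its kernel is twice the real one in the limit;
    # divide by 2 to put the two kernels on the same scale.
    Kc = empirical_ntk(fc, pc, xs) / 2.0
    rel = ((Kr - Kc).norm() / Kr.norm()).item()
    print(f"width {n}: relative difference {rel:.3f}")
```

If the reduction result holds for this activation, the printed relative difference should shrink as the width grows, reflecting the two empirical kernels converging to the same deterministic limit.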
Supplementary Material: pdf
