Abstract: Feature whitening [16] is a well-known technique for speeding up the training of deep neural networks (DNNs).
Under certain assumptions, whitening the activations [7] reduces the Fisher information matrix to the identity matrix, in which case stochastic gradient
descent is equivalent to the faster natural gradient descent. However, due to the additional
cost of transforming the layer inputs and their corresponding
gradients during forward and backward propagation, and of repeatedly computing
the eigenvalue decomposition (EVD), this method is not commonly used to date.
In this work, we address the complexity drawbacks of feature whitening. Our
contribution is twofold. First, we derive an equivalent method that replaces
the transformations of the samples with a single transformation of the weight gradients, applied
once per batch of B samples. The complexity is thereby reduced by a factor of S/(2B), where
S denotes the feature dimension of the layer output. As the batch size increases
with distributed training, the benefit of using the proposed method becomes more
compelling. Second, motivated by the theoretical relation between the condition
number of the sample covariance matrix and the convergence speed, we derive an
alternative sub-optimal algorithm which recursively reduces the condition number
of that matrix. Compared to EVD, the complexity is reduced by a factor of
the input feature dimension M. We demonstrate the proposed algorithms on
ResNet-based networks for image classification using the CIFAR and
ImageNet datasets. Parallelizing the proposed algorithms is straightforward, and we
implement a distributed version. Our experiments show improved convergence
in terms of both speed and attained accuracy.
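For context, the sketch below illustrates the baseline operation the abstract refers to: conventional EVD-based (ZCA-style) whitening of a batch of layer activations. It is a generic illustration under simple assumptions, not the paper's gradient-side reformulation; the function name, array shapes, and the regularization constant eps are illustrative choices.

```python
import numpy as np

def whiten_activations(X, eps=1e-5):
    """Generic ZCA-style feature whitening via EVD (illustrative sketch).

    X   : (B, M) batch of layer inputs (B samples, M features).
    eps : small constant for numerical stability (assumed value).
    Returns a whitened batch with approximately identity covariance.
    """
    Xc = X - X.mean(axis=0, keepdims=True)           # center the batch
    cov = Xc.T @ Xc / X.shape[0]                     # (M, M) sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # EVD of the covariance
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T  # cov^{-1/2}
    return Xc @ W                                    # whitened activations

# Usage: whiten a random batch and verify the resulting covariance is ~identity.
X = np.random.randn(256, 32)
Xw = whiten_activations(X)
print(np.allclose(Xw.T @ Xw / X.shape[0], np.eye(32), atol=1e-2))
```

The per-batch EVD and the input/gradient transformations in this baseline are exactly the costs the proposed gradient-side and condition-number-reduction variants aim to avoid.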