Abstract: The concept of weight pruning has shown success in neural network model compression with only marginal loss in classification performance. However, similar concepts have not been widely explored for improving unsupervised learning. To the best of our knowledge, this paper presents one of the first studies of weight pruning in unsupervised autoencoder models on non-imaging data. We adapt the weight pruning concept to
investigate the dynamic behavior of weights while reconstructing
data using an autoencoder and propose a deterministic model
perturbation algorithm based on weight statistics. At periodic intervals, the perturbation resets a percentage of weight values using a binary weight mask. Experiments across eight non-imaging data sets, ranging from gene sequence to swarm behavior data, show that only a few periodic weight perturbations improve the data reconstruction accuracy of autoencoders and additionally introduce model compression. All data sets yield a small proportion (<5%) of weights that are substantially higher than the mean weight value. These weights are found to be much more informative than the substantial proportion (>90%) of weights
with negative values. In general, perturbing low or negative weight values at periodic intervals has reduced the data reconstruction loss for most data sets compared to the case without perturbation. The proposed approach may help explain and correct the dynamic behavior of neural network models in a deterministic way, improving data reconstruction and yielding a more accurate representation of latent variables in autoencoders.
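
The periodic, mask-based reset described above can be sketched in code. The following is a minimal illustration, assuming PyTorch; the zero threshold, the perturbation period, and the names perturb_weights and train are hypothetical stand-ins, since the paper's actual reset rule is derived from its weight statistics.

```python
import torch
import torch.nn as nn

def perturb_weights(model: nn.Module, threshold: float = 0.0) -> None:
    # Build a binary mask keeping weights above `threshold` and reset
    # the rest (e.g. low or negative values) to zero, in place.
    with torch.no_grad():
        for param in model.parameters():
            if param.dim() > 1:  # weight matrices only; skip biases
                mask = (param > threshold).to(param.dtype)  # binary weight mask
                param.mul_(mask)

def train(autoencoder, loader, epochs=100, period=20, lr=1e-3):
    # Standard reconstruction training with a deterministic
    # perturbation applied every `period` epochs.
    opt = torch.optim.Adam(autoencoder.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # reconstruction loss
    for epoch in range(epochs):
        for x in loader:
            opt.zero_grad()
            loss = loss_fn(autoencoder(x), x)
            loss.backward()
            opt.step()
        if (epoch + 1) % period == 0:
            perturb_weights(autoencoder)  # periodic deterministic reset
```

Zeroing low or negative weights both perturbs the optimization trajectory and sparsifies the weight matrices, which is one way such periodic resets could also yield the model compression the abstract reports.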