Keywords: Sparsity, Unsupervised Learning, Single Layer Models
Abstract: We study the emergence of sparse representations in neural networks. We show that in unsupervised models with regularization, sparsity emerges because the input data samples are distributed along a highly non-linear or discontinuous manifold. We derive a similar argument for discriminatively trained networks and present experiments supporting this hypothesis. Based on our study of sparsity, we introduce a new loss function that can be used as a regularization term for models such as autoencoders and MLPs. The same loss function can also serve as the cost function of an unsupervised single-layer neural network model for learning efficient representations.
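
The abstract does not specify the form of the proposed loss. As a point of reference only, the sketch below shows a conventional sparsity-inducing regularizer (an L1 penalty on hidden activations) added to a toy autoencoder's reconstruction objective in PyTorch; the architecture, layer sizes, and penalty weight are illustrative assumptions and do not reproduce the authors' method.

```python
# Generic sketch of a sparsity-regularized autoencoder (not the paper's loss).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        code = self.encoder(x)       # hidden representation
        recon = self.decoder(code)   # reconstruction of the input
        return recon, code

def loss_fn(recon, x, code, sparsity_weight=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse codes.
    return nn.functional.mse_loss(recon, x) + sparsity_weight * code.abs().mean()

# Example usage on random data.
model = SparseAutoencoder()
x = torch.randn(32, 784)
recon, code = model(x)
loss = loss_fn(recon, x, code)
loss.backward()
```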