Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint

Xanadu Halkias, S├ębastien PARIS, Herve Glotin

Jan 20, 2013 (modified: Jan 20, 2013) ICLR 2013 conference submission
  • Decision: reject
  • Abstract: Deep Belief Networks (DBN) have been successfully applied to popular machine learning tasks. When applied to handwritten digit recognition, for example, DBNs have achieved accuracy rates of approximately 98.8%. In an effort to optimize the data representation learned by the DBN and maximize its descriptive power, recent advances have focused on inducing sparse constraints at each layer of the DBN. In this paper we present a theoretical approach to sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups. We explore how these constraints affect the classification accuracy for digit recognition on three different datasets (MNIST, USPS, RIMES) and provide initial estimates of their usefulness by varying parameters such as the group size and overlap percentage.
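The abstract does not spell out the mixed-norm formulation, but a common instance is the ℓ1/ℓ2 group norm: the hidden activations are partitioned into groups, the ℓ2 norm is taken within each group, and the ℓ1 sum of those norms forms the penalty, which drives whole groups of units toward zero. The sketch below is a hypothetical illustration of that penalty for non-overlapping groups of a fixed size, not the paper's exact implementation.

```python
import numpy as np

def mixed_norm_penalty(h, group_size):
    """Illustrative L1/L2 mixed-norm penalty over non-overlapping groups.

    h          : 1-D array of hidden-unit activations
    group_size : number of units per group (assumes len(h) is divisible by it)

    Returns the sum over groups of each group's L2 norm, a form that
    encourages entire groups of activations to be zero.
    """
    groups = h.reshape(-1, group_size)          # one row per group
    group_norms = np.sqrt((groups ** 2).sum(axis=1))  # L2 norm within groups
    return group_norms.sum()                    # L1 sum across groups

# Two groups of size 2: norms are 0 and 5, so the penalty is 5.0.
h = np.array([0.0, 0.0, 3.0, 4.0])
print(mixed_norm_penalty(h, group_size=2))  # -> 5.0
```

Overlapping groups, also studied in the paper, would instead sum the norms of groups that share units, so a single unit can contribute to several group norms.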