Learning non-linear transform with discriminative and minimum information loss priors
Nov 03, 2017 (modified: Dec 12, 2017) · ICLR 2018 Conference Blind Submission
Abstract: This paper proposes a novel approach for learning discriminative and sparse representations. It utilizes two different models: a predefined number of non-linear transform models in the learning stage, and a single sparsifying transform model at test time. The non-linear transform models carry discriminative and minimum information loss priors. A novel measure related to the discriminative prior is proposed, defined on the support intersection of the transform representations. The minimum information loss prior is expressed as a constraint on the conditioning and the expected coherence of the transform matrix. An equivalence between the non-linear models and the sparsifying model is shown to hold only when the measure defining the discriminative prior goes to zero. An approximation of this measure is also addressed, connecting it to a similarity concentration. To quantify the discriminative properties of the transform representation, another measure, named discrimination power, is introduced and its bounds are presented.
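The core ingredients described above can be illustrated with a minimal sketch: a sparsifying transform realized by hard thresholding, a support-intersection measure (which the discriminative prior drives toward zero for samples from different classes), and a conditioning check on the transform matrix (related to the minimum information loss prior). All names here (`hard_threshold`, `support_overlap`, `W`) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z; zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def support_overlap(a, b):
    """Normalized size of the intersection of the supports of two
    sparse codes; a value of 0 means disjoint supports."""
    sa, sb = a != 0, b != 0
    return np.sum(sa & sb) / max(np.sum(sa | sb), 1)

# A random transform matrix stands in for the learned transform;
# the minimum information loss prior constrains its conditioning.
W = rng.standard_normal((64, 32))
s = np.linalg.svd(W, compute_uv=False)
condition_number = s[0] / s[-1]

# Sparse transform representations of two samples.
x1, x2 = rng.standard_normal(32), rng.standard_normal(32)
y1 = hard_threshold(W @ x1, k=8)
y2 = hard_threshold(W @ x2, k=8)
print(support_overlap(y1, y2))  # value in [0, 1]
```

In the paper's setting, the learning stage would shape `W` so that `support_overlap` is small across classes while the conditioning constraint keeps the transform information-preserving.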
To support and validate the theoretical analysis, a practical learning algorithm is presented. The advantages and potential of the proposed algorithm are evaluated through computer simulations. Favorable performance is shown in terms of execution time, quality of the representation (measured by the discrimination power), and recognition accuracy, in comparison with state-of-the-art methods of the same category.
Keywords: transform learning, sparse representation, discriminative prior, information preservation, discrimination power