Abstract: A complete and discriminative dictionary can achieve superior performance. However, it also consumes extra processing time and memory, especially for large datasets. Most existing compact dictionary learning methods require the dictionary size to be set manually, so an appropriate size is usually found by exhaustive search. How to automatically learn a compact dictionary with high fidelity remains an open challenge. We propose an automatic compact dictionary learning (ACDL) method that produces a more compact and discriminative dictionary while maintaining state-of-the-art classification performance. The formulation of the dictionary learning algorithm incorporates two innovative components. First, an indicator function is introduced that automatically removes highly correlated dictionary atoms with weak discrimination capacity. Second, two additional constraints, namely the sum-to-one and non-negative constraints, are imposed on the sparse coefficients. On one hand, this provides the same functionality as \(L_2\)-normalization of the raw data, maintaining a stable sparsity threshold. On the other hand, it preserves the geometric structure of the raw data, which \(L_2\)-normalization would otherwise destroy. Extensive evaluations show that preserving the geometric structure of the raw data plays an important role in achieving high classification performance with the smallest dictionary size. Experimental results on four recognition problems demonstrate that the proposed ACDL achieves competitive classification performance with a drastically reduced dictionary (https://github.com/susanqq/ACDL.git).
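The two components described above can be illustrated concretely. The following is a minimal NumPy sketch, not the authors' implementation (see the linked repository for that): `prune_correlated_atoms` mimics the role of the indicator function by dropping one atom from each highly coherent pair, with a hypothetical per-atom `usage` score standing in for the paper's discrimination measure, and `project_simplex` enforces the non-negative, sum-to-one constraints on a coefficient vector. The threshold `corr_thresh` and the usage heuristic are assumptions made for illustration only.

```python
import numpy as np

def prune_correlated_atoms(D, usage, corr_thresh=0.95):
    """Drop dictionary atoms whose pairwise coherence exceeds corr_thresh.

    `usage` is a hypothetical per-atom score (e.g., L1 energy of each
    atom's coefficient row), a stand-in for the paper's discrimination
    measure; the weaker atom of each correlated pair is removed.
    """
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(Dn.T @ Dn)                              # coherence matrix
    keep = np.ones(D.shape[1], dtype=bool)
    for i in range(D.shape[1]):
        if not keep[i]:
            continue
        for j in range(i + 1, D.shape[1]):
            if keep[j] and G[i, j] > corr_thresh:
                # remove the less-used atom of the correlated pair
                drop = j if usage[j] <= usage[i] else i
                keep[drop] = False
                if drop == i:
                    break
    return D[:, keep], keep

def project_simplex(x):
    """Euclidean projection of x onto the probability simplex
    {x : x >= 0, sum(x) = 1}, i.e., the non-negative and sum-to-one
    constraints imposed on a sparse coefficient vector."""
    u = np.sort(x)[::-1]                      # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, x.size + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(x + theta, 0.0)

# Toy usage: prune a random dictionary, then project one coefficient vector.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 30))
D[:, 1] = D[:, 0] + 1e-3 * rng.standard_normal(20)  # near-duplicate atom
D_compact, kept = prune_correlated_atoms(D, usage=rng.random(30))
a = project_simplex(rng.standard_normal(30))
print(D_compact.shape, a.min() >= 0, np.isclose(a.sum(), 1.0))
```

The simplex projection uses the standard sort-and-threshold algorithm; in the sketch it serves only to show how the two coefficient constraints can be enforced jointly, independent of the particular sparse-coding solver used.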