Highlights
• First, we propose a kernel transformer to learn the complementary properties of different features for feature fusion. Based on the Mahalanobis distance, we prove that feature concatenation in kernel space can be achieved in the data space using each feature separately.
• Second, we propose kernel metric learning with both triplet and label constraints, which yields better performance than using triplet constraints alone. Following the theory of the extreme learning machine, label constraints are also embedded into our model.
• Third, we build a complete optimization objective function. Based on an alternating direction method of multipliers (ADMM) solver and the Karush-Kuhn-Tucker (KKT) theorem, the proposed optimization problem is solved with rigorous theoretical analysis.

Abstract
Feature fusion is an important technique for improving performance in computer vision; the central difficulty is learning the complementary properties of different features. We observe that feature fusion can benefit from kernel metric learning, and therefore propose a metric learning-based kernel transformer method for feature fusion. First, we propose a kernel transformer that converts data from the data space to a kernel space, so that feature fusion and metric learning can be performed in the transformed kernel space. Second, to realize supervised learning, both triplet and label constraints are embedded into our model. Third, to solve for the unknown kernel matrices, LogDet divergence is introduced into our model. Finally, a complete optimization objective function is formed. Based on an alternating direction method of multipliers (ADMM) solver and the Karush-Kuhn-Tucker (KKT) theorem, the proposed optimization problem is solved with rigorous theoretical analysis. Experimental results on image retrieval demonstrate the effectiveness of the proposed methods.
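One way to read the first highlight (a sketch of our interpretation, not the paper's exact derivation): if the per-feature maps are concatenated in kernel space and the Mahalanobis metric is assumed block-diagonal across features, the fused distance splits into per-feature terms, each computable in the data space from that feature's own kernel matrix.

```latex
% Sketch under a block-diagonal assumption on the metric M (our reading, not
% necessarily the paper's exact formulation).
\[
\phi(x) = \begin{bmatrix}\phi_1(x)\\ \phi_2(x)\end{bmatrix},\qquad
M = \begin{bmatrix} M_1 & 0\\ 0 & M_2\end{bmatrix},
\]
\[
d_M^2(x, y)
  = \big(\phi(x)-\phi(y)\big)^{\top} M \big(\phi(x)-\phi(y)\big)
  = \sum_{v=1}^{2}\big(\phi_v(x)-\phi_v(y)\big)^{\top} M_v \big(\phi_v(x)-\phi_v(y)\big).
\]
```

Each summand involves only one feature map, so it can be evaluated via the kernel trick using that feature's kernel matrix $K_v$ alone, which is the sense in which fusion in kernel space reduces to computations in the data space of each feature.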
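The following is a minimal, hypothetical sketch of the second and third ingredients (kernelized Mahalanobis metric learning with triplet constraints and a LogDet regularizer), not the paper's model: label constraints are omitted, and plain projected gradient descent is used in place of the paper's ADMM/KKT-based solver. All function and variable names here are illustrative.

```python
# Sketch: kernel-space metric learning with triplet constraints and a LogDet
# regularizer, optimized by projected gradient descent (a simple stand-in for
# the paper's ADMM solver). Not the authors' implementation.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def triplet_logdet_metric(K, triplets, margin=1.0, reg=0.1, lr=1e-3, iters=200):
    """Learn a PSD matrix A so that d(i, j) = (k_i - k_j)^T A (k_i - k_j),
    where k_i is the i-th column of the kernel matrix K."""
    n = K.shape[0]
    A = np.eye(n)
    for _ in range(iters):
        grad = np.zeros((n, n))
        for a, p, ng in triplets:
            dp = K[:, a] - K[:, p]      # anchor-positive difference in kernel space
            dn = K[:, a] - K[:, ng]     # anchor-negative difference
            violation = margin + dp @ A @ dp - dn @ A @ dn
            if violation > 0:           # hinge: only violated triplets contribute
                grad += np.outer(dp, dp) - np.outer(dn, dn)
        # Gradient of the LogDet divergence to the identity: tr(A) - log det(A) - n
        grad += reg * (np.eye(n) - np.linalg.inv(A))
        A -= lr * grad
        # Project back onto the PSD cone so A remains a valid metric
        w, V = np.linalg.eigh((A + A.T) / 2.0)
        A = (V * np.maximum(w, 1e-8)) @ V.T
    return A

# Toy usage: two 2-D Gaussian blobs, triplets built from class labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(3, 1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
triplets = [(a, p, ng) for a in range(20) for p in range(20) for ng in range(20)
            if a != p and y[a] == y[p] and y[a] != y[ng]][:200]
A = triplet_logdet_metric(rbf_kernel(X), triplets)
```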