Regularized Discriminant Embedding for Visual Descriptor Learning
Kye-Hyeon Kim, Rui Cai, Lei Zhang, Seungjin Choi
Jan 18, 2013 (modified: Jan 18, 2013) · ICLR 2013 conference submission · readers: everyone
Abstract: Images can vary with changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn image representations that are robust to wide variation in such environmental conditions, using training pairs of matching and non-matching local image patches collected under various conditions. We present a regularized discriminant analysis that emphasizes two challenging categories among the given training pairs: (1) matching pairs that are far apart and (2) non-matching pairs that are close in the original feature space (e.g., SIFT feature space). Compared to existing work on metric learning and discriminant analysis, our method can better distinguish relevant images from irrelevant but look-alike images.
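The idea of emphasizing hard pairs in a discriminant objective can be sketched as follows. This is an illustrative NumPy sketch under assumptions not taken from the paper: synthetic 8-D descriptors stand in for SIFT features, the hard-pair weighting scheme (weight matching pairs by their distance, non-matching pairs by inverse distance) is a simple stand-in for the paper's emphasis scheme, and the regularizer is a plain scaled-identity shrinkage of the within-class scatter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: pairs of 8-D descriptors with match labels (stand-ins for SIFT pairs).
n_pairs, d, k = 200, 8, 4
X1 = rng.normal(size=(n_pairs, d))
X2 = X1 + rng.normal(scale=0.3, size=(n_pairs, d))   # matching: small perturbation
X1 = np.vstack([X1, rng.normal(size=(n_pairs, d))])  # non-matching: independent
X2 = np.vstack([X2, rng.normal(size=(n_pairs, d))])
labels = np.concatenate([np.ones(n_pairs, bool), np.zeros(n_pairs, bool)])

diffs = X1 - X2
dists = np.linalg.norm(diffs, axis=1)

# Emphasize hard pairs: matching-but-far pairs get weight ~ distance,
# non-matching-but-close pairs get weight ~ 1/distance (illustrative choice).
w = np.where(labels, dists, 1.0 / (dists + 1e-8))
w = w / w.mean()

# Weighted within-class (matching) and between-class (non-matching) scatter
# matrices built from pair differences.
Sw = (w[labels, None, None] * np.einsum('ni,nj->nij', diffs[labels], diffs[labels])).sum(0)
Sb = (w[~labels, None, None] * np.einsum('ni,nj->nij', diffs[~labels], diffs[~labels])).sum(0)

# Regularize Sw (shrinkage toward a scaled identity), then solve the
# generalized eigenproblem Sb v = lambda Sw v via whitening with Sw^{-1/2}.
lam = 0.1
Sw_reg = Sw + lam * (np.trace(Sw) / d) * np.eye(d)
ew, Ew = np.linalg.eigh(Sw_reg)
Sw_inv_sqrt = Ew @ np.diag(1.0 / np.sqrt(ew)) @ Ew.T
evals, evecs = np.linalg.eigh(Sw_inv_sqrt @ Sb @ Sw_inv_sqrt)
W = Sw_inv_sqrt @ evecs[:, ::-1][:, :k]  # top-k discriminant directions

# In the learned embedding, matching pairs should lie closer on average
# than non-matching pairs.
Z1, Z2 = X1 @ W, X2 @ W
dm = np.linalg.norm((Z1 - Z2)[labels], axis=1).mean()
dn = np.linalg.norm((Z1 - Z2)[~labels], axis=1).mean()
```

The whitening step turns the generalized eigenproblem into a symmetric one, which keeps the solution numerically stable even when the regularized within-class scatter is poorly conditioned.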