Abstract: The Graph regularized Auto-Encoder (GAE) is an AE variant that incorporates manifold learning. By using a graph Laplacian as a regularizer on the encodings, GAE can preserve the locality of the original data in low dimensions, achieving better clustering and visualization results than the original AE and its other variants. However, the graph Laplacian term in GAE uses a 2-norm loss to penalize the locality of the encodings, which can lead to instability. To address this, we propose a robust graph regularized auto-encoder (RGAE). Instead of a graph Laplacian, RGAE adopts a cross-entropy penalty that pursues consistency between the locality of the original data and that of its low-dimensional encodings. In particular, RGAE has fewer parameters, since it takes no local weights into account. Experiments on benchmark datasets show that RGAE generally outperforms GAE at clustering across varied encoding dimensions, and achieves better visualization results.
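The abstract contrasts two locality penalties: the graph-Laplacian (2-norm) term used by GAE and a cross-entropy consistency term. A minimal NumPy sketch of both ideas follows; the cross-entropy construction via row-wise softmax neighbor distributions is our illustrative assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def laplacian_penalty(H, W):
    """GAE-style 2-norm locality penalty:
    0.5 * sum_ij W_ij ||h_i - h_j||^2 = tr(H^T L H),
    where L = D - W is the graph Laplacian of the affinity matrix W."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    return np.trace(H.T @ L @ H)

def cross_entropy_penalty(X, H, eps=1e-12):
    """Illustrative cross-entropy locality penalty: compare the neighbor
    distribution P of the original data X with the neighbor distribution Q
    of the encodings H. Here P and Q are row-wise softmaxes over negative
    squared distances (a hypothetical choice for this sketch)."""
    def neighbor_dist(Z):
        d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
        np.fill_diagonal(d2, np.inf)  # exclude self-pairs from the softmax
        p = np.exp(-d2)
        return p / (p.sum(axis=1, keepdims=True) + eps)
    P, Q = neighbor_dist(X), neighbor_dist(H)
    return -np.sum(P * np.log(Q + eps))
```

Note that `cross_entropy_penalty` needs no edge weights: the neighbor distributions are computed directly from pairwise distances, which matches the abstract's claim that the cross-entropy approach takes no local weights into account.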