Locally-Linear Embedding.
#include <FgLLE.h>
Public Member Functions:
- LLE (int numNeighbours): Constructor.
- ~LLE (): Destructor.
- af::array Compute (af::array &M, int q): Computes the low-dimensional embedding.
- virtual af::array Compute (af::array &M, int q) = 0: Pure virtual declaration inherited from IEmbed.

Private Attributes:
- int _numNeighbours
Locally-Linear Embedding.
Locally-Linear Embedding (LLE) was presented at approximately the same time as Isomap. It has several advantages over Isomap, including faster optimization when implemented to take advantage of sparse matrix algorithms, and better results with many problems. LLE also begins by finding a set of the nearest neighbors of each point. It then computes a set of weights for each point that best describe the point as a linear combination of its neighbors. Finally, it uses an eigenvector-based optimization technique to find the low-dimensional embedding of points, such that each point is still described with the same linear combination of its neighbors. LLE tends to handle non-uniform sample densities poorly because there is no fixed unit to prevent the weights from drifting as various regions differ in sample densities. LLE has no internal model.
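The first of the three steps above, the nearest-neighbor search, can be sketched as follows. This is a brute-force illustration in Python/numpy, not the class's implementation (which operates on `af::array`); the function name `k_nearest` is hypothetical, and in practice a spatial index such as a KD-tree would be used.

```python
import numpy as np

def k_nearest(X, k):
    """Return the indices of the k nearest neighbours (Euclidean) of each row of X.

    X: (N, D) data matrix. Result: (N, k) index array; a point is never
    its own neighbour.
    """
    # Pairwise squared Euclidean distances, (N, N).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]  # k smallest distances per row

# Toy 1-D example: the outlier at 10 still picks the two closest points.
X = np.array([[0.0], [1.0], [2.0], [10.0]])
idx = k_nearest(X, 2)
```

For the example above, point 0 selects points 1 and 2, while the outlier point 3 selects points 2 and 1.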
LLE computes the barycentric coordinates of a point \(\mathbf{x}_i\) based on its neighbors \(\mathbf{x}_j\). The original point is reconstructed by a linear combination, given by the weight matrix \(\mathbf{W}_{ij}\), of its neighbors. The reconstruction error is given by the cost function \(E(\mathbf{W})\),
\[ E(\mathbf{W}) = \sum_i \left| \mathbf{x}_i - \sum_j \mathbf{W}_{ij} \mathbf{x}_j \right|^2 .\]
The weight \(\mathbf{W}_{ij}\) is the amount the point \(\mathbf{x}_j\) contributes to the reconstruction of the point \(\mathbf{x}_i\). The cost function is minimized under two constraints: each point is reconstructed only from its neighbors, i.e. \(\mathbf{W}_{ij} = 0\) whenever \(\mathbf{x}_j\) is not a neighbor of \(\mathbf{x}_i\), and the rows of the weight matrix sum to one,
\[\sum_j \mathbf{W}_{ij} = 1.\]
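The constrained weight solve for a single point can be sketched as follows. This is a minimal numpy illustration of the standard closed-form recipe (solve the local Gram system \(C \mathbf{w} = \mathbf{1}\) and normalise), not the class's own code; `reconstruction_weights` and the regularisation constant are assumptions for the sketch.

```python
import numpy as np

def reconstruction_weights(x, neighbours, reg=1e-3):
    """Weights that best reconstruct x from its neighbours, summing to one.

    x: (D,) point; neighbours: (k, D) its k nearest neighbours.
    """
    Z = neighbours - x                     # shift x to the origin
    C = Z @ Z.T                            # local (k, k) Gram matrix
    # Regularise: C is singular when k > D, so add a small ridge term.
    C = C + reg * np.trace(C) * np.eye(len(neighbours))
    w = np.linalg.solve(C, np.ones(len(neighbours)))
    return w / w.sum()                     # enforce sum_j W_ij = 1

# Toy example: a point midway between two neighbours gets equal weights,
# and the weighted combination reproduces the point exactly.
x = np.array([0.0, 0.0])
nbrs = np.array([[1.0, 0.0], [-1.0, 0.0]])
w = reconstruction_weights(x, nbrs)
```

Here `w` comes out as `[0.5, 0.5]`, and `w @ nbrs` recovers `x`, i.e. the reconstruction error \(E(\mathbf{W})\) for this point is zero.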
The original data points live in a \(D\)-dimensional space, and the goal of the algorithm is to reduce the dimensionality to \(d\) such that \(D \gg d\). The same weights \(\mathbf{W}_{ij}\) that reconstruct the \(i\)th data point in the \(D\)-dimensional space are used to reconstruct the same point in the lower \(d\)-dimensional space. A neighborhood-preserving map is created based on this idea. Each point \(\mathbf{x}_i\) in the \(D\)-dimensional space is mapped onto a point \(\mathbf{Y}_i\) in the \(d\)-dimensional space by minimizing the cost function,
\[C(\mathbf{Y}) = \sum_i \left| \mathbf{Y}_i - \sum_j \mathbf{W}_{ij} \mathbf{Y}_j \right|^2.\]
In this cost function, unlike the previous one, the weights \(\mathbf{W}_{ij}\) are kept fixed and the minimization is done over the points \(\mathbf{Y}_i\) to optimize the coordinates. This minimization problem can be solved as a sparse \(N \times N\) eigenvalue problem (\(N\) being the number of data points), whose bottom \(d\) nonzero eigenvectors provide an orthogonal set of coordinates. Generally the data points are reconstructed from their \(K\) nearest neighbors, as measured by Euclidean distance. With such an implementation the algorithm has only one free parameter, \(K\), which can be chosen by cross-validation.
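The embedding step just described can be sketched as follows. With \(\mathbf{W}\) fixed, minimising \(C(\mathbf{Y})\) reduces to an eigenvalue problem for \(M = (I - W)^\top (I - W)\): the bottom eigenvector (eigenvalue \(\approx 0\)) is the constant vector and is discarded, and the next \(d\) eigenvectors give the embedding coordinates. This is a dense numpy illustration (a real implementation would exploit the sparsity of \(W\)); the function name `embed` is an assumption for the sketch.

```python
import numpy as np

def embed(W, d):
    """Bottom d nonzero eigenvectors of M = (I - W)^T (I - W)."""
    N = W.shape[0]
    A = np.eye(N) - W
    M = A.T @ A                             # symmetric positive semi-definite
    eigvals, eigvecs = np.linalg.eigh(M)    # eigenvalues in ascending order
    return eigvecs[:, 1:d + 1]              # skip the constant eigenvector

# Tiny example: 4 points on a ring, each reconstructed equally (weight 0.5)
# from its two neighbours; embed into d = 1 dimension.
W = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
Y = embed(W, 1)
```

Because the rows of \(W\) sum to one, \(M\) always annihilates the constant vector, which is why the very bottom eigenvector carries no coordinate information and is skipped.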
NeuralEngine::MachineLearning::LLE::LLE (int numNeighbours)

Constructor.

Hmetal T, 11.04.2017.

Parameters:
- numNeighbours: Number of neighbours.
NeuralEngine::MachineLearning::LLE::~LLE ()

Destructor.
virtual af::array NeuralEngine::MachineLearning::LLE::Compute (af::array &M, int q)

Computes the q-dimensional embedding of the data.

Parameters:
- M: [in,out] N by D data matrix.
- q: Latent dimension.

Implements NeuralEngine::MachineLearning::IEmbed.
int NeuralEngine::MachineLearning::LLE::_numNeighbours (private)