|
| SDGPLVM (const af::array &Y, int latentDimension, HiddenLayerDescription description, Scalar alpha=1.0, Scalar priorMean=0.0, Scalar priorVariance=1.0, PropagationMode probMode=PropagationMode::MomentMatching, LogLikType lType=LogLikType::Gaussian, XInit emethod=XInit::pca) |
| Constructor. More...
|
|
| SDGPLVM (const af::array &Y, int latentDimension, std::vector< HiddenLayerDescription > descriptions, Scalar alpha=1.0, Scalar priorMean=0.0, Scalar priorVariance=1.0, PropagationMode probMode=PropagationMode::MomentMatching, LogLikType lType=LogLikType::Gaussian, XInit emethod=XInit::pca) |
| Constructor. More...
|
|
| SDGPLVM () |
| Default Constructor. More...
|
|
virtual | ~SDGPLVM () |
| Destructor. More...
|
|
virtual Scalar | Function (const af::array &x, af::array &outGradient) override |
 | Cost function for the given parameter inputs. More...
|
|
| SparseDeepGPLVMBaseModel (const af::array &Y, int latentDimension, HiddenLayerDescription description, Scalar priorMean=0.0, Scalar priorVariance=1.0, LogLikType lType=LogLikType::Gaussian, XInit emethod=XInit::pca) |
| Constructor. More...
|
|
| SparseDeepGPLVMBaseModel (const af::array &Y, int latentDimension, std::vector< HiddenLayerDescription > descriptions, Scalar priorMean=0.0, Scalar priorVariance=1.0, LogLikType lType=LogLikType::Gaussian, XInit emethod=XInit::pca) |
| Constructor. More...
|
|
| SparseDeepGPLVMBaseModel () |
| Default Constructor. More...
|
|
virtual | ~SparseDeepGPLVMBaseModel () |
| Destructor. More...
|
|
virtual bool | Init () override |
| Initializes the model. More...
|
|
virtual void | PredictF (const af::array &testInputs, af::array &mf, af::array &vf) override |
 | Predict noise-free function values \(\mathbf{F}_*\). More...
|
|
virtual void | SampleY (const af::array inputs, int numSamples, af::array &outFunctions) override |
| Generate function samples from posterior. More...
|
|
virtual int | GetNumParameters () override |
| Gets number of parameters. More...
|
|
virtual int | GetNumGPLayerParameters () |
 | Gets the number of GP layer parameters. More...
|
|
virtual void | SetParameters (const af::array ¶m) override |
| Sets the parameters for each optimization iteration. More...
|
|
virtual af::array | GetParameters () override |
| Gets the parameters for each optimization iteration. More...
|
|
virtual void | UpdateParameters () override |
| Updates the parameters. More...
|
|
virtual int | GetNumLayers () |
| Gets number of GP layers. More...
|
|
virtual std::vector< SparseGPBaseLayer< Scalar > * > | GetGPLayers () |
| Gets vector of GP layers. More...
|
|
virtual void | FixKernelParameters (bool isfixed) |
| Sets fixation for hyperparameters. More...
|
|
| GPLVMBaseModel (const af::array &Y, int latentDimension, Scalar priorMean=0.0, Scalar priorVariance=1.0, LogLikType lType=LogLikType::Gaussian, XInit emethod=XInit::pca) |
| Constructor. More...
|
|
| GPLVMBaseModel () |
| Default constructor. More...
|
|
virtual | ~GPLVMBaseModel () |
| Destructor. More...
|
|
virtual void | Optimise (OptimizerType method=L_BFGS, Scalar tol=0.0, bool reinit_hypers=true, int maxiter=1000, int mb_size=0, LineSearchType lsType=MoreThuente, bool disp=true, int *cycle=nullptr) override |
| Optimizes the model parameters for best fit. More...
|
|
virtual bool | Init (af::array &mx) |
| Initializes the model. More...
|
|
virtual bool | Init () override |
| Initializes the model. More...
|
|
virtual void | PosteriorLatents (af::array &mx, af::array &vx) |
 | Get posterior distribution of latent variables \(\mathbf{X}\). More...
|
|
virtual int | GetNumParameters () override |
| Gets number of parameters. More...
|
|
virtual void | SetParameters (const af::array ¶m) override |
| Sets the parameters for each optimization iteration. More...
|
|
virtual af::array | GetParameters () override |
| Gets the parameters for each optimization iteration. More...
|
|
virtual void | UpdateParameters () override |
| Updates the parameters. More...
|
|
virtual void | FixKernelParameters (bool isfixed) |
| Sets fixation for hyperparameters. More...
|
|
virtual void | FixInducing (bool isfixed) |
| Set fixation for inducing inputs. More...
|
|
void | FixLatents (bool isFixed) |
 | Sets fixation for the latent variables. More...
|
af::array | GetMeanGradient () |
| Gets prior mean gradient. More...
|
|
af::array | GetVarGradient () |
| Gets prior variance gradient. More...
|
|
void | SetPrior (const af::array mean, const af::array var) |
| Sets the prior. More...
|
|
void | SetPriorCavity (const af::array meanCav, const af::array varCav) |
| Sets the cavity prior. More...
|
|
void | SetLatentGradient (const af::array &dmParent, const af::array &dvParent) |
| Sets latent gradient. More...
|
|
void | SetLatentGradientCavity (const af::array &dmParent, const af::array &dvParent) |
| Sets the latent cavity gradient. More...
|
|
int | GetLatentDimension () |
| Gets latent dimension. More...
|
|
void | SetBackConstraint (IBackconstraint< Scalar > *constraint) |
| Sets a back-constraint. More...
|
|
IBackconstraint< Scalar > * | GetBackConstraint () |
| Gets the back-constraint. More...
|
|
void | SetStyles (std::map< std::string, Style< Scalar > > *styles) |
 | Sets the styles. More...
|
|
void | AddStyle (Style< Scalar > style) |
| Adds a style. More...
|
|
std::map< std::string, Style< Scalar > > * | GetStyles () |
| Gets the styles. More...
|
|
| GPBaseModel (const af::array &Y, LogLikType lType=LogLikType::Gaussian, ModelType mtype=ModelType::GPR) |
| Constructor. More...
|
|
| GPBaseModel () |
| Default Constructor. More...
|
|
virtual | ~GPBaseModel () |
| Destructor. More...
|
|
virtual void | Optimise (OptimizerType method=L_BFGS, Scalar tol=0.0, bool reinit_hypers=true, int maxiter=1000, int mb_size=0, LineSearchType lsType=MoreThuente, bool disp=true, int *cycle=nullptr) |
| Optimizes the model parameters for best fit. More...
|
|
virtual bool | Init () |
| Initializes the model. More...
|
|
virtual void | PredictF (const af::array &testInputs, af::array &mf, af::array &vf) |
 | Predict noise-free function values \(\mathbf{F}_*\). More...
|
|
virtual void | PredictY (const af::array &testInputs, af::array &my, af::array &vy) |
| Prediction of test outputs \(\mathbf{Y}_*\). More...
|
|
virtual void | SampleY (const af::array inputs, int numSamples, af::array &outFunctions) |
| Generate function samples from posterior. More...
|
|
virtual void | AddData (const af::array Ytrain) |
| Adds training data to the model. More...
|
|
af::array | GetTrainingData () |
| Gets the training data set Y. More...
|
|
void | SetTrainingData (af::array &data) |
| Sets training data Y. More...
|
|
virtual int | GetNumParameters () |
| Gets number of parameters. More...
|
|
virtual void | SetParameters (const af::array ¶m) |
| Sets the parameters for each optimization iteration. More...
|
|
virtual af::array | GetParameters () |
| Gets the parameters for each optimization iteration. More...
|
|
virtual void | UpdateParameters () |
| Updates the parameters. More...
|
|
virtual void | FixLikelihoodParameters (bool isfixed) |
| Sets the likelihood parameters to be fixed or not for optimization. More...
|
|
void | SetSegments (af::array segments) |
 | Sets the start index array for the sequences. More...
|
|
af::array | GetSegments () |
| Gets the start index array for the sequences. More...
|
|
virtual Scalar | Function (const af::array &x, af::array &outGradient) |
 | Cost function for the given x inputs. More...
|
|
virtual int | GetNumParameters ()=0 |
| Gets number of parameters to be optimized. More...
|
|
virtual void | SetParameters (const af::array ¶m)=0 |
| Sets the parameters for each optimization iteration. More...
|
|
virtual af::array | GetParameters ()=0 |
| Gets the parameters for each optimization iteration. More...
|
|
virtual void | UpdateParameters ()=0 |
| Updates the parameters. More...
|
|
int | GetDataLenght () |
 | Gets the data length. More...
|
|
int | GetDataDimensionality () |
| Gets data dimensionality. More...
|
|
ModelType | GetModelType () |
| Gets model type. More...
|
|
virtual void | SetBatchSize (int size) |
| Sets batch size. More...
|
|
int | GetBatchSize () |
| Gets batch size. More...
|
|
void | SetIndexes (af::array &indexes) |
| Sets the batch indexes. More...
|
|
| GPNode () |
| Default constructor. More...
|
|
virtual | ~GPNode () |
| Destructor. More...
|
|
int | GetNumChildren () const |
| Gets the number of children of this item. More...
|
|
int | AttachChild (std::shared_ptr< GPNode< Scalar > > const &child) |
| Attaches a child. More...
|
|
int | DetachChild (std::shared_ptr< GPNode< Scalar > > const &child) |
| Detaches a child. More...
|
|
std::shared_ptr< GPNode< Scalar > > | DetachChildAt (int i) |
| Detaches a child at index. More...
|
|
void | DetachAllChildren () |
| Detach all children from this node. More...
|
|
std::shared_ptr< GPNode< Scalar > > | SetChild (int i, std::shared_ptr< GPNode< Scalar > > const &child) |
| Sets a child. More...
|
|
std::shared_ptr< GPNode< Scalar > > | GetChild (int i) |
| Gets a child at index. More...
|
|
GPNode< Scalar > * | GetParent () |
| Access to the parent object, which is null for the root of the hierarchy. More...
|
|
void | SetParent (GPNode< Scalar > *parent) |
|
Access to the parent object. Node calls this during attach/detach of children. More...
|
|
template<typename Scalar>
class NeuralEngine::MachineLearning::GPModels::AEP::SDGPLVM< Scalar >
Sparse deep GPLVM via Approximated Expectation Propagation (AEP).
Instead of taking one full Gaussian factor out to form the cavity, we take out a fraction defined by the parameter \(\alpha\); this can also be seen as an interpolation parameter between VFE and Power-EP with the FITC approximation. This enables deep structures for the GPLVM.
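The role of \(\alpha\) can be sketched in power-EP notation (the factor symbols \(t_n\) and \(q\) are assumed here, following the standard power-EP formulation, not taken from this header): the cavity removes only an \(\alpha\)-fraction of the n-th approximate likelihood factor,

```latex
q^{\setminus \alpha n}(\mathbf{f}) \;\propto\; \frac{q(\mathbf{f})}{t_n(\mathbf{f})^{\alpha}},
\qquad
\alpha \to 0 \;\Rightarrow\; \text{VFE},
\qquad
\alpha = 1 \;\Rightarrow\; \text{EP (FITC)}.
```

So the constructor default \(\alpha = 1.0\) corresponds to the EP/FITC end of this spectrum, while small \(\alpha\) approaches the variational free-energy solution.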
GPLVMs are the nonlinear dual of probabilistic PCA, where a low-dimensional latent variable \(\mathbf{X}=[\mathbf{x}_1,...,\mathbf{x}_N]^T\) is mapped onto a high-dimensional data variable \(\mathbf{Y}=[\mathbf{y}_1,...,\mathbf{y}_N]^T\) via a prior mapping function \(f(\mathbf{x})\). The difference to standard GPs is the uncertainty over \(\mathbf{X}\), which is initialized via PCA and optimized during learning.
Sparse approximations are used for larger data sets to reduce memory consumption and computational complexity. A subset of inducing points (pseudo-inputs) is introduced to approximate the full set; the kernel matrix inversion then depends only on those points, reducing the computational complexity from \(O(N^3)\) to \(O(k^2N)\), where \(k\) is the number of inducing points and \(N\) the length of the data set.
References:
24.11.2019.
Definition at line 66 of file FgAEPSparseDGPLVM.h.