NeuralEngine
A Game Engine with embedded Machine Learning algorithms based on Gaussian Processes.
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType > Class Template Reference

Adam optimizer. More...

#include <FgAdamSolver.h>

Inheritance diagram for NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >:
Collaboration diagram for NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >:

Public Member Functions

 AdamSolver (int numberOfVariables)
 Creates a new instance of the Adam optimization algorithm. More...
 
 AdamSolver (int numberOfVariables, std::function< Scalar(const af::array &, af::array &)> function)
 Creates a new instance of the Adam optimization algorithm. More...
 
 AdamSolver (NonlinearObjectiveFunction< Scalar > *function)
 Creates a new instance of the Adam optimization algorithm. More...
 
 ~AdamSolver ()
 Destructor. More...
 
void SetBeta1 (Scalar beta1)
 Sets decay rate for the first moment estimates. More...
 
void SetBeta2 (Scalar beta2)
 Sets decay rate for the second-moment estimates. More...
 
void SetAlpha (Scalar alpha)
 Sets the learning rate. More...
 
void SetEpsilon (Scalar epsilon)
 Sets an epsilon to avoid division by zero. More...
 
void SetDecay (Scalar decay)
 Sets initial decay rate. More...
 
Scalar GetBeta1 ()
 Gets decay rate for the first moment estimates. More...
 
Scalar GetBeta2 ()
 Gets decay rate for the second-moment estimates. More...
 
Scalar GetAlpha ()
 Gets the learning rate. More...
 
Scalar GetEpsilon ()
 Gets the epsilon. More...
 
Scalar GetDecay ()
 Gets the initial decay. More...
 
- Public Member Functions inherited from NeuralEngine::MachineLearning::BaseGradientOptimizationMethod< Scalar, MoreThuente >
Scalar GetTolerance ()
 Gets the relative difference threshold to be used as stopping criteria between two iterations. Default is 0 (iterate until convergence). More...
 
void SetTolerance (Scalar tolerance)
 Sets the relative difference threshold to be used as stopping criteria between two iterations. Default is 0 (iterate until convergence). More...
 
int GetMaxIterations ()
 Gets the maximum number of iterations to be performed during optimization. Default is 0 (iterate until convergence). More...
 
void SetMaxIterations (int iter)
 Sets the maximum number of iterations to be performed during optimization. Default is 0 (iterate until convergence). More...
 
int GetIterations ()
 Gets the number of iterations performed in the last call to IOptimizationMethod.Minimize(). More...
 
- Public Member Functions inherited from NeuralEngine::MachineLearning::BaseOptimizationMethod< Scalar >
virtual int GetNumberOfVariables ()
 Gets the number of variables (free parameters) in the optimization problem. More...
 
virtual af::array GetSolution ()
 Gets the current solution found, the values of the parameters which optimize the function. More...
 
virtual void SetSolution (af::array &x)
 Sets the current solution found, the values of the parameters which optimize the function. More...
 
virtual Scalar GetValue ()
 Gets the output of the function at the current Solution. More...
 
virtual bool Maximize (af::array &values, int *cycle=nullptr)
 Finds the maximum value of a function. The solution vector will be made available at the Solution property. More...
 
virtual bool Minimize (af::array &values, int *cycle=nullptr)
 Finds the minimum value of a function. The solution vector will be made available at the Solution property. More...
 
virtual bool Maximize (int *cycle=nullptr)
 Finds the maximum value of a function. The solution vector will be made available at the Solution property. More...
 
virtual bool Minimize (int *cycle=nullptr)
 Finds the minimum value of a function. The solution vector will be made available at the Solution property. More...
 
void Display (bool display)
 Sets whether to display optimization information. More...
 
virtual int GetNumberOfVariables ()=0
 Gets the number of variables (free parameters) in the optimization problem. More...
 
virtual af::array GetSolution ()=0
 Gets the current solution found, the values of the parameters which optimize the function. More...
 
virtual void SetSolution (af::array &x)=0
 Sets the current solution found, the values of the parameters which optimize the function. More...
 
virtual Scalar GetValue ()=0
 Gets the output of the function at the current Solution. More...
 
virtual bool Minimize (int *cycle=nullptr)=0
 Finds the minimum value of a function. The solution vector will be made available at the Solution property. More...
 
virtual bool Maximize (int *cycle=nullptr)=0
 Finds the maximum value of a function. The solution vector will be made available at the Solution property. More...
 

Protected Member Functions

virtual bool Optimize (int *cycle=nullptr) override
 Implements the actual optimization algorithm. This method should try to minimize the objective function. More...
 
- Protected Member Functions inherited from NeuralEngine::MachineLearning::BaseGradientOptimizationMethod< Scalar, MoreThuente >
 BaseGradientOptimizationMethod (int numberOfVariables)
 Initializes a new instance of the BaseGradientOptimizationMethod class. More...
 
 BaseGradientOptimizationMethod (int numberOfVariables, std::function< Scalar(const af::array &, af::array &)> function)
 Initializes a new instance of the BaseGradientOptimizationMethod class. More...
 
 BaseGradientOptimizationMethod (NonlinearObjectiveFunction< Scalar > *function)
 Initializes a new instance of the BaseGradientOptimizationMethod class. More...
 
void InitLinesearch ()
 Initializes the line search. More...
 
- Protected Member Functions inherited from NeuralEngine::MachineLearning::BaseOptimizationMethod< Scalar >
void SetValue (Scalar v)
 Sets the output of the function at the current Solution. More...
 
void SetNumberOfVariables (int n)
 Sets the number of variables (free parameters) in the optimization problem. More...
 
 BaseOptimizationMethod (int numberOfVariables)
 Initializes a new instance of the BaseOptimizationMethod class. More...
 
 BaseOptimizationMethod (int numberOfVariables, std::function< Scalar(const af::array &, af::array &)> function)
 Initializes a new instance of the BaseOptimizationMethod class. More...
 
 BaseOptimizationMethod (NonlinearObjectiveFunction< Scalar > *function)
 Initializes a new instance of the BaseOptimizationMethod class. More...
 
virtual bool Optimize (int *cycle=nullptr)=0
 Implements the actual optimization algorithm. This method should try to minimize the objective function. More...
 

Private Attributes

Scalar min_step
 
Scalar max_step
 
Scalar sAlpha
 
Scalar sBeta1
 
Scalar sBeta2
 
Scalar sEpsilon
 
Scalar sDecay
 
Scalar delta
 

Additional Inherited Members

- Protected Attributes inherited from NeuralEngine::MachineLearning::BaseGradientOptimizationMethod< Scalar, MoreThuente >
int maxIterations
 
Scalar _tolerance
 
int iterations
 
ILineSearch< Scalar > * linesearch
 
- Protected Attributes inherited from NeuralEngine::MachineLearning::BaseOptimizationMethod< Scalar >
NonlinearObjectiveFunction< Scalar > * _function
 
af::array _x
 
bool _display
 
af::dtype m_dtype
 

Detailed Description

template<typename Scalar, LineSearchType LSType = MoreThuente>
class NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >

Adam optimizer.


Adam is an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure to update network weights iteratively based on training data. Adam differs from classical stochastic gradient descent: stochastic gradient descent maintains a single learning rate (termed alpha) for all weight updates, and that learning rate does not change during training. Adam instead maintains a learning rate for each network weight (parameter) and adapts it separately as learning unfolds.

The authors describe Adam as combining the advantages of two other extensions of stochastic gradient descent. Specifically:

  • Adaptive Gradient Algorithm (AdaGrad), which maintains a per-parameter learning rate that improves performance on problems with sparse gradients (e.g. natural language and computer vision problems).
  • Root Mean Square Propagation (RMSProp), which also maintains per-parameter learning rates that are adapted based on the average of recent magnitudes of the gradients for the weight (e.g. how quickly it is changing). This means the algorithm does well on online and non-stationary problems (e.g. noisy ones).

Adam realizes the benefits of both AdaGrad and RMSProp. Instead of adapting the parameter learning rates based on the average first moment (the mean) as in RMSProp, Adam also makes use of the average of the second moments of the gradients (the uncentered variance). Specifically, the algorithm calculates an exponential moving average of the gradient and of the squared gradient, and the parameters beta1 and beta2 control the decay rates of these moving averages.
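Written out (standard Adam, as in Kingma and Ba, 2015), with gradient g_t at step t and parameter vector theta:

\[
\begin{aligned}
m_t &= \beta_1\, m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2\, v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t) \\
\theta_t &= \theta_{t-1} - \alpha\, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)
\end{aligned}
\]

Here alpha, beta1, beta2 and epsilon correspond to SetAlpha(), SetBeta1(), SetBeta2() and SetEpsilon() below; the bias-corrected estimates counteract the zero initialization of the two moving averages.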

References:

Kingma, D. P. and Ba, J. (2015). Adam: A Method for Stochastic Optimization. International Conference on Learning Representations (ICLR). arXiv:1412.6980.

HmetalT, 02.05.2019.

Definition at line 74 of file FgAdamSolver.h.

Constructor & Destructor Documentation

◆ AdamSolver() [1/3]

template<typename Scalar , LineSearchType LSType = MoreThuente>
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::AdamSolver ( int  numberOfVariables)

Creates a new instance of the Adam optimization algorithm.

Admin, 3/27/2017.

Parameters
numberOfVariables   The number of free parameters in the optimization problem.

◆ AdamSolver() [2/3]

template<typename Scalar , LineSearchType LSType = MoreThuente>
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::AdamSolver ( int  numberOfVariables,
std::function< Scalar(const af::array &, af::array &)>  function 
)

Creates a new instance of the Adam optimization algorithm.

Admin, 3/27/2017.

Parameters
numberOfVariables   The number of free parameters in the function to be optimized.
function   [in,out] The function to be optimized; it returns the objective value and writes the gradient of the function into its second argument.
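A minimal usage sketch for this overload (assumptions: ArrayFire is available, and the quadratic objective below is purely illustrative):

#include <arrayfire.h>
#include "FgAdamSolver.h"

using namespace NeuralEngine::MachineLearning;

int main()
{
    const int n = 10;

    // Illustrative objective f(x) = 0.5 * ||x||^2; the gradient is written into g.
    auto objective = [](const af::array& x, af::array& g) -> float
    {
        g = x;                                // df/dx = x
        return 0.5f * af::sum<float>(x * x);  // scalar function value
    };

    AdamSolver<float> solver(n, objective);   // LSType defaults to MoreThuente
    solver.SetMaxIterations(1000);

    af::array x0 = af::randn(n);              // random starting point
    solver.SetSolution(x0);

    if (solver.Minimize())
        af::print("minimizer", solver.GetSolution());

    return 0;
}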

◆ AdamSolver() [3/3]

template<typename Scalar , LineSearchType LSType = MoreThuente>
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::AdamSolver ( NonlinearObjectiveFunction< Scalar > *  function)

Creates a new instance of the Adam optimization algorithm.

Admin, 3/27/2017.

Parameters
function   The objective function and its gradients whose optimum should be found.

◆ ~AdamSolver()

template<typename Scalar , LineSearchType LSType = MoreThuente>
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::~AdamSolver ( )

Destructor.

15.08.2019.

Member Function Documentation

◆ SetBeta1()

template<typename Scalar , LineSearchType LSType = MoreThuente>
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetBeta1 ( Scalar  beta1)

Sets decay rate for the first moment estimates.

15.08.2019.

Parameters
beta1   The decay rate for the first moment estimates.

◆ SetBeta2()

template<typename Scalar , LineSearchType LSType = MoreThuente>
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetBeta2 ( Scalar  beta2)

Sets decay rate for the second-moment estimates.

15.08.2019.

Parameters
beta2   The decay rate for the second-moment estimates.

◆ SetAlpha()

template<typename Scalar , LineSearchType LSType = MoreThuente>
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetAlpha ( Scalar  alpha)

Sets the learning rate.

15.08.2019.

Parameters
alpha   The learning rate (step size).

◆ SetEpsilon()

template<typename Scalar , LineSearchType LSType = MoreThuente>
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetEpsilon ( Scalar  epsilon)

Sets an epsilon to avoid division by zero.

15.08.2019.

Parameters
epsilon   A small constant added to the denominator to avoid division by zero.

◆ SetDecay()

template<typename Scalar , LineSearchType LSType = MoreThuente>
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetDecay ( Scalar  decay)

Sets initial decay rate.

15.08.2019.

Parameters
decay   The initial decay rate for the learning rate.
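Taken together, a typical configuration sketch using the setters above (the values shown are the defaults recommended in the Adam paper, not necessarily this class's defaults):

AdamSolver<float> solver(numberOfVariables);

solver.SetAlpha(0.001f);   // learning rate (step size)
solver.SetBeta1(0.9f);     // decay rate of the first-moment estimate
solver.SetBeta2(0.999f);   // decay rate of the second-moment estimate
solver.SetEpsilon(1e-8f);  // guards the division in the update step
solver.SetDecay(0.0f);     // initial learning-rate decay (0 keeps alpha constant)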

◆ GetBeta1()

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetBeta1 ( )

Gets decay rate for the first moment estimates.

15.08.2019.

Returns
The beta 1.

◆ GetBeta2()

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetBeta2 ( )

Gets decay rate for the second-moment estimates.

15.08.2019.

Returns
The beta 2.

◆ GetAlpha()

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetAlpha ( )

Gets the learning rate.

15.08.2019.

Returns
The alpha.

◆ GetEpsilon()

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetEpsilon ( )

Gets the epsilon.

15.08.2019.

Returns
The epsilon.

◆ GetDecay()

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetDecay ( )

Gets the initial decay.

15.08.2019.

Returns
The decay.

◆ Optimize()

template<typename Scalar , LineSearchType LSType = MoreThuente>
virtual bool NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::Optimize ( int *  cycle = nullptr )
override protected virtual

Implements the actual optimization algorithm. This method should try to minimize the objective function.

Hmetal T, 11.04.2017.

Returns
true if it succeeds, false if it fails.

Implements NeuralEngine::MachineLearning::BaseOptimizationMethod< Scalar >.
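For orientation, a standalone sketch of the loop such a method implements (an approximation only; the actual implementation in FgAdamSolver.h, including the min_step/max_step bounds and the line-search template parameter, may differ):

#include <arrayfire.h>
#include <cmath>
#include <functional>

// Illustrative Adam loop; not the class's actual code.
template <typename Scalar>
void AdamSketch(std::function<Scalar(const af::array&, af::array&)> f,
                af::array& x, int maxIterations, Scalar alpha,
                Scalar beta1, Scalar beta2, Scalar epsilon, Scalar decay)
{
    af::array m = af::constant(0, x.dims(), x.type());  // first-moment estimate
    af::array v = af::constant(0, x.dims(), x.type());  // second-moment estimate

    for (int t = 1; t <= maxIterations; ++t)
    {
        af::array g;
        f(x, g);                                  // evaluates f and fills gradient g

        Scalar alphaT = alpha / (1 + decay * t);  // optional step-size decay
        m = beta1 * m + (1 - beta1) * g;          // update biased first moment
        v = beta2 * v + (1 - beta2) * g * g;      // update biased second moment

        af::array mHat = m / (1 - std::pow(beta1, Scalar(t)));  // bias correction
        af::array vHat = v / (1 - std::pow(beta2, Scalar(t)));
        x -= alphaT * mHat / (af::sqrt(vHat) + epsilon);        // parameter step
    }
}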

Member Data Documentation

◆ min_step

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::min_step
private

Definition at line 224 of file FgAdamSolver.h.

◆ max_step

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::max_step
private

Definition at line 225 of file FgAdamSolver.h.

◆ sAlpha

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sAlpha
private

Definition at line 227 of file FgAdamSolver.h.

◆ sBeta1

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sBeta1
private

Definition at line 228 of file FgAdamSolver.h.

◆ sBeta2

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sBeta2
private

Definition at line 229 of file FgAdamSolver.h.

◆ sEpsilon

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sEpsilon
private

Definition at line 230 of file FgAdamSolver.h.

◆ sDecay

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sDecay
private

Definition at line 231 of file FgAdamSolver.h.

◆ delta

template<typename Scalar , LineSearchType LSType = MoreThuente>
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::delta
private

Definition at line 232 of file FgAdamSolver.h.


The documentation for this class was generated from the following file:

FgAdamSolver.h