Adam optimizer. More...
#include <FgAdamSolver.h>
Public Member Functions | |
AdamSolver (int numberOfVariables) | |
Creates a new instance of the Adam optimization algorithm. More... | |
AdamSolver (int numberOfVariables, std::function< Scalar(const af::array &, af::array &)> function) | |
Creates a new instance of the Adam optimization algorithm. More... | |
AdamSolver (NonlinearObjectiveFunction< Scalar > *function) | |
Creates a new instance of the Adam optimization algorithm. More... | |
~AdamSolver () | |
Destructor. More... | |
void | SetBeta1 (Scalar beta1) |
Sets decay rate for the first moment estimates. More... | |
void | SetBeta2 (Scalar beta2) |
Sets decay rate for the second-moment estimates. More... | |
void | SetAlpha (Scalar alpha) |
Sets the learning rate. More... | |
void | SetEpsilon (Scalar epsilon) |
Sets an epsilon to avoid division by zero. More... | |
void | SetDecay (Scalar decay) |
Sets initial decay rate. More... | |
Scalar | GetBeta1 () |
Gets decay rate for the first moment estimates. More... | |
Scalar | GetBeta2 () |
Gets decay rate for the second-moment estimates. More... | |
Scalar | GetAlpha () |
Gets the learning rate. More... | |
Scalar | GetEpsilon () |
Gets the epsilon. More... | |
Scalar | GetDecay () |
Gets the initial decay. More... | |
Scalar | GetTolerance () |
Gets the relative difference threshold to be used as stopping criteria between two iterations. Default is 0 (iterate until convergence). More... | |
void | SetTolerance (Scalar tolerance) |
Sets the relative difference threshold to be used as stopping criteria between two iterations. Default is 0 (iterate until convergence). More... | |
int | GetMaxIterations () |
Gets the maximum number of iterations to be performed during optimization. Default is 0 (iterate until convergence). More... | |
void | SetMaxIterations (int iter) |
Sets the maximum number of iterations to be performed during optimization. Default is 0 (iterate until convergence). More... | |
int | GetIterations () |
Gets the number of iterations performed in the last call to IOptimizationMethod::Minimize(). More... | |
virtual int | GetNumberOfVariables () |
Gets the number of variables (free parameters) in the optimization problem. More... | |
virtual af::array | GetSolution () |
Gets the current solution found, the values of the parameters that optimize the function. More... | |
virtual void | SetSolution (af::array &x) |
Sets the current solution found, the values of the parameters that optimize the function. More... | |
virtual Scalar | GetValue () |
Gets the output of the function at the current Solution. More... | |
virtual bool | Maximize (af::array &values, int *cycle=nullptr) |
Finds the maximum value of a function. The solution vector will be made available at the Solution property. More... | |
virtual bool | Minimize (af::array &values, int *cycle=nullptr) |
Finds the minimum value of a function. The solution vector will be made available at the Solution property. More... | |
virtual bool | Maximize (int *cycle=nullptr) |
Finds the maximum value of a function. The solution vector will be made available at the Solution property. More... | |
virtual bool | Minimize (int *cycle=nullptr) |
Finds the minimum value of a function. The solution vector will be made available at the Solution property. More... | |
void | Display (bool display) |
Sets whether to display optimization information. More... | |
virtual int | GetNumberOfVariables ()=0 |
Gets the number of variables (free parameters) in the optimization problem. More... | |
virtual af::array | GetSolution ()=0 |
Gets the current solution found, the values of the parameters that optimize the function. More... | |
virtual void | SetSolution (af::array &x)=0 |
Sets the current solution found, the values of the parameters that optimize the function. More... | |
virtual Scalar | GetValue ()=0 |
Gets the output of the function at the current Solution. More... | |
virtual bool | Minimize (int *cycle=nullptr)=0 |
Finds the minimum value of a function. The solution vector will be made available at the Solution property. More... | |
virtual bool | Maximize (int *cycle=nullptr)=0 |
Finds the maximum value of a function. The solution vector will be made available at the Solution property. More... | |
Protected Member Functions | |
virtual bool | Optimize (int *cycle=nullptr) override |
Implements the actual optimization algorithm. This method should try to minimize the objective function. More... | |
BaseGradientOptimizationMethod (int numberOfVariables) | |
Initializes a new instance of the BaseGradientOptimizationMethod class. More... | |
BaseGradientOptimizationMethod (int numberOfVariables, std::function< Scalar(const af::array &, af::array &)> function) | |
Initializes a new instance of the BaseGradientOptimizationMethod class. More... | |
BaseGradientOptimizationMethod (NonlinearObjectiveFunction< Scalar > *function) | |
Initializes a new instance of the BaseGradientOptimizationMethod class. More... | |
void | InitLinesearch () |
Initializes the line search. More... | |
void | SetValue (Scalar v) |
Sets the output of the function at the current Solution. More... | |
void | SetNumberOfVariables (int n) |
Sets the number of variables (free parameters) in the optimization problem. More... | |
BaseOptimizationMethod (int numberOfVariables) | |
Initializes a new instance of the BaseOptimizationMethod class. More... | |
BaseOptimizationMethod (int numberOfVariables, std::function< Scalar(const af::array &, af::array &)> function) | |
Initializes a new instance of the BaseOptimizationMethod class. More... | |
BaseOptimizationMethod (NonlinearObjectiveFunction< Scalar > *function) | |
Initializes a new instance of the BaseOptimizationMethod class. More... | |
virtual bool | Optimize (int *cycle=nullptr)=0 |
Implements the actual optimization algorithm. This method should try to minimize the objective function. More... | |
Private Attributes | |
Scalar | min_step |
Scalar | max_step |
Scalar | sAlpha |
Scalar | sBeta1 |
Scalar | sBeta2 |
Scalar | sEpsilon |
Scalar | sDecay |
Scalar | delta |
Additional Inherited Members | |
int | maxIterations |
Scalar | _tolerance |
int | iterations |
ILineSearch< Scalar > * | linesearch |
NonlinearObjectiveFunction< Scalar > * | _function |
af::array | _x |
bool | _display |
af::dtype | m_dtype |
Adam optimizer.
Adam is an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure to update network weights iteratively based on training data. Adam differs from classical stochastic gradient descent: stochastic gradient descent maintains a single learning rate (termed alpha) for all weight updates, and that learning rate does not change during training. Adam, in contrast, maintains a learning rate for each network weight (parameter) and adapts it separately as learning unfolds.
The authors describe Adam as combining the advantages of two other extensions of stochastic gradient descent. Specifically:
Adam realizes the benefits of both AdaGrad and RMSProp. In addition to adapting the parameter learning rates based on the average first moment of the gradients (the mean), Adam also makes use of the average of the second moments of the gradients (the uncentered variance). Specifically, the algorithm calculates an exponential moving average of the gradient and of the squared gradient, and the parameters beta1 and beta2 control the decay rates of these moving averages.
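To make the update concrete, the sketch below implements one step of the standard Adam rule from the original paper using af::array operations. It is an illustration of the algorithm only, not the internal implementation in FgAdamSolver.h; the helper name AdamStep and its default arguments are chosen here purely for exposition.

```cpp
#include <arrayfire.h>
#include <cmath>

// One standard Adam step (illustrative sketch, not the library's internal code).
// g    : gradient of the objective at x
// m, v : exponential moving averages of the gradient and squared gradient
// t    : 1-based iteration counter used for bias correction
void AdamStep(af::array& x, const af::array& g, af::array& m, af::array& v, int t,
              float alpha = 0.001f, float beta1 = 0.9f, float beta2 = 0.999f,
              float epsilon = 1e-8f)
{
    m = beta1 * m + (1.0f - beta1) * g;       // first-moment estimate (mean)
    v = beta2 * v + (1.0f - beta2) * g * g;   // second-moment estimate (uncentered variance)

    // Bias-corrected estimates compensate for the zero initialization of m and v.
    af::array mHat = m / (1.0f - std::pow(beta1, t));
    af::array vHat = v / (1.0f - std::pow(beta2, t));

    // Per-parameter update: each weight gets its own effective step size.
    x -= alpha * mHat / (af::sqrt(vHat) + epsilon);
}
```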
References:
Kingma, D. P. and Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv:1412.6980.
HmetalT, 02.05.2019.
Definition at line 74 of file FgAdamSolver.h.
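Taken together, the constructors and the inherited Minimize/GetSolution methods documented on this page suggest usage along the following lines. This is a hedged sketch: the header path, the availability of a default for the line-search template parameter (LSType), and the use of SetSolution to provide a starting point are assumptions, not guarantees of this API.

```cpp
#include <arrayfire.h>
#include <cstdio>
#include <FgAdamSolver.h>

using namespace NeuralEngine::MachineLearning;

int main()
{
    const int n = 10;

    // Objective f(x) = ||x||^2. The callable returns the function value and writes
    // the gradient into its second argument, matching the documented
    // std::function< Scalar(const af::array &, af::array &) > signature.
    auto objective = [](const af::array& x, af::array& grad) -> float
    {
        grad = 2.0f * x;
        return af::sum<float>(x * x);
    };

    // The second (line-search) template argument is assumed to have a default here.
    AdamSolver<float> solver(n, objective);
    solver.SetAlpha(0.01f);
    solver.SetMaxIterations(1000);

    af::array x0 = af::randu(n);   // starting point
    solver.SetSolution(x0);
    bool converged = solver.Minimize();

    af::array xOpt = solver.GetSolution();
    printf("converged: %d, f(x*) = %f\n", (int)converged, (double)solver.GetValue());
    af_print(xOpt);
    return 0;
}
```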
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::AdamSolver | ( | int | numberOfVariables | ) |
Creates a new instance of the Adam optimization algorithm.
Admin, 3/27/2017.
numberOfVariables | The number of free parameters in the optimization problem. |
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::AdamSolver | ( | int | numberOfVariables, |
std::function< Scalar(const af::array &, af::array &)> | function | ||
) |
Creates a new instance of the Adam optimization algorithm.
Admin, 3/27/2017.
numberOfVariables | The number of free parameters in the function to be optimized. |
function | [in,out] The function to be optimized; it is called with the current parameter vector, writes the gradient into its second argument, and returns the function value. |
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::AdamSolver | ( | NonlinearObjectiveFunction< Scalar > * | function | ) |
Creates a new instance of the Adam optimization algorithm.
Admin, 3/27/2017.
function | The objective function and gradients whose optimum values should be found. |
NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::~AdamSolver | ( | ) |
Destructor.
, 15.08.2019.
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetBeta1 | ( | Scalar | beta1 | ) |
Sets decay rate for the first moment estimates.
, 15.08.2019.
beta1 | The first beta. |
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetBeta2 | ( | Scalar | beta2 | ) |
Sets decay rate for the second-moment estimates.
, 15.08.2019.
beta2 | The second beta. |
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetAlpha | ( | Scalar | alpha | ) |
Sets the learning rate.
, 15.08.2019.
alpha | The alpha. |
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetEpsilon | ( | Scalar | epsilon | ) |
Sets an epsilon to avoid division by zero.
, 15.08.2019.
epsilon | The epsilon. |
void NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::SetDecay | ( | Scalar | decay | ) |
Sets initial decay rate.
, 15.08.2019.
decay | The decay. |
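As a brief configuration example, the setters above map one-to-one onto the Adam hyperparameters. The values shown are the commonly recommended defaults from the Adam paper, applied to an already constructed solver (here assumed to be an AdamSolver<float> named solver).

```cpp
// Commonly recommended Adam defaults; tune per problem.
solver.SetAlpha(0.001f);    // learning rate (step size)
solver.SetBeta1(0.9f);      // decay rate of the first-moment estimates
solver.SetBeta2(0.999f);    // decay rate of the second-moment estimates
solver.SetEpsilon(1e-8f);   // small constant that avoids division by zero
solver.SetDecay(0.0f);      // initial learning-rate decay
```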
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetBeta1 | ( | ) |
Gets decay rate for the first moment estimates.
, 15.08.2019.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetBeta2 | ( | ) |
Gets decay rate for the second-moment estimates.
, 15.08.2019.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetAlpha | ( | ) |
Gets the learning rate.
, 15.08.2019.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetEpsilon | ( | ) |
Gets the epsilon.
, 15.08.2019.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::GetDecay | ( | ) |
Gets the initial decay.
, 15.08.2019.
bool NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::Optimize ( int * cycle = nullptr ) [override, protected, virtual]
Implements the actual optimization algorithm. This method should try to minimize the objective function.
Hmetal T, 11.04.2017.
Implements NeuralEngine::MachineLearning::BaseOptimizationMethod< Scalar >.
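For context, the following is a schematic of what an Adam-based Optimize loop generally does, combining the update sketch shown earlier with the inherited maximum-iteration and tolerance settings. It illustrates the algorithm, not the actual body of AdamSolver::Optimize; the free function MinimizeWithAdam and its use of the AdamStep helper from the earlier sketch are hypothetical.

```cpp
#include <arrayfire.h>
#include <cmath>
#include <functional>

// Schematic Adam minimization loop (illustrative only, not the library's internals).
// 'objective' evaluates f(x) and writes the gradient, mirroring the constructor's
// std::function signature; AdamStep is the update helper from the earlier sketch.
bool MinimizeWithAdam(const std::function<float(const af::array&, af::array&)>& objective,
                      af::array& x, int maxIterations, float tolerance,
                      float alpha = 0.001f, float beta1 = 0.9f,
                      float beta2 = 0.999f, float epsilon = 1e-8f)
{
    af::array m = af::constant(0, x.dims());   // first-moment accumulator
    af::array v = af::constant(0, x.dims());   // second-moment accumulator
    af::array g;

    float previous = objective(x, g);
    for (int t = 1; maxIterations == 0 || t <= maxIterations; ++t)
    {
        AdamStep(x, g, m, v, t, alpha, beta1, beta2, epsilon);   // parameter update
        float current = objective(x, g);                         // re-evaluate f and its gradient

        // Relative-difference stopping criterion, mirroring SetTolerance()/GetTolerance().
        if (tolerance > 0 &&
            std::fabs(current - previous) <= tolerance * std::fabs(previous))
            return true;

        previous = current;
    }
    return false;   // stopped because the iteration limit was reached
}
```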
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::min_step [private]
Definition at line 224 of file FgAdamSolver.h.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::max_step [private]
Definition at line 225 of file FgAdamSolver.h.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sAlpha [private]
Definition at line 227 of file FgAdamSolver.h.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sBeta1 [private]
Definition at line 228 of file FgAdamSolver.h.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sBeta2 [private]
Definition at line 229 of file FgAdamSolver.h.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sEpsilon [private]
Definition at line 230 of file FgAdamSolver.h.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::sDecay [private]
Definition at line 231 of file FgAdamSolver.h.
Scalar NeuralEngine::MachineLearning::AdamSolver< Scalar, LSType >::delta [private]
Definition at line 232 of file FgAdamSolver.h.