This file contains instructions on how to run
the code for the experiments described in this paper.

This paper performs experiments with a single neuron
and a multi-layer perceptron (MLP) on different datasets.

We have used Python 3.5 and PyTorch for some of our
experiments. To run the experiments, you need to have
Python 3.5 installed. The remaining dependencies
are listed in requirements.txt.

To demonstrate the effectiveness of our approach
on a single neuron, we use two scenarios:

1) Train a single neuron on a linear equation
  (0.5x + 0.2y = 0.9).
In this experiment, we try to find x and y
(the weights of the single neuron) such that the output
is 0.9.

To launch this experiment, run the following command:
python3 train_single_neuron.py linear

This will print the optimal value achieved by each
loss function and produce graphs showing the
convergence of the different loss functions over
epochs and time.
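The linear experiment above can be sketched as follows. This is a minimal illustration, not the paper's code: it fits the two weights of a single linear neuron with plain gradient descent on the squared (L2) loss, for the fixed input (0.5, 0.2) and target 0.9.

```python
# Minimal sketch (not the paper's script): find weights x, y of a single
# linear neuron so that 0.5*x + 0.2*y converges to the target 0.9.

def train_linear(lr=0.1, epochs=500):
    x, y = 0.0, 0.0                   # neuron weights, initialised at zero
    for _ in range(epochs):
        out = 0.5 * x + 0.2 * y       # neuron output for fixed input (0.5, 0.2)
        err = out - 0.9               # residual against the target 0.9
        # gradient-descent step on the L2 loss err**2
        x -= lr * 2 * err * 0.5
        y -= lr * 2 * err * 0.2
    return x, y, 0.5 * x + 0.2 * y

x, y, out = train_linear()
print(f"x={x:.3f}, y={y:.3f}, output={out:.3f}")  # output converges to 0.9
```

The actual script additionally compares several loss functions and saves convergence plots, which this sketch omits.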

2) Train a single neuron on the Iris dataset
The Iris dataset is publicly available from the UCI
Machine Learning Repository. You do not need to
download it; the code loads it from the sklearn
library. We use 80 samples for training and 20 samples
for testing, and predict only 2 of the classes.

For this experiment, run the following command:
python3 train_single_neuron.py iris

Besides training, this will also produce convergence
plots for the Iris dataset and run a convergence
analysis for all three losses.
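The data setup described above can be sketched as follows. This is an assumed reconstruction, not the paper's script: it loads Iris from sklearn, keeps only two of the three classes, and makes the 80/20 train/test split.

```python
# Hedged sketch of the assumed Iris setup: 2 classes, 80 train / 20 test.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

data = load_iris()
mask = data.target < 2                      # keep only the first 2 classes
X, y = data.data[mask], data.target[mask]   # 100 samples remain

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=80, test_size=20, stratify=y, random_state=0)

print(X_train.shape, X_test.shape)  # (80, 4) (20, 4)
```

Which two classes the paper's code keeps is not stated here; the sketch simply takes the first two.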

For the multi-neuron (MLP) case, we likewise
demonstrate on two kinds of datasets.

1) Train MLP on Boston Housing Dataset:

Boston Housing is a regression task that uses 13
features to predict house prices.

The training set contains 400 samples and the test
set contains 100 samples. The code loads the dataset
from sklearn, so there is no need to download it.

We show the convergence difference between L1 loss
with gradient descent, L2 loss with gradient descent,
and our proposed Lyapunov function with modified
gradient descent.
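For reference, the two baseline losses compared above can be sketched as follows; the proposed Lyapunov loss and its modified gradient descent are defined in the paper and are not reproduced here.

```python
import numpy as np

# For a residual r = prediction - target:
#   L2 loss:  r**2, gradient w.r.t. the prediction: 2*r  (shrinks near 0)
#   L1 loss: |r|,   gradient w.r.t. the prediction: sign(r)  (constant size)

def l2_grad(pred, target):
    return 2.0 * (pred - target)

def l1_grad(pred, target):
    return np.sign(pred - target)

r = np.array([-2.0, 0.5, 3.0])
print(l2_grad(r, 0.0))  # [-4.  1.  6.]
print(l1_grad(r, 0.0))  # [-1.  1.  1.]
```

The differing gradient magnitudes are what drive the convergence differences the experiment plots.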

For this experiment, run the following command:

python3 mlp_train_boston.py

This will train an MLP on the Boston dataset, save a
plot of loss vs. time for the 3 different losses, and
then run a convergence analysis of the three losses.

2) Train an MLP on the IMDB+Wiki faces dataset for age prediction
The IMDB+Wiki dataset contains around 500,000 face images
labeled with age. We train an MLP on 20,000 IMDB face
images and test on 4,000 images. The dataset must be
downloaded from "https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/".
For this experiment, download only the IMDB faces dataset.
 
The code for this experiment is written in PyTorch to
enable faster training and the use of a GPU.

To run the analysis on this dataset:

If training on GPU:

python mlp_train_imdb.py --gpu 

Otherwise:

python mlp_train_imdb.py
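A common way to handle a --gpu flag like the one above can be sketched as follows; the actual mlp_train_imdb.py may differ. The sketch only parses the flag and picks a device string, which the real script would pass to torch.device(...).

```python
# Hedged sketch of the assumed --gpu flag handling (not the paper's code).
import argparse

def pick_device(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--gpu", action="store_true",
                        help="train on the GPU instead of the CPU")
    args = parser.parse_args(argv)
    # In the real script this string would be passed to torch.device(...)
    return "cuda" if args.gpu else "cpu"

print(pick_device(["--gpu"]))  # cuda
print(pick_device([]))         # cpu
```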

This code will train and save checkpoints for the L1,
L2, and Lyapunov losses. It will also generate CSV files
recording the progression of training and test loss over
time.

