For installation:
	- in a terminal, run `pip install .` in the `src_mma_forked_with_gudhi` folder.
	This will install the Python libraries used to run all of our code.
	- our code can then be used in Python with:
		import mma
	- The non-synthetic datasets can be retrieved from the internet:
	 	- the immunohistochemistry datasets can be retrieved from https://github.com/MultiparameterTDAHistology/SpatialPatterningOfImmuneCells/
	 	- the UCR time series datasets can be retrieved from https://www.cs.ucr.edu/~eamonn/time_series_data_2018/
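Once installed, a quick sanity check can confirm the package is importable (this snippet only assumes the package installs under the name `mma`, as above):

```python
# Quick sanity check that the `mma` package installed correctly.
try:
    import mma  # noqa: F401
    mma_available = True
except ImportError:
    mma_available = False

print("mma available:", mma_available)
```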

/!\ The scripts expect the datasets to be stored in ~/Datasets/
	 For instance ~/Datasets/UCR, or ~/Datasets/1.5mmRegions/
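The expected layout can be checked with a few lines of standard-library Python (the directory names `UCR` and `1.5mmRegions` are the ones from the examples above):

```python
import os

# The scripts look for datasets under ~/Datasets/ (e.g. ~/Datasets/UCR).
dataset_root = os.path.expanduser("~/Datasets")
expected = [os.path.join(dataset_root, name) for name in ("UCR", "1.5mmRegions")]

for path in expected:
    print(path, "->", "found" if os.path.isdir(path) else "MISSING")
```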

/!\    Note that if you want to compute the multiparameter persistence image of Carrière & Blumberg
    (https://papers.nips.cc/paper/2020/hash/fdff71fcab656abfbefaabecab1a7f6d-Abstract.html),
    you will need to compile the Dionysus implementation of vineyards; for this, follow
    the procedure detailed at https://github.com/MathieuCarriere/multipers
    Otherwise, if you only need our GS representations, comment out
    `from dionysus_vineyards import ls_vineyards as lsvine` on line 18 of
    `src_mma_forked_with_gudhi/multipers.py`, as well as `from multipers import *` in every Python
    script that contains that line.
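To avoid editing files by hand, a small helper along these lines can comment out those imports (this helper is a sketch, not part of the repository; the path in the example is the file mentioned above):

```python
def comment_out_import(path, needle="from dionysus_vineyards import ls_vineyards as lsvine"):
    """Prefix every line containing `needle` with '# ' so Dionysus is no longer required."""
    with open(path) as f:
        lines = f.readlines()
    with open(path, "w") as f:
        for line in lines:
            if needle in line and not line.lstrip().startswith("#"):
                f.write("# " + line)
            else:
                f.write(line)

# Example: comment_out_import("src_mma_forked_with_gudhi/multipers.py")
```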



For convergence rates, i.e., in the convergence rate folder:
	- You can compute the modules with the cv_immuno.py / cv_synthetic.py / cv_synthetic2.py scripts;
		the resulting modules are stored in the same folder as the dataset.
		
			python cv_immuno.py 1 30 10
				where	1: dataset number
						30: triangulation grid size (faster with smaller numbers)
						10: number of iterations
			python cv_synthetic.py 10 1000 100
				where 	10: minimum number of points in the datasets
						1000: maximum number of points 
						100: number of iterations
	- You can then compute the images using the get_imgs.py / get_immuno_imgs.py scripts;
		the parameters must be tuned inside the scripts.
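The invocations above can also be scripted, e.g. to sweep several immuno datasets. This sketch only builds the command lines shown above (it assumes the scripts sit in the current directory, and the actual launch is left commented out):

```python
import sys

def cv_immuno_cmd(dataset, grid_size=30, n_iter=10):
    """Command line for cv_immuno.py: dataset number, triangulation grid size, iterations."""
    return [sys.executable, "cv_immuno.py", str(dataset), str(grid_size), str(n_iter)]

# Sweep the first three immuno datasets with a coarse 30x30 triangulation grid:
for ds in (1, 2, 3):
    cmd = cv_immuno_cmd(ds)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch the runs
```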

For classification, i.e., in the classification folder:
	- experiments.py computes the modules and cross-validates the image parameters (which have to be tuned inside the script). You may want to reduce the number of parameters to cross-validate when first testing the script (line 370).
			python experiments.py Coffee 50 1 10
				where:	Coffee: the dataset name
						50: the image resolution
						1: force recomputation of the modules
						10: number of cores to use
	- landscape_experiments.py: the same, but with our multiparameter persistence landscape implementation
			python landscape_experiments.py Coffee 50 1 10
				where	Coffee: the dataset name
						50: image resolution
						1: force recomputation of the modules
						10: number of cores to use
	- multipers_experiments.py: the same, but with the multiparameter persistence image (MPI) from https://github.com/MathieuCarriere/multipers/blob/main/multipers.py
	
	- in the immuno folder:
		classif_immuno.py: computes the immuno classification with the specified parameters, e.g.,
			python classif_immuno.py 0.001 0
		computes the 10-fold accuracy of a random forest on our image with bandwidth = 0.001 and p = 0
		classif_immuno_landscape.py: the same, but with our multiparameter persistence landscape implementation
			python classif_immuno_landscape.py
		get_img.py: computes an image / landscape of a random immuno module (if the modules have already been computed), to give an idea of the output
			python get_img.py
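Parameter sweeps over classif_immuno.py can be scripted the same way. In this sketch the parameter grid is purely illustrative; only the pair bandwidth = 0.001, p = 0 comes from the example above:

```python
import sys

# Illustrative parameter sweep for classif_immuno.py; only the pair
# bandwidth = 0.001, p = 0 comes from the README example.
cmds = []
for bandwidth in (0.0001, 0.001, 0.01):
    for p in (0, 1):
        cmds.append([sys.executable, "classif_immuno.py", str(bandwidth), str(p)])

for cmd in cmds:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch the runs
```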

