## Long-Short Decision Transformer

## Datasets

The dataset directory includes two types of data. Due to file size limitations, we only include two relatively small datasets:

	1. The original datasets used in DT, marked with the suffix "-v2"
	2. The datasets augmented by our Goal-State concatenation method, marked with the suffix "-v6"
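The core idea of goal-state concatenation is to append the trajectory's goal state to every per-step observation before training. The sketch below illustrates this; it is an assumption for illustration only (array shapes and the function name are not from this repo's code), not the repo's actual data pipeline.

```python
import numpy as np

def concat_goal_state(observations: np.ndarray, goal: np.ndarray) -> np.ndarray:
    """Append a fixed goal state to every per-step observation.

    observations: (T, obs_dim) trajectory of states
    goal:         (goal_dim,) goal state for the trajectory
    Returns:      (T, obs_dim + goal_dim) augmented observations
    """
    # Repeat the goal once per timestep, then concatenate along the feature axis.
    goal_tiled = np.tile(goal, (observations.shape[0], 1))  # (T, goal_dim)
    return np.concatenate([observations, goal_tiled], axis=1)

# Example: a 5-step trajectory with 4-dim observations and a 2-dim goal
obs = np.zeros((5, 4))
goal = np.array([1.0, 2.0])
aug = concat_goal_state(obs, goal)
print(aug.shape)  # (5, 6)
```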


## Model

Our proposed LSDT model is included in the `LSDT` folder.


## Requirements

Software requirements follow min-decision-transformer: https://github.com/nikhilbarhate99/min-decision-transformer.

## Training

Here is an example command for training:

	python3 scripts/train.py --env maze2d --dataset large --device cuda --context_len 30 --log_dir maze2d_large

To enable goal-state concatenation, add the `--goalconcate` flag:

	python3 scripts/train.py --env maze2d --dataset large --device cuda --context_len 30 --log_dir maze2d_large --goalconcate

## Evaluation

Here is an example command for testing:

	python3 scripts/test_o.py --env hopper --dataset medium-expert --num_eval_ep 10 --chk_pt_name 1_hopper_medium_expert_best.pt --chk_pt_dir /home/usr/LSDT_code/Hopper_medium --context_len 10 --convdim 96 --render