# Neural Experience Replay Sampler (NERS) 

We adapted the code from https://github.com/Kaixhin/Rainbow to implement NERS on top of Rainbow.
This is a demo version of NERS under Rainbow. We can provide the full version of the code (with SAC and TD3) upon request or after acceptance.

## How to run

For instance, run training with the following command:
```
CUDA_VISIBLE_DEVICES=0 python main.py --target-update 2000 \
               --T-max 100000 \
               --learn-start 1600 \
               --memory-capacity 100000 \
               --replay-frequency 1 \
               --multi-step 20 \
               --architecture data-efficient \
               --hidden-size 256 \
               --learning-rate 0.0001 \
               --evaluation-interval 1000 \
               --replay-type NERS \
               --game ms_pacman
```
Prioritized Experience Replay (PER) can be used instead by passing `--replay-type PER`.
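For background, proportional PER draws transition *i* with probability proportional to its priority raised to the power α, and corrects the induced bias with importance-sampling weights scaled by β. The sketch below illustrates that sampling rule only; it is not the sampler used in this repository, and the function name `sample_per` and its defaults are hypothetical:

```python
import numpy as np

def sample_per(priorities, batch_size, alpha=0.6, beta=0.4, rng=None):
    """Illustrative proportional PER sampling (not this repo's implementation).

    Draws indices with probability priorities**alpha (normalized) and returns
    importance-sampling weights (N * P(i))**(-beta), normalized by the max.
    """
    rng = rng or np.random.default_rng(0)
    p = np.asarray(priorities, dtype=np.float64) ** alpha
    probs = p / p.sum()
    idx = rng.choice(len(probs), size=batch_size, p=probs)
    weights = (len(probs) * probs[idx]) ** (-beta)
    weights /= weights.max()  # normalize so weights are in (0, 1]
    return idx, weights
```

With uniform priorities this reduces to uniform sampling with unit weights, which is a quick sanity check on any PER-style sampler.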
