wandb: Currently logged in as: 804703098. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.16.6
wandb: Run data is saved locally in /home/user/zhangyang/PycharmProjects/Nips2024-ITPC-v2/Nips2024-ITPC-v2/onpolicy/scripts/results/MPE/simple_tag_tr/rmappotrsyn/exp_train_continue_tag_base_CMT_s2r2_v1/wandb/run-20240508_193302-f0rehbk5
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run MPE_6
wandb: ⭐️ View project at https://wandb.ai/804703098/Continue_Tag_Base_v1
wandb: 🚀 View run at https://wandb.ai/804703098/Continue_Tag_Base_v1/runs/f0rehbk5
choose to use cpu...
idv policy and team policy use same initial params!

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 0/10000 episodes, total num timesteps 200/2000000, FPS 361.

team_policy eval average step individual rewards of agent0: 0.020699686343491258
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 4
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: 0.024430205508187672
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 4
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.07086827252363231
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: 0.020300773243975473
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 4
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: 0.02315309803901373
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 4
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: -0.00454099827690438
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 3
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: -0.09447647614075926
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 0
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: 0.10774217569433052
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 7
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: 0.03655033592121482
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 5
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.0346344672618819
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 3
idv_policy eval team catch total num: 0
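Each evaluation block above repeats a fixed four-line pattern per agent and per policy ("team" vs. "idv"): average step individual reward, average team episode reward, individual catch count, and team catch count. A minimal parsing sketch, assuming the exact line formats shown in this log (the regexes below are ad hoc, not a documented interface):

```python
import re

# Line formats copied from the log above; adjust if the logger changes.
REWARD_RE = re.compile(
    r"(team|idv)_policy eval average step individual rewards of agent(\d+): (-?\d+\.?\d*)"
)
CATCH_RE = re.compile(
    r"(team|idv)_policy eval idv catch total num of agent(\d+): (\d+)"
)

def parse_eval_block(lines):
    """Collect per-(policy, agent) step rewards and individual catch counts."""
    stats = {}
    for line in lines:
        m = REWARD_RE.search(line)
        if m:
            policy, agent = m.group(1), int(m.group(2))
            stats.setdefault((policy, agent), {})["step_reward"] = float(m.group(3))
        m = CATCH_RE.search(line)
        if m:
            policy, agent = m.group(1), int(m.group(2))
            stats.setdefault((policy, agent), {})["idv_catch"] = int(m.group(3))
    return stats
```

Feeding it the lines of one evaluation block yields a dict keyed by `("team", 0)`, `("idv", 3)`, etc., which is convenient for plotting the per-agent trends across checkpoints.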

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1/10000 episodes, total num timesteps 400/2000000, FPS 298.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2/10000 episodes, total num timesteps 600/2000000, FPS 322.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3/10000 episodes, total num timesteps 800/2000000, FPS 320.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 4/10000 episodes, total num timesteps 1000/2000000, FPS 320.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 5/10000 episodes, total num timesteps 1200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 6/10000 episodes, total num timesteps 1400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 7/10000 episodes, total num timesteps 1600/2000000, FPS 336.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 8/10000 episodes, total num timesteps 1800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 9/10000 episodes, total num timesteps 2000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 10/10000 episodes, total num timesteps 2200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 11/10000 episodes, total num timesteps 2400/2000000, FPS 334.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 12/10000 episodes, total num timesteps 2600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 13/10000 episodes, total num timesteps 2800/2000000, FPS 335.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 14/10000 episodes, total num timesteps 3000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 15/10000 episodes, total num timesteps 3200/2000000, FPS 323.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 16/10000 episodes, total num timesteps 3400/2000000, FPS 322.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 17/10000 episodes, total num timesteps 3600/2000000, FPS 322.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 18/10000 episodes, total num timesteps 3800/2000000, FPS 323.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 19/10000 episodes, total num timesteps 4000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 20/10000 episodes, total num timesteps 4200/2000000, FPS 322.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 21/10000 episodes, total num timesteps 4400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 22/10000 episodes, total num timesteps 4600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 23/10000 episodes, total num timesteps 4800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 24/10000 episodes, total num timesteps 5000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 25/10000 episodes, total num timesteps 5200/2000000, FPS 325.

team_policy eval average step individual rewards of agent0: -0.042913940281442714
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: 0.008373165288672322
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 3
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.04614032916101128
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: 0.037183398825317376
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 4
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: 0.023692770896920488
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 4
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: -0.08061929218609035
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: -0.04333195643440304
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 2
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: -0.03497941034961383
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 2
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: -0.11351861412862142
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 0
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.10326436480412507
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 0
idv_policy eval team catch total num: 0

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 26/10000 episodes, total num timesteps 5400/2000000, FPS 323.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 27/10000 episodes, total num timesteps 5600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 28/10000 episodes, total num timesteps 5800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 29/10000 episodes, total num timesteps 6000/2000000, FPS 322.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 30/10000 episodes, total num timesteps 6200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 31/10000 episodes, total num timesteps 6400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 32/10000 episodes, total num timesteps 6600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 33/10000 episodes, total num timesteps 6800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 34/10000 episodes, total num timesteps 7000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 35/10000 episodes, total num timesteps 7200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 36/10000 episodes, total num timesteps 7400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 37/10000 episodes, total num timesteps 7600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 38/10000 episodes, total num timesteps 7800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 39/10000 episodes, total num timesteps 8000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 40/10000 episodes, total num timesteps 8200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 41/10000 episodes, total num timesteps 8400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 42/10000 episodes, total num timesteps 8600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 43/10000 episodes, total num timesteps 8800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 44/10000 episodes, total num timesteps 9000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 45/10000 episodes, total num timesteps 9200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 46/10000 episodes, total num timesteps 9400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 47/10000 episodes, total num timesteps 9600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 48/10000 episodes, total num timesteps 9800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 49/10000 episodes, total num timesteps 10000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 50/10000 episodes, total num timesteps 10200/2000000, FPS 327.

team_policy eval average step individual rewards of agent0: -0.04265817376420213
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 2
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: -0.024401829825966927
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 2
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.08612921457559919
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: -0.032124067287200825
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 2
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: 0.06716935563895433
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 6
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: -0.0116315149150019
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 3
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: -0.07512478666985246
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 0
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: -0.0802035531703913
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: -0.08968947104791422
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 0
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.10821578360322238
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 0
idv_policy eval team catch total num: 0
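The `Scenario ... updates N/10000 episodes, total num timesteps T/2000000, FPS F.` progress lines repeated throughout carry three numbers each: the update index, the cumulative environment timesteps, and the throughput. A small sketch for pulling them out, assuming the exact wording shown in this log:

```python
import re

# Assumed format of the progress lines in this log (not a stable interface).
PROGRESS_RE = re.compile(
    r"updates (\d+)/(\d+) episodes, total num timesteps (\d+)/(\d+), FPS (\d+)"
)

def parse_progress(line):
    """Return (update, total_updates, timestep, total_timesteps, fps), or None."""
    m = PROGRESS_RE.search(line)
    if m is None:
        return None
    return tuple(int(g) for g in m.groups())
```

Applied line by line, this recovers the FPS curve (roughly 320-332 after warm-up here) and confirms the 200-timestep stride between updates.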

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 51/10000 episodes, total num timesteps 10400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 52/10000 episodes, total num timesteps 10600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 53/10000 episodes, total num timesteps 10800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 54/10000 episodes, total num timesteps 11000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 55/10000 episodes, total num timesteps 11200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 56/10000 episodes, total num timesteps 11400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 57/10000 episodes, total num timesteps 11600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 58/10000 episodes, total num timesteps 11800/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 59/10000 episodes, total num timesteps 12000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 60/10000 episodes, total num timesteps 12200/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 61/10000 episodes, total num timesteps 12400/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 62/10000 episodes, total num timesteps 12600/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 63/10000 episodes, total num timesteps 12800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 64/10000 episodes, total num timesteps 13000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 65/10000 episodes, total num timesteps 13200/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 66/10000 episodes, total num timesteps 13400/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 67/10000 episodes, total num timesteps 13600/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 68/10000 episodes, total num timesteps 13800/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 69/10000 episodes, total num timesteps 14000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 70/10000 episodes, total num timesteps 14200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 71/10000 episodes, total num timesteps 14400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 72/10000 episodes, total num timesteps 14600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 73/10000 episodes, total num timesteps 14800/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 74/10000 episodes, total num timesteps 15000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 75/10000 episodes, total num timesteps 15200/2000000, FPS 331.

team_policy eval average step individual rewards of agent0: 0.033528589114779205
team_policy eval average team episode rewards of agent0: 7.5
team_policy eval idv catch total num of agent0: 4
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent1: -0.0505653994291859
team_policy eval average team episode rewards of agent1: 7.5
team_policy eval idv catch total num of agent1: 1
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent2: 0.15354529259640196
team_policy eval average team episode rewards of agent2: 7.5
team_policy eval idv catch total num of agent2: 9
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent3: 0.024773327034391005
team_policy eval average team episode rewards of agent3: 7.5
team_policy eval idv catch total num of agent3: 4
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent4: 0.13205002923142778
team_policy eval average team episode rewards of agent4: 7.5
team_policy eval idv catch total num of agent4: 8
team_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent0: 0.026086173141468744
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 4
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: -0.04970232703477121
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 2
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: -0.06042620772819669
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: -0.07984259242116552
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.011035084156058686
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 3
idv_policy eval team catch total num: 0
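The update-75 block above is the first checkpoint where the team policy reports nonzero team episode rewards (7.5) and team catches (3) while the individual policy still reports zero. A quick sanity check, reusing the per-agent step rewards printed in that block, shows the team policy also leads on mean per-step reward at this point:

```python
# Step rewards copied verbatim from the update-75 evaluation block above.
team_step_rewards = [0.033528589114779205, -0.0505653994291859,
                     0.15354529259640196, 0.024773327034391005,
                     0.13205002923142778]
idv_step_rewards = [0.026086173141468744, -0.04970232703477121,
                    -0.06042620772819669, -0.07984259242116552,
                    -0.011035084156058686]

def mean(xs):
    return sum(xs) / len(xs)

# At this checkpoint the team policy's mean per-step reward is higher.
assert mean(team_step_rewards) > mean(idv_step_rewards)
```

This is only a single-checkpoint comparison; later blocks (e.g. update 100) show both policies fluctuating, so a per-checkpoint plot is needed before drawing conclusions.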

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 76/10000 episodes, total num timesteps 15400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 77/10000 episodes, total num timesteps 15600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 78/10000 episodes, total num timesteps 15800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 79/10000 episodes, total num timesteps 16000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 80/10000 episodes, total num timesteps 16200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 81/10000 episodes, total num timesteps 16400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 82/10000 episodes, total num timesteps 16600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 83/10000 episodes, total num timesteps 16800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 84/10000 episodes, total num timesteps 17000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 85/10000 episodes, total num timesteps 17200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 86/10000 episodes, total num timesteps 17400/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 87/10000 episodes, total num timesteps 17600/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 88/10000 episodes, total num timesteps 17800/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 89/10000 episodes, total num timesteps 18000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 90/10000 episodes, total num timesteps 18200/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 91/10000 episodes, total num timesteps 18400/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 92/10000 episodes, total num timesteps 18600/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 93/10000 episodes, total num timesteps 18800/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 94/10000 episodes, total num timesteps 19000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 95/10000 episodes, total num timesteps 19200/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 96/10000 episodes, total num timesteps 19400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 97/10000 episodes, total num timesteps 19600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 98/10000 episodes, total num timesteps 19800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 99/10000 episodes, total num timesteps 20000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 100/10000 episodes, total num timesteps 20200/2000000, FPS 332.

team_policy eval average step individual rewards of agent0: -0.06001974969716998
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: -0.021970125050812212
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 3
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.1136173402636269
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: -0.0824940744184929
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: 0.12106937753323668
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 10
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: -0.024728549583296483
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 4
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: 0.05247836616818935
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 6
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: -0.07845987557484885
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: -0.11329717168179627
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 0
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.11530231936763197
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 0
idv_policy eval team catch total num: 0

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 101/10000 episodes, total num timesteps 20400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 102/10000 episodes, total num timesteps 20600/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 103/10000 episodes, total num timesteps 20800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 104/10000 episodes, total num timesteps 21000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 105/10000 episodes, total num timesteps 21200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 106/10000 episodes, total num timesteps 21400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 107/10000 episodes, total num timesteps 21600/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 108/10000 episodes, total num timesteps 21800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 109/10000 episodes, total num timesteps 22000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 110/10000 episodes, total num timesteps 22200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 111/10000 episodes, total num timesteps 22400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 112/10000 episodes, total num timesteps 22600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 113/10000 episodes, total num timesteps 22800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 114/10000 episodes, total num timesteps 23000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 115/10000 episodes, total num timesteps 23200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 116/10000 episodes, total num timesteps 23400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 117/10000 episodes, total num timesteps 23600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 118/10000 episodes, total num timesteps 23800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 119/10000 episodes, total num timesteps 24000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 120/10000 episodes, total num timesteps 24200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 121/10000 episodes, total num timesteps 24400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 122/10000 episodes, total num timesteps 24600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 123/10000 episodes, total num timesteps 24800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 124/10000 episodes, total num timesteps 25000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 125/10000 episodes, total num timesteps 25200/2000000, FPS 332.

team_policy eval average step individual rewards of agent0: -0.050767156208862206
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 2
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: -0.09202467345254409
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.1026605613092039
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: -0.08717725964967876
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: -0.1091861369626423
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 0
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: 0.00903535851736472
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 4
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: -0.09564713734963531
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 0
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: 0.004076981478168738
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 4
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: 0.05628743885974626
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 6
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: 0.05711488579826259
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 6
idv_policy eval team catch total num: 0
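The progress lines in this log follow a fixed template, so they can be parsed mechanically. Below is a minimal sketch of such a parser; `parse_progress` and `PROGRESS_RE` are hypothetical helper names, not part of the training code that produced this log.

```python
import re

# Hypothetical helper (not part of the training code): parse one progress
# line of the form seen throughout this log into its numeric fields.
PROGRESS_RE = re.compile(
    r"Scenario (?P<scenario>\S+) Algo (?P<algo>\S+) Exp (?P<exp>\S+) "
    r"updates (?P<update>\d+)/(?P<total_updates>\d+) episodes, "
    r"total num timesteps (?P<step>\d+)/(?P<total_steps>\d+), FPS (?P<fps>\d+)\."
)

def parse_progress(line: str) -> dict:
    m = PROGRESS_RE.search(line)
    if m is None:
        raise ValueError(f"not a progress line: {line!r}")
    d = m.groupdict()
    # Convert the counter fields to ints; the name fields stay as strings.
    for k in ("update", "total_updates", "step", "total_steps", "fps"):
        d[k] = int(d[k])
    return d

sample = (" Scenario simple_tag_tr Algo rmappotrsyn Exp "
          "exp_train_continue_tag_base_CMT_s2r2_v1 updates 125/10000 episodes, "
          "total num timesteps 25200/2000000, FPS 332.")
print(parse_progress(sample)["step"])  # 25200
```

One convenience of parsing rather than eyeballing: the `step` field advances by exactly 200 per update in this log, so gaps or restarts become easy to detect programmatically.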

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 126/10000 episodes, total num timesteps 25400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 127/10000 episodes, total num timesteps 25600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 128/10000 episodes, total num timesteps 25800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 129/10000 episodes, total num timesteps 26000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 130/10000 episodes, total num timesteps 26200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 131/10000 episodes, total num timesteps 26400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 132/10000 episodes, total num timesteps 26600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 133/10000 episodes, total num timesteps 26800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 134/10000 episodes, total num timesteps 27000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 135/10000 episodes, total num timesteps 27200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 136/10000 episodes, total num timesteps 27400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 137/10000 episodes, total num timesteps 27600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 138/10000 episodes, total num timesteps 27800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 139/10000 episodes, total num timesteps 28000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 140/10000 episodes, total num timesteps 28200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 141/10000 episodes, total num timesteps 28400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 142/10000 episodes, total num timesteps 28600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 143/10000 episodes, total num timesteps 28800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 144/10000 episodes, total num timesteps 29000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 145/10000 episodes, total num timesteps 29200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 146/10000 episodes, total num timesteps 29400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 147/10000 episodes, total num timesteps 29600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 148/10000 episodes, total num timesteps 29800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 149/10000 episodes, total num timesteps 30000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 150/10000 episodes, total num timesteps 30200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.0473996466814077
team_policy eval average team episode rewards of agent0: 2.5
team_policy eval idv catch total num of agent0: 5
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent1: -0.004422631223058353
team_policy eval average team episode rewards of agent1: 2.5
team_policy eval idv catch total num of agent1: 3
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent2: 0.05169613829791979
team_policy eval average team episode rewards of agent2: 2.5
team_policy eval idv catch total num of agent2: 5
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent3: 0.006703083538517145
team_policy eval average team episode rewards of agent3: 2.5
team_policy eval idv catch total num of agent3: 3
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent4: 0.05433588854891122
team_policy eval average team episode rewards of agent4: 2.5
team_policy eval idv catch total num of agent4: 5
team_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent0: -0.027677091490153044
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 2
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: 0.03630774530529836
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 4
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: 0.06074134897141045
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 5
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: 0.08391262866063631
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 6
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.05832621386207343
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 1
idv_policy eval team catch total num: 0
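The per-agent eval lines above can likewise be folded into a nested mapping keyed by policy, agent id, and metric. This is a sketch under the assumption that the line wording stays exactly as printed here; `parse_eval` is a hypothetical helper, and the team-level `team catch total num` lines (which name no agent) are deliberately skipped.

```python
import re
from collections import defaultdict

# Match lines like:
#   "team_policy eval average step individual rewards of agent0: 0.047..."
EVAL_RE = re.compile(
    r"(?P<policy>team_policy|idv_policy) eval "
    r"(?P<metric>average step individual rewards|"
    r"average team episode rewards|idv catch total num) "
    r"of agent(?P<agent>\d+): (?P<value>-?\d+\.?\d*)"
)

def parse_eval(lines):
    """Fold per-agent eval lines into {policy: {agent_id: {metric: value}}}.

    Team-level lines ("... team catch total num: N") carry no agent id and
    are not captured by this sketch.
    """
    out = defaultdict(lambda: defaultdict(dict))
    for line in lines:
        m = EVAL_RE.search(line)
        if m:
            out[m["policy"]][int(m["agent"])][m["metric"]] = float(m["value"])
    return out

sample_lines = [
    "team_policy eval average step individual rewards of agent0: 0.0473996466814077",
    "team_policy eval average team episode rewards of agent0: 2.5",
    "team_policy eval idv catch total num of agent0: 5",
]
stats = parse_eval(sample_lines)
print(stats["team_policy"][0]["average team episode rewards"])  # 2.5
```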

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 151/10000 episodes, total num timesteps 30400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 152/10000 episodes, total num timesteps 30600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 153/10000 episodes, total num timesteps 30800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 154/10000 episodes, total num timesteps 31000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 155/10000 episodes, total num timesteps 31200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 156/10000 episodes, total num timesteps 31400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 157/10000 episodes, total num timesteps 31600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 158/10000 episodes, total num timesteps 31800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 159/10000 episodes, total num timesteps 32000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 160/10000 episodes, total num timesteps 32200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 161/10000 episodes, total num timesteps 32400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 162/10000 episodes, total num timesteps 32600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 163/10000 episodes, total num timesteps 32800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 164/10000 episodes, total num timesteps 33000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 165/10000 episodes, total num timesteps 33200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 166/10000 episodes, total num timesteps 33400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 167/10000 episodes, total num timesteps 33600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 168/10000 episodes, total num timesteps 33800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 169/10000 episodes, total num timesteps 34000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 170/10000 episodes, total num timesteps 34200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 171/10000 episodes, total num timesteps 34400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 172/10000 episodes, total num timesteps 34600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 173/10000 episodes, total num timesteps 34800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 174/10000 episodes, total num timesteps 35000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 175/10000 episodes, total num timesteps 35200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: -0.06596359039083234
team_policy eval average team episode rewards of agent0: 2.5
team_policy eval idv catch total num of agent0: 0
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent1: 0.013298609256029074
team_policy eval average team episode rewards of agent1: 2.5
team_policy eval idv catch total num of agent1: 3
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent2: -0.002125845356911773
team_policy eval average team episode rewards of agent2: 2.5
team_policy eval idv catch total num of agent2: 3
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent3: 0.032828150550767626
team_policy eval average team episode rewards of agent3: 2.5
team_policy eval idv catch total num of agent3: 4
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent4: 0.009967461477947283
team_policy eval average team episode rewards of agent4: 2.5
team_policy eval idv catch total num of agent4: 3
team_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent0: -0.0027638711180537445
idv_policy eval average team episode rewards of agent0: 5.0
idv_policy eval idv catch total num of agent0: 3
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent1: -0.023739718353373743
idv_policy eval average team episode rewards of agent1: 5.0
idv_policy eval idv catch total num of agent1: 2
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent2: -0.016401417040836287
idv_policy eval average team episode rewards of agent2: 5.0
idv_policy eval idv catch total num of agent2: 3
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent3: -0.07195263719290161
idv_policy eval average team episode rewards of agent3: 5.0
idv_policy eval idv catch total num of agent3: 1
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent4: 0.04837249435462138
idv_policy eval average team episode rewards of agent4: 5.0
idv_policy eval idv catch total num of agent4: 5
idv_policy eval team catch total num: 2
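Since every progress line reports both cumulative timesteps and FPS, a remaining wall-clock estimate falls out directly. A back-of-the-envelope sketch using the figures at this point in the log (timestep 35200 of 2000000 at ~332 FPS):

```python
# Rough ETA from the progress lines above: timesteps remaining / FPS.
total_steps, current_step, fps = 2_000_000, 35_200, 332
eta_hours = (total_steps - current_step) / fps / 3600
print(round(eta_hours, 2))  # 1.64
```

At a steady ~332 FPS the run has roughly 1.6 hours left from this checkpoint; FPS only wobbles between 331 and 333 here, so the estimate is fairly stable.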

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 176/10000 episodes, total num timesteps 35400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 177/10000 episodes, total num timesteps 35600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 178/10000 episodes, total num timesteps 35800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 179/10000 episodes, total num timesteps 36000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 180/10000 episodes, total num timesteps 36200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 181/10000 episodes, total num timesteps 36400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 182/10000 episodes, total num timesteps 36600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 183/10000 episodes, total num timesteps 36800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 184/10000 episodes, total num timesteps 37000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 185/10000 episodes, total num timesteps 37200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 186/10000 episodes, total num timesteps 37400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 187/10000 episodes, total num timesteps 37600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 188/10000 episodes, total num timesteps 37800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 189/10000 episodes, total num timesteps 38000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 190/10000 episodes, total num timesteps 38200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 191/10000 episodes, total num timesteps 38400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 192/10000 episodes, total num timesteps 38600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 193/10000 episodes, total num timesteps 38800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 194/10000 episodes, total num timesteps 39000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 195/10000 episodes, total num timesteps 39200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 196/10000 episodes, total num timesteps 39400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 197/10000 episodes, total num timesteps 39600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 198/10000 episodes, total num timesteps 39800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 199/10000 episodes, total num timesteps 40000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 200/10000 episodes, total num timesteps 40200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: -0.06647982234568466
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 2
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: -0.0852428619184801
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.06669233200049828
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 2
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: -0.10209314532890117
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: -0.0030241952882152745
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 4
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: -0.046750760722593085
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 2
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: -0.08653162645792666
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 0
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: -0.050053561358629484
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 2
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: 0.023603931800852126
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 4
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.08647149225361506
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 0
idv_policy eval team catch total num: 0

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 201/10000 episodes, total num timesteps 40400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 202/10000 episodes, total num timesteps 40600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 203/10000 episodes, total num timesteps 40800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 204/10000 episodes, total num timesteps 41000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 205/10000 episodes, total num timesteps 41200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 206/10000 episodes, total num timesteps 41400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 207/10000 episodes, total num timesteps 41600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 208/10000 episodes, total num timesteps 41800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 209/10000 episodes, total num timesteps 42000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 210/10000 episodes, total num timesteps 42200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 211/10000 episodes, total num timesteps 42400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 212/10000 episodes, total num timesteps 42600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 213/10000 episodes, total num timesteps 42800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 214/10000 episodes, total num timesteps 43000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 215/10000 episodes, total num timesteps 43200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 216/10000 episodes, total num timesteps 43400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 217/10000 episodes, total num timesteps 43600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 218/10000 episodes, total num timesteps 43800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 219/10000 episodes, total num timesteps 44000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 220/10000 episodes, total num timesteps 44200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 221/10000 episodes, total num timesteps 44400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 222/10000 episodes, total num timesteps 44600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 223/10000 episodes, total num timesteps 44800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 224/10000 episodes, total num timesteps 45000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 225/10000 episodes, total num timesteps 45200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: -0.09739369294523373
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: -0.0032798520670947262
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 4
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.11322487664451569
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: -0.06979529590324798
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: -0.03086915951888591
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 2
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: 0.10656941571361508
idv_policy eval average team episode rewards of agent0: 2.5
idv_policy eval idv catch total num of agent0: 8
idv_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent1: -0.06820446181748721
idv_policy eval average team episode rewards of agent1: 2.5
idv_policy eval idv catch total num of agent1: 1
idv_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent2: 0.10469042732101269
idv_policy eval average team episode rewards of agent2: 2.5
idv_policy eval idv catch total num of agent2: 7
idv_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent3: -0.07307333103617104
idv_policy eval average team episode rewards of agent3: 2.5
idv_policy eval idv catch total num of agent3: 1
idv_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent4: 0.06822691578136306
idv_policy eval average team episode rewards of agent4: 2.5
idv_policy eval idv catch total num of agent4: 6
idv_policy eval team catch total num: 1
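Across the eval checkpoints printed so far, the average team episode reward tracks the team catch count at exactly 2.5 per catch. This is an observation read off this log, not a confirmed reward definition in the environment code; a quick consistency check over the (team catches, team episode reward) pairs from the blocks above:

```python
# (team catch total, avg team episode reward) pairs read off the eval
# blocks at updates 125, 150, 175, 200, and 225, team_policy then
# idv_policy at each checkpoint.
checkpoints = [
    (0, 0.0), (0, 0.0),   # update 125
    (1, 2.5), (0, 0.0),   # update 150
    (1, 2.5), (2, 5.0),   # update 175
    (0, 0.0), (0, 0.0),   # update 200
    (0, 0.0), (1, 2.5),   # update 225
]
ratio_holds = all(reward == 2.5 * catches for catches, reward in checkpoints)
print(ratio_holds)  # True
```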

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 226/10000 episodes, total num timesteps 45400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 227/10000 episodes, total num timesteps 45600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 228/10000 episodes, total num timesteps 45800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 229/10000 episodes, total num timesteps 46000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 230/10000 episodes, total num timesteps 46200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 231/10000 episodes, total num timesteps 46400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 232/10000 episodes, total num timesteps 46600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 233/10000 episodes, total num timesteps 46800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 234/10000 episodes, total num timesteps 47000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 235/10000 episodes, total num timesteps 47200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 236/10000 episodes, total num timesteps 47400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 237/10000 episodes, total num timesteps 47600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 238/10000 episodes, total num timesteps 47800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 239/10000 episodes, total num timesteps 48000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 240/10000 episodes, total num timesteps 48200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 241/10000 episodes, total num timesteps 48400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 242/10000 episodes, total num timesteps 48600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 243/10000 episodes, total num timesteps 48800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 244/10000 episodes, total num timesteps 49000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 245/10000 episodes, total num timesteps 49200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 246/10000 episodes, total num timesteps 49400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 247/10000 episodes, total num timesteps 49600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 248/10000 episodes, total num timesteps 49800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 249/10000 episodes, total num timesteps 50000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 250/10000 episodes, total num timesteps 50200/2000000, FPS 332.

team_policy eval average step individual rewards of agent0: -0.09552588980577673
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: -0.13836971785822474
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.10333929790800564
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: -0.07768645399288834
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: -0.12597593823921358
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 0
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: 0.11074233177732841
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 8
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: 0.040323796637816496
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 5
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: -0.059739011445368154
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: -0.015229236819090409
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 3
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.07268706406643859
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 1
idv_policy eval team catch total num: 0

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 251/10000 episodes, total num timesteps 50400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 252/10000 episodes, total num timesteps 50600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 253/10000 episodes, total num timesteps 50800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 254/10000 episodes, total num timesteps 51000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 255/10000 episodes, total num timesteps 51200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 256/10000 episodes, total num timesteps 51400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 257/10000 episodes, total num timesteps 51600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 258/10000 episodes, total num timesteps 51800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 259/10000 episodes, total num timesteps 52000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 260/10000 episodes, total num timesteps 52200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 261/10000 episodes, total num timesteps 52400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 262/10000 episodes, total num timesteps 52600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 263/10000 episodes, total num timesteps 52800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 264/10000 episodes, total num timesteps 53000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 265/10000 episodes, total num timesteps 53200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 266/10000 episodes, total num timesteps 53400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 267/10000 episodes, total num timesteps 53600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 268/10000 episodes, total num timesteps 53800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 269/10000 episodes, total num timesteps 54000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 270/10000 episodes, total num timesteps 54200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 271/10000 episodes, total num timesteps 54400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 272/10000 episodes, total num timesteps 54600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 273/10000 episodes, total num timesteps 54800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 274/10000 episodes, total num timesteps 55000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 275/10000 episodes, total num timesteps 55200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.12919527106781487
team_policy eval average team episode rewards of agent0: 2.5
team_policy eval idv catch total num of agent0: 8
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent1: -0.006880951384099583
team_policy eval average team episode rewards of agent1: 2.5
team_policy eval idv catch total num of agent1: 3
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent2: -0.026614161591031388
team_policy eval average team episode rewards of agent2: 2.5
team_policy eval idv catch total num of agent2: 2
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent3: -0.011341623011924859
team_policy eval average team episode rewards of agent3: 2.5
team_policy eval idv catch total num of agent3: 3
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent4: 0.05256652078949307
team_policy eval average team episode rewards of agent4: 2.5
team_policy eval idv catch total num of agent4: 5
team_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent0: 0.023929993309088328
idv_policy eval average team episode rewards of agent0: 5.0
idv_policy eval idv catch total num of agent0: 4
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent1: 0.06861125042094823
idv_policy eval average team episode rewards of agent1: 5.0
idv_policy eval idv catch total num of agent1: 6
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent2: 0.012745857213828721
idv_policy eval average team episode rewards of agent2: 5.0
idv_policy eval idv catch total num of agent2: 4
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent3: 0.11289112092335535
idv_policy eval average team episode rewards of agent3: 5.0
idv_policy eval idv catch total num of agent3: 7
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent4: -0.008528026197974426
idv_policy eval average team episode rewards of agent4: 5.0
idv_policy eval idv catch total num of agent4: 3
idv_policy eval team catch total num: 2

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 276/10000 episodes, total num timesteps 55400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 277/10000 episodes, total num timesteps 55600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 278/10000 episodes, total num timesteps 55800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 279/10000 episodes, total num timesteps 56000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 280/10000 episodes, total num timesteps 56200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 281/10000 episodes, total num timesteps 56400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 282/10000 episodes, total num timesteps 56600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 283/10000 episodes, total num timesteps 56800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 284/10000 episodes, total num timesteps 57000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 285/10000 episodes, total num timesteps 57200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 286/10000 episodes, total num timesteps 57400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 287/10000 episodes, total num timesteps 57600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 288/10000 episodes, total num timesteps 57800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 289/10000 episodes, total num timesteps 58000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 290/10000 episodes, total num timesteps 58200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 291/10000 episodes, total num timesteps 58400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 292/10000 episodes, total num timesteps 58600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 293/10000 episodes, total num timesteps 58800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 294/10000 episodes, total num timesteps 59000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 295/10000 episodes, total num timesteps 59200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 296/10000 episodes, total num timesteps 59400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 297/10000 episodes, total num timesteps 59600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 298/10000 episodes, total num timesteps 59800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 299/10000 episodes, total num timesteps 60000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 300/10000 episodes, total num timesteps 60200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: -0.03720964869774841
team_policy eval average team episode rewards of agent0: 2.5
team_policy eval idv catch total num of agent0: 2
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent1: -0.03433519718427815
team_policy eval average team episode rewards of agent1: 2.5
team_policy eval idv catch total num of agent1: 2
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent2: -0.09225715292881999
team_policy eval average team episode rewards of agent2: 2.5
team_policy eval idv catch total num of agent2: 0
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent3: -0.06390785222314839
team_policy eval average team episode rewards of agent3: 2.5
team_policy eval idv catch total num of agent3: 1
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent4: -0.05947471616368991
team_policy eval average team episode rewards of agent4: 2.5
team_policy eval idv catch total num of agent4: 1
team_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent0: -0.052731172138132255
idv_policy eval average team episode rewards of agent0: 5.0
idv_policy eval idv catch total num of agent0: 1
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent1: 0.0370259696657337
idv_policy eval average team episode rewards of agent1: 5.0
idv_policy eval idv catch total num of agent1: 5
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent2: -0.026972105471112093
idv_policy eval average team episode rewards of agent2: 5.0
idv_policy eval idv catch total num of agent2: 2
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent3: -0.0033497711570674004
idv_policy eval average team episode rewards of agent3: 5.0
idv_policy eval idv catch total num of agent3: 3
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent4: -0.02827903584012155
idv_policy eval average team episode rewards of agent4: 5.0
idv_policy eval idv catch total num of agent4: 2
idv_policy eval team catch total num: 2

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 301/10000 episodes, total num timesteps 60400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 302/10000 episodes, total num timesteps 60600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 303/10000 episodes, total num timesteps 60800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 304/10000 episodes, total num timesteps 61000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 305/10000 episodes, total num timesteps 61200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 306/10000 episodes, total num timesteps 61400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 307/10000 episodes, total num timesteps 61600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 308/10000 episodes, total num timesteps 61800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 309/10000 episodes, total num timesteps 62000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 310/10000 episodes, total num timesteps 62200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 311/10000 episodes, total num timesteps 62400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 312/10000 episodes, total num timesteps 62600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 313/10000 episodes, total num timesteps 62800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 314/10000 episodes, total num timesteps 63000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 315/10000 episodes, total num timesteps 63200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 316/10000 episodes, total num timesteps 63400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 317/10000 episodes, total num timesteps 63600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 318/10000 episodes, total num timesteps 63800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 319/10000 episodes, total num timesteps 64000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 320/10000 episodes, total num timesteps 64200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 321/10000 episodes, total num timesteps 64400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 322/10000 episodes, total num timesteps 64600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 323/10000 episodes, total num timesteps 64800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 324/10000 episodes, total num timesteps 65000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 325/10000 episodes, total num timesteps 65200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: -0.04106705406294742
team_policy eval average team episode rewards of agent0: 7.5
team_policy eval idv catch total num of agent0: 1
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent1: 0.05924594803580715
team_policy eval average team episode rewards of agent1: 7.5
team_policy eval idv catch total num of agent1: 5
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent2: 0.08276086730181387
team_policy eval average team episode rewards of agent2: 7.5
team_policy eval idv catch total num of agent2: 6
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent3: 0.16932108187449335
team_policy eval average team episode rewards of agent3: 7.5
team_policy eval idv catch total num of agent3: 9
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent4: 0.03354100651706568
team_policy eval average team episode rewards of agent4: 7.5
team_policy eval idv catch total num of agent4: 4
team_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent0: 0.0769953153060369
idv_policy eval average team episode rewards of agent0: 10.0
idv_policy eval idv catch total num of agent0: 6
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent1: -0.026962217105488163
idv_policy eval average team episode rewards of agent1: 10.0
idv_policy eval idv catch total num of agent1: 2
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent2: -0.024487391496972776
idv_policy eval average team episode rewards of agent2: 10.0
idv_policy eval idv catch total num of agent2: 2
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent3: 0.08330786324426953
idv_policy eval average team episode rewards of agent3: 10.0
idv_policy eval idv catch total num of agent3: 6
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent4: 0.05575799299826991
idv_policy eval average team episode rewards of agent4: 10.0
idv_policy eval idv catch total num of agent4: 5
idv_policy eval team catch total num: 4

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 326/10000 episodes, total num timesteps 65400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 327/10000 episodes, total num timesteps 65600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 328/10000 episodes, total num timesteps 65800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 329/10000 episodes, total num timesteps 66000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 330/10000 episodes, total num timesteps 66200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 331/10000 episodes, total num timesteps 66400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 332/10000 episodes, total num timesteps 66600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 333/10000 episodes, total num timesteps 66800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 334/10000 episodes, total num timesteps 67000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 335/10000 episodes, total num timesteps 67200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 336/10000 episodes, total num timesteps 67400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 337/10000 episodes, total num timesteps 67600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 338/10000 episodes, total num timesteps 67800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 339/10000 episodes, total num timesteps 68000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 340/10000 episodes, total num timesteps 68200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 341/10000 episodes, total num timesteps 68400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 342/10000 episodes, total num timesteps 68600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 343/10000 episodes, total num timesteps 68800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 344/10000 episodes, total num timesteps 69000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 345/10000 episodes, total num timesteps 69200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 346/10000 episodes, total num timesteps 69400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 347/10000 episodes, total num timesteps 69600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 348/10000 episodes, total num timesteps 69800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 349/10000 episodes, total num timesteps 70000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 350/10000 episodes, total num timesteps 70200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: -0.08153440497941103
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: -0.07829964068643601
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.08233798706668867
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: -0.04595352607611442
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 1
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: 0.15358153934248026
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 9
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: 0.02115930036659089
idv_policy eval average team episode rewards of agent0: 2.5
idv_policy eval idv catch total num of agent0: 4
idv_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent1: -0.0701694363379257
idv_policy eval average team episode rewards of agent1: 2.5
idv_policy eval idv catch total num of agent1: 0
idv_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent2: -0.01131623401468866
idv_policy eval average team episode rewards of agent2: 2.5
idv_policy eval idv catch total num of agent2: 3
idv_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent3: -0.05186320854597426
idv_policy eval average team episode rewards of agent3: 2.5
idv_policy eval idv catch total num of agent3: 1
idv_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent4: -0.0403063068632137
idv_policy eval average team episode rewards of agent4: 2.5
idv_policy eval idv catch total num of agent4: 2
idv_policy eval team catch total num: 1

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 351/10000 episodes, total num timesteps 70400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 352/10000 episodes, total num timesteps 70600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 353/10000 episodes, total num timesteps 70800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 354/10000 episodes, total num timesteps 71000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 355/10000 episodes, total num timesteps 71200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 356/10000 episodes, total num timesteps 71400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 357/10000 episodes, total num timesteps 71600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 358/10000 episodes, total num timesteps 71800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 359/10000 episodes, total num timesteps 72000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 360/10000 episodes, total num timesteps 72200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 361/10000 episodes, total num timesteps 72400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 362/10000 episodes, total num timesteps 72600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 363/10000 episodes, total num timesteps 72800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 364/10000 episodes, total num timesteps 73000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 365/10000 episodes, total num timesteps 73200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 366/10000 episodes, total num timesteps 73400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 367/10000 episodes, total num timesteps 73600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 368/10000 episodes, total num timesteps 73800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 369/10000 episodes, total num timesteps 74000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 370/10000 episodes, total num timesteps 74200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 371/10000 episodes, total num timesteps 74400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 372/10000 episodes, total num timesteps 74600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 373/10000 episodes, total num timesteps 74800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 374/10000 episodes, total num timesteps 75000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 375/10000 episodes, total num timesteps 75200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.018145156674060088
team_policy eval average team episode rewards of agent0: 5.0
team_policy eval idv catch total num of agent0: 4
team_policy eval team catch total num: 2
team_policy eval average step individual rewards of agent1: 0.10315849894689201
team_policy eval average team episode rewards of agent1: 5.0
team_policy eval idv catch total num of agent1: 7
team_policy eval team catch total num: 2
team_policy eval average step individual rewards of agent2: -0.08349245735885821
team_policy eval average team episode rewards of agent2: 5.0
team_policy eval idv catch total num of agent2: 0
team_policy eval team catch total num: 2
team_policy eval average step individual rewards of agent3: 0.09586171583717902
team_policy eval average team episode rewards of agent3: 5.0
team_policy eval idv catch total num of agent3: 7
team_policy eval team catch total num: 2
team_policy eval average step individual rewards of agent4: -0.026683052147159395
team_policy eval average team episode rewards of agent4: 5.0
team_policy eval idv catch total num of agent4: 2
team_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent0: 0.020575617503304163
idv_policy eval average team episode rewards of agent0: 7.5
idv_policy eval idv catch total num of agent0: 4
idv_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent1: 0.1553375548326984
idv_policy eval average team episode rewards of agent1: 7.5
idv_policy eval idv catch total num of agent1: 9
idv_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent2: 0.024704916576079726
idv_policy eval average team episode rewards of agent2: 7.5
idv_policy eval idv catch total num of agent2: 4
idv_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent3: 0.07582814928447819
idv_policy eval average team episode rewards of agent3: 7.5
idv_policy eval idv catch total num of agent3: 6
idv_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent4: 0.025378863876886832
idv_policy eval average team episode rewards of agent4: 7.5
idv_policy eval idv catch total num of agent4: 4
idv_policy eval team catch total num: 3
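
The eval blocks above follow a fixed per-agent line format, so they can be scraped with a few regexes. A minimal sketch, assuming the raw text of one block is available in a string (the pattern strings mirror the log lines verbatim; `parse_eval_block` is a hypothetical helper, not part of the training code):

```python
import re

# Patterns copied from the log's own wording; each captures
# (policy name, agent index, numeric value).
PATTERNS = {
    "idv_reward": re.compile(
        r"(\w+_policy) eval average step individual rewards of agent(\d+): (-?[\d.]+)"),
    "team_reward": re.compile(
        r"(\w+_policy) eval average team episode rewards of agent(\d+): (-?[\d.]+)"),
    "idv_catch": re.compile(
        r"(\w+_policy) eval idv catch total num of agent(\d+): (\d+)"),
}

def parse_eval_block(log_text):
    """Return {policy: {agent_id: {metric: value}}} for one eval block."""
    out = {}
    for metric, pat in PATTERNS.items():
        for policy, agent, value in pat.findall(log_text):
            out.setdefault(policy, {}).setdefault(int(agent), {})[metric] = float(value)
    return out
```

This keeps `team_policy` and `idv_policy` results separate, which matters below where the two policies are evaluated back to back at each checkpoint.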

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 376/10000 episodes, total num timesteps 75400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 377/10000 episodes, total num timesteps 75600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 378/10000 episodes, total num timesteps 75800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 379/10000 episodes, total num timesteps 76000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 380/10000 episodes, total num timesteps 76200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 381/10000 episodes, total num timesteps 76400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 382/10000 episodes, total num timesteps 76600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 383/10000 episodes, total num timesteps 76800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 384/10000 episodes, total num timesteps 77000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 385/10000 episodes, total num timesteps 77200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 386/10000 episodes, total num timesteps 77400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 387/10000 episodes, total num timesteps 77600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 388/10000 episodes, total num timesteps 77800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 389/10000 episodes, total num timesteps 78000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 390/10000 episodes, total num timesteps 78200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 391/10000 episodes, total num timesteps 78400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 392/10000 episodes, total num timesteps 78600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 393/10000 episodes, total num timesteps 78800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 394/10000 episodes, total num timesteps 79000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 395/10000 episodes, total num timesteps 79200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 396/10000 episodes, total num timesteps 79400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 397/10000 episodes, total num timesteps 79600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 398/10000 episodes, total num timesteps 79800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 399/10000 episodes, total num timesteps 80000/2000000, FPS 333.
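
Each progress line carries enough to estimate remaining wall time. A small sketch using the figures from the line above (80000/2000000 timesteps at FPS 333):

```python
# ETA from one progress line: timesteps done, total budget, and throughput.
done, total, fps = 80_000, 2_000_000, 333

remaining_s = (total - done) / fps        # seconds left at current throughput
print(round(remaining_s / 3600, 2))       # ≈ 1.6 hours remaining
```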


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 400/10000 episodes, total num timesteps 80200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.006713524366192161
team_policy eval average team episode rewards of agent0: 0.0
team_policy eval idv catch total num of agent0: 3
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent1: -0.0694980039167981
team_policy eval average team episode rewards of agent1: 0.0
team_policy eval idv catch total num of agent1: 0
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent2: -0.019409853600860547
team_policy eval average team episode rewards of agent2: 0.0
team_policy eval idv catch total num of agent2: 2
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent3: 0.1099425927260312
team_policy eval average team episode rewards of agent3: 0.0
team_policy eval idv catch total num of agent3: 7
team_policy eval team catch total num: 0
team_policy eval average step individual rewards of agent4: -0.065446077753935
team_policy eval average team episode rewards of agent4: 0.0
team_policy eval idv catch total num of agent4: 0
team_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent0: 0.0006349094999464899
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 3
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: -0.03709514610468448
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 2
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: 0.04837919352372751
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 5
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: -0.06387403999762328
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.008145698252711836
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 3
idv_policy eval team catch total num: 0

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 401/10000 episodes, total num timesteps 80400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 402/10000 episodes, total num timesteps 80600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 403/10000 episodes, total num timesteps 80800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 404/10000 episodes, total num timesteps 81000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 405/10000 episodes, total num timesteps 81200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 406/10000 episodes, total num timesteps 81400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 407/10000 episodes, total num timesteps 81600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 408/10000 episodes, total num timesteps 81800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 409/10000 episodes, total num timesteps 82000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 410/10000 episodes, total num timesteps 82200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 411/10000 episodes, total num timesteps 82400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 412/10000 episodes, total num timesteps 82600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 413/10000 episodes, total num timesteps 82800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 414/10000 episodes, total num timesteps 83000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 415/10000 episodes, total num timesteps 83200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 416/10000 episodes, total num timesteps 83400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 417/10000 episodes, total num timesteps 83600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 418/10000 episodes, total num timesteps 83800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 419/10000 episodes, total num timesteps 84000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 420/10000 episodes, total num timesteps 84200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 421/10000 episodes, total num timesteps 84400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 422/10000 episodes, total num timesteps 84600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 423/10000 episodes, total num timesteps 84800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 424/10000 episodes, total num timesteps 85000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 425/10000 episodes, total num timesteps 85200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.05854191565928698
team_policy eval average team episode rewards of agent0: 2.5
team_policy eval idv catch total num of agent0: 5
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent1: -0.032032417693136896
team_policy eval average team episode rewards of agent1: 2.5
team_policy eval idv catch total num of agent1: 1
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent2: 0.0638355206327811
team_policy eval average team episode rewards of agent2: 2.5
team_policy eval idv catch total num of agent2: 5
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent3: 0.006884308724460339
team_policy eval average team episode rewards of agent3: 2.5
team_policy eval idv catch total num of agent3: 3
team_policy eval team catch total num: 1
team_policy eval average step individual rewards of agent4: 0.09276573787375014
team_policy eval average team episode rewards of agent4: 2.5
team_policy eval idv catch total num of agent4: 6
team_policy eval team catch total num: 1
idv_policy eval average step individual rewards of agent0: -0.015342724943167884
idv_policy eval average team episode rewards of agent0: 10.0
idv_policy eval idv catch total num of agent0: 2
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent1: 0.05389982546458105
idv_policy eval average team episode rewards of agent1: 10.0
idv_policy eval idv catch total num of agent1: 5
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent2: 0.05549409551671582
idv_policy eval average team episode rewards of agent2: 10.0
idv_policy eval idv catch total num of agent2: 5
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent3: -0.016902906432930277
idv_policy eval average team episode rewards of agent3: 10.0
idv_policy eval idv catch total num of agent3: 2
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent4: 0.18390537933019158
idv_policy eval average team episode rewards of agent4: 10.0
idv_policy eval idv catch total num of agent4: 10
idv_policy eval team catch total num: 4

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 426/10000 episodes, total num timesteps 85400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 427/10000 episodes, total num timesteps 85600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 428/10000 episodes, total num timesteps 85800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 429/10000 episodes, total num timesteps 86000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 430/10000 episodes, total num timesteps 86200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 431/10000 episodes, total num timesteps 86400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 432/10000 episodes, total num timesteps 86600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 433/10000 episodes, total num timesteps 86800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 434/10000 episodes, total num timesteps 87000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 435/10000 episodes, total num timesteps 87200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 436/10000 episodes, total num timesteps 87400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 437/10000 episodes, total num timesteps 87600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 438/10000 episodes, total num timesteps 87800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 439/10000 episodes, total num timesteps 88000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 440/10000 episodes, total num timesteps 88200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 441/10000 episodes, total num timesteps 88400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 442/10000 episodes, total num timesteps 88600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 443/10000 episodes, total num timesteps 88800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 444/10000 episodes, total num timesteps 89000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 445/10000 episodes, total num timesteps 89200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 446/10000 episodes, total num timesteps 89400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 447/10000 episodes, total num timesteps 89600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 448/10000 episodes, total num timesteps 89800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 449/10000 episodes, total num timesteps 90000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 450/10000 episodes, total num timesteps 90200/2000000, FPS 332.

team_policy eval average step individual rewards of agent0: 0.08495088134186676
team_policy eval average team episode rewards of agent0: 10.0
team_policy eval idv catch total num of agent0: 6
team_policy eval team catch total num: 4
team_policy eval average step individual rewards of agent1: 0.03567194466648334
team_policy eval average team episode rewards of agent1: 10.0
team_policy eval idv catch total num of agent1: 4
team_policy eval team catch total num: 4
team_policy eval average step individual rewards of agent2: 0.06118921867198047
team_policy eval average team episode rewards of agent2: 10.0
team_policy eval idv catch total num of agent2: 5
team_policy eval team catch total num: 4
team_policy eval average step individual rewards of agent3: -0.011437295346748848
team_policy eval average team episode rewards of agent3: 10.0
team_policy eval idv catch total num of agent3: 2
team_policy eval team catch total num: 4
team_policy eval average step individual rewards of agent4: 0.06291513448650977
team_policy eval average team episode rewards of agent4: 10.0
team_policy eval idv catch total num of agent4: 5
team_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent0: -0.017692619924645627
idv_policy eval average team episode rewards of agent0: 10.0
idv_policy eval idv catch total num of agent0: 2
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent1: 0.1589508872161658
idv_policy eval average team episode rewards of agent1: 10.0
idv_policy eval idv catch total num of agent1: 9
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent2: 0.14023757357184055
idv_policy eval average team episode rewards of agent2: 10.0
idv_policy eval idv catch total num of agent2: 8
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent3: 0.058352356759573815
idv_policy eval average team episode rewards of agent3: 10.0
idv_policy eval idv catch total num of agent3: 5
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent4: 0.06210445744763987
idv_policy eval average team episode rewards of agent4: 10.0
idv_policy eval idv catch total num of agent4: 5
idv_policy eval team catch total num: 4

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 451/10000 episodes, total num timesteps 90400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 452/10000 episodes, total num timesteps 90600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 453/10000 episodes, total num timesteps 90800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 454/10000 episodes, total num timesteps 91000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 455/10000 episodes, total num timesteps 91200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 456/10000 episodes, total num timesteps 91400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 457/10000 episodes, total num timesteps 91600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 458/10000 episodes, total num timesteps 91800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 459/10000 episodes, total num timesteps 92000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 460/10000 episodes, total num timesteps 92200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 461/10000 episodes, total num timesteps 92400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 462/10000 episodes, total num timesteps 92600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 463/10000 episodes, total num timesteps 92800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 464/10000 episodes, total num timesteps 93000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 465/10000 episodes, total num timesteps 93200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 466/10000 episodes, total num timesteps 93400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 467/10000 episodes, total num timesteps 93600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 468/10000 episodes, total num timesteps 93800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 469/10000 episodes, total num timesteps 94000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 470/10000 episodes, total num timesteps 94200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 471/10000 episodes, total num timesteps 94400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 472/10000 episodes, total num timesteps 94600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 473/10000 episodes, total num timesteps 94800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 474/10000 episodes, total num timesteps 95000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 475/10000 episodes, total num timesteps 95200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.23820712882451814
team_policy eval average team episode rewards of agent0: 17.5
team_policy eval idv catch total num of agent0: 12
team_policy eval team catch total num: 7
team_policy eval average step individual rewards of agent1: 0.006189513042506787
team_policy eval average team episode rewards of agent1: 17.5
team_policy eval idv catch total num of agent1: 3
team_policy eval team catch total num: 7
team_policy eval average step individual rewards of agent2: 0.11605720064701595
team_policy eval average team episode rewards of agent2: 17.5
team_policy eval idv catch total num of agent2: 7
team_policy eval team catch total num: 7
team_policy eval average step individual rewards of agent3: 0.001580977757465678
team_policy eval average team episode rewards of agent3: 17.5
team_policy eval idv catch total num of agent3: 3
team_policy eval team catch total num: 7
team_policy eval average step individual rewards of agent4: 0.1372380792777733
team_policy eval average team episode rewards of agent4: 17.5
team_policy eval idv catch total num of agent4: 8
team_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent0: 0.1797769660946391
idv_policy eval average team episode rewards of agent0: 20.0
idv_policy eval idv catch total num of agent0: 9
idv_policy eval team catch total num: 8
idv_policy eval average step individual rewards of agent1: 0.2303788927428659
idv_policy eval average team episode rewards of agent1: 20.0
idv_policy eval idv catch total num of agent1: 11
idv_policy eval team catch total num: 8
idv_policy eval average step individual rewards of agent2: 0.14232988382439238
idv_policy eval average team episode rewards of agent2: 20.0
idv_policy eval idv catch total num of agent2: 8
idv_policy eval team catch total num: 8
idv_policy eval average step individual rewards of agent3: 0.1485214725394582
idv_policy eval average team episode rewards of agent3: 20.0
idv_policy eval idv catch total num of agent3: 8
idv_policy eval team catch total num: 8
idv_policy eval average step individual rewards of agent4: 0.11972036390415765
idv_policy eval average team episode rewards of agent4: 20.0
idv_policy eval idv catch total num of agent4: 7
idv_policy eval team catch total num: 8

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 476/10000 episodes, total num timesteps 95400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 477/10000 episodes, total num timesteps 95600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 478/10000 episodes, total num timesteps 95800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 479/10000 episodes, total num timesteps 96000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 480/10000 episodes, total num timesteps 96200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 481/10000 episodes, total num timesteps 96400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 482/10000 episodes, total num timesteps 96600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 483/10000 episodes, total num timesteps 96800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 484/10000 episodes, total num timesteps 97000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 485/10000 episodes, total num timesteps 97200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 486/10000 episodes, total num timesteps 97400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 487/10000 episodes, total num timesteps 97600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 488/10000 episodes, total num timesteps 97800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 489/10000 episodes, total num timesteps 98000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 490/10000 episodes, total num timesteps 98200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 491/10000 episodes, total num timesteps 98400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 492/10000 episodes, total num timesteps 98600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 493/10000 episodes, total num timesteps 98800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 494/10000 episodes, total num timesteps 99000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 495/10000 episodes, total num timesteps 99200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 496/10000 episodes, total num timesteps 99400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 497/10000 episodes, total num timesteps 99600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 498/10000 episodes, total num timesteps 99800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 499/10000 episodes, total num timesteps 100000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 500/10000 episodes, total num timesteps 100200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.22082611851974307
team_policy eval average team episode rewards of agent0: 27.5
team_policy eval idv catch total num of agent0: 11
team_policy eval team catch total num: 11
team_policy eval average step individual rewards of agent1: 0.2736918208986179
team_policy eval average team episode rewards of agent1: 27.5
team_policy eval idv catch total num of agent1: 13
team_policy eval team catch total num: 11
team_policy eval average step individual rewards of agent2: 0.11949934387420248
team_policy eval average team episode rewards of agent2: 27.5
team_policy eval idv catch total num of agent2: 7
team_policy eval team catch total num: 11
team_policy eval average step individual rewards of agent3: 0.11573401800319899
team_policy eval average team episode rewards of agent3: 27.5
team_policy eval idv catch total num of agent3: 7
team_policy eval team catch total num: 11
team_policy eval average step individual rewards of agent4: 0.1687136229049002
team_policy eval average team episode rewards of agent4: 27.5
team_policy eval idv catch total num of agent4: 9
team_policy eval team catch total num: 11
idv_policy eval average step individual rewards of agent0: -0.01658434204266819
idv_policy eval average team episode rewards of agent0: 7.5
idv_policy eval idv catch total num of agent0: 2
idv_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent1: 0.09157709766045091
idv_policy eval average team episode rewards of agent1: 7.5
idv_policy eval idv catch total num of agent1: 6
idv_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent2: 0.01156872557160296
idv_policy eval average team episode rewards of agent2: 7.5
idv_policy eval idv catch total num of agent2: 3
idv_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent3: 0.009069088332814855
idv_policy eval average team episode rewards of agent3: 7.5
idv_policy eval idv catch total num of agent3: 3
idv_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent4: 0.031322187462102524
idv_policy eval average team episode rewards of agent4: 7.5
idv_policy eval idv catch total num of agent4: 4
idv_policy eval team catch total num: 3
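
The per-agent eval lines above follow a fixed format, so they can be pulled into structured records with a small script. This is a sketch of our own (the regex and the name `parse_eval_lines` are not part of the training code); it matches only the `<policy> eval <metric> of agent<N>: <value>` lines and skips the agent-less `team catch total num` lines.

```python
import re

# Matches lines like:
#   team_policy eval average step individual rewards of agent0: 0.2208...
# The alternation lists exactly the three per-agent metrics in this log.
METRIC_RE = re.compile(
    r"(?P<policy>team_policy|idv_policy) eval "
    r"(?P<metric>average step individual rewards|"
    r"average team episode rewards|idv catch total num) "
    r"of agent(?P<agent>\d+): (?P<value>-?[\d.]+)"
)

def parse_eval_lines(lines):
    """Collect per-policy, per-agent eval metrics from raw log lines."""
    records = []
    for line in lines:
        m = METRIC_RE.match(line.strip())
        if m:
            records.append({
                "policy": m.group("policy"),
                "metric": m.group("metric"),
                "agent": int(m.group("agent")),
                "value": float(m.group("value")),
            })
    return records

# Sample lines copied verbatim from the update-500 eval block above.
sample = [
    "team_policy eval average step individual rewards of agent0: 0.22082611851974307",
    "team_policy eval average team episode rewards of agent0: 27.5",
    "team_policy eval idv catch total num of agent0: 11",
    "team_policy eval team catch total num: 11",  # no agent id -> skipped
]
print(parse_eval_lines(sample))
```

Feeding the whole log through this turns each eval block into rows that are easy to tabulate or plot alongside the wandb curves.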

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 501/10000 episodes, total num timesteps 100400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 502/10000 episodes, total num timesteps 100600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 503/10000 episodes, total num timesteps 100800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 504/10000 episodes, total num timesteps 101000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 505/10000 episodes, total num timesteps 101200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 506/10000 episodes, total num timesteps 101400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 507/10000 episodes, total num timesteps 101600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 508/10000 episodes, total num timesteps 101800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 509/10000 episodes, total num timesteps 102000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 510/10000 episodes, total num timesteps 102200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 511/10000 episodes, total num timesteps 102400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 512/10000 episodes, total num timesteps 102600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 513/10000 episodes, total num timesteps 102800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 514/10000 episodes, total num timesteps 103000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 515/10000 episodes, total num timesteps 103200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 516/10000 episodes, total num timesteps 103400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 517/10000 episodes, total num timesteps 103600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 518/10000 episodes, total num timesteps 103800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 519/10000 episodes, total num timesteps 104000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 520/10000 episodes, total num timesteps 104200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 521/10000 episodes, total num timesteps 104400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 522/10000 episodes, total num timesteps 104600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 523/10000 episodes, total num timesteps 104800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 524/10000 episodes, total num timesteps 105000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 525/10000 episodes, total num timesteps 105200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: -0.009342189311256228
team_policy eval average team episode rewards of agent0: 7.5
team_policy eval idv catch total num of agent0: 2
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent1: 0.09586824904523404
team_policy eval average team episode rewards of agent1: 7.5
team_policy eval idv catch total num of agent1: 6
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent2: 0.06792293857089556
team_policy eval average team episode rewards of agent2: 7.5
team_policy eval idv catch total num of agent2: 5
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent3: -0.007066561177808537
team_policy eval average team episode rewards of agent3: 7.5
team_policy eval idv catch total num of agent3: 2
team_policy eval team catch total num: 3
team_policy eval average step individual rewards of agent4: 0.019795219124989414
team_policy eval average team episode rewards of agent4: 7.5
team_policy eval idv catch total num of agent4: 3
team_policy eval team catch total num: 3
idv_policy eval average step individual rewards of agent0: -0.044065837152343
idv_policy eval average team episode rewards of agent0: 0.0
idv_policy eval idv catch total num of agent0: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent1: 0.0021882525171275312
idv_policy eval average team episode rewards of agent1: 0.0
idv_policy eval idv catch total num of agent1: 3
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent2: -0.04878330369366434
idv_policy eval average team episode rewards of agent2: 0.0
idv_policy eval idv catch total num of agent2: 1
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent3: 0.034413924288989976
idv_policy eval average team episode rewards of agent3: 0.0
idv_policy eval idv catch total num of agent3: 4
idv_policy eval team catch total num: 0
idv_policy eval average step individual rewards of agent4: -0.07466042952143284
idv_policy eval average team episode rewards of agent4: 0.0
idv_policy eval idv catch total num of agent4: 0
idv_policy eval team catch total num: 0

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 526/10000 episodes, total num timesteps 105400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 527/10000 episodes, total num timesteps 105600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 528/10000 episodes, total num timesteps 105800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 529/10000 episodes, total num timesteps 106000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 530/10000 episodes, total num timesteps 106200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 531/10000 episodes, total num timesteps 106400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 532/10000 episodes, total num timesteps 106600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 533/10000 episodes, total num timesteps 106800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 534/10000 episodes, total num timesteps 107000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 535/10000 episodes, total num timesteps 107200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 536/10000 episodes, total num timesteps 107400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 537/10000 episodes, total num timesteps 107600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 538/10000 episodes, total num timesteps 107800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 539/10000 episodes, total num timesteps 108000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 540/10000 episodes, total num timesteps 108200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 541/10000 episodes, total num timesteps 108400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 542/10000 episodes, total num timesteps 108600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 543/10000 episodes, total num timesteps 108800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 544/10000 episodes, total num timesteps 109000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 545/10000 episodes, total num timesteps 109200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 546/10000 episodes, total num timesteps 109400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 547/10000 episodes, total num timesteps 109600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 548/10000 episodes, total num timesteps 109800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 549/10000 episodes, total num timesteps 110000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 550/10000 episodes, total num timesteps 110200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.07993640329916589
team_policy eval average team episode rewards of agent0: 10.0
team_policy eval idv catch total num of agent0: 6
team_policy eval team catch total num: 4
team_policy eval average step individual rewards of agent1: -0.05157674131759223
team_policy eval average team episode rewards of agent1: 10.0
team_policy eval idv catch total num of agent1: 1
team_policy eval team catch total num: 4
team_policy eval average step individual rewards of agent2: 0.026531920225345947
team_policy eval average team episode rewards of agent2: 10.0
team_policy eval idv catch total num of agent2: 4
team_policy eval team catch total num: 4
team_policy eval average step individual rewards of agent3: 0.13552451709702149
team_policy eval average team episode rewards of agent3: 10.0
team_policy eval idv catch total num of agent3: 8
team_policy eval team catch total num: 4
team_policy eval average step individual rewards of agent4: 0.057387097604399306
team_policy eval average team episode rewards of agent4: 10.0
team_policy eval idv catch total num of agent4: 5
team_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent0: -0.004955937444019225
idv_policy eval average team episode rewards of agent0: 5.0
idv_policy eval idv catch total num of agent0: 2
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent1: -0.031519702164503526
idv_policy eval average team episode rewards of agent1: 5.0
idv_policy eval idv catch total num of agent1: 1
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent2: 0.07235099743304109
idv_policy eval average team episode rewards of agent2: 5.0
idv_policy eval idv catch total num of agent2: 5
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent3: -0.03586740647938633
idv_policy eval average team episode rewards of agent3: 5.0
idv_policy eval idv catch total num of agent3: 1
idv_policy eval team catch total num: 2
idv_policy eval average step individual rewards of agent4: -0.014800169979515282
idv_policy eval average team episode rewards of agent4: 5.0
idv_policy eval idv catch total num of agent4: 2
idv_policy eval team catch total num: 2

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 551/10000 episodes, total num timesteps 110400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 552/10000 episodes, total num timesteps 110600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 553/10000 episodes, total num timesteps 110800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 554/10000 episodes, total num timesteps 111000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 555/10000 episodes, total num timesteps 111200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 556/10000 episodes, total num timesteps 111400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 557/10000 episodes, total num timesteps 111600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 558/10000 episodes, total num timesteps 111800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 559/10000 episodes, total num timesteps 112000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 560/10000 episodes, total num timesteps 112200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 561/10000 episodes, total num timesteps 112400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 562/10000 episodes, total num timesteps 112600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 563/10000 episodes, total num timesteps 112800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 564/10000 episodes, total num timesteps 113000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 565/10000 episodes, total num timesteps 113200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 566/10000 episodes, total num timesteps 113400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 567/10000 episodes, total num timesteps 113600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 568/10000 episodes, total num timesteps 113800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 569/10000 episodes, total num timesteps 114000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 570/10000 episodes, total num timesteps 114200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 571/10000 episodes, total num timesteps 114400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 572/10000 episodes, total num timesteps 114600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 573/10000 episodes, total num timesteps 114800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 574/10000 episodes, total num timesteps 115000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 575/10000 episodes, total num timesteps 115200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.07371862390091302
team_policy eval average team episode rewards of agent0: 20.0
team_policy eval idv catch total num of agent0: 5
team_policy eval team catch total num: 8
team_policy eval average step individual rewards of agent1: 0.1794104324182949
team_policy eval average team episode rewards of agent1: 20.0
team_policy eval idv catch total num of agent1: 9
team_policy eval team catch total num: 8
team_policy eval average step individual rewards of agent2: 0.15183211335814184
team_policy eval average team episode rewards of agent2: 20.0
team_policy eval idv catch total num of agent2: 8
team_policy eval team catch total num: 8
team_policy eval average step individual rewards of agent3: 0.18131033750187836
team_policy eval average team episode rewards of agent3: 20.0
team_policy eval idv catch total num of agent3: 9
team_policy eval team catch total num: 8
team_policy eval average step individual rewards of agent4: 0.09763713774194495
team_policy eval average team episode rewards of agent4: 20.0
team_policy eval idv catch total num of agent4: 6
team_policy eval team catch total num: 8
idv_policy eval average step individual rewards of agent0: 0.0803277579680847
idv_policy eval average team episode rewards of agent0: 32.5
idv_policy eval idv catch total num of agent0: 5
idv_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent1: 0.10123098065687669
idv_policy eval average team episode rewards of agent1: 32.5
idv_policy eval idv catch total num of agent1: 6
idv_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent2: 0.4587057047474735
idv_policy eval average team episode rewards of agent2: 32.5
idv_policy eval idv catch total num of agent2: 20
idv_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent3: 0.3615266791891848
idv_policy eval average team episode rewards of agent3: 32.5
idv_policy eval idv catch total num of agent3: 16
idv_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent4: 0.2284902211333213
idv_policy eval average team episode rewards of agent4: 32.5
idv_policy eval idv catch total num of agent4: 11
idv_policy eval team catch total num: 13

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 576/10000 episodes, total num timesteps 115400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 577/10000 episodes, total num timesteps 115600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 578/10000 episodes, total num timesteps 115800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 579/10000 episodes, total num timesteps 116000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 580/10000 episodes, total num timesteps 116200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 581/10000 episodes, total num timesteps 116400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 582/10000 episodes, total num timesteps 116600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 583/10000 episodes, total num timesteps 116800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 584/10000 episodes, total num timesteps 117000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 585/10000 episodes, total num timesteps 117200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 586/10000 episodes, total num timesteps 117400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 587/10000 episodes, total num timesteps 117600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 588/10000 episodes, total num timesteps 117800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 589/10000 episodes, total num timesteps 118000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 590/10000 episodes, total num timesteps 118200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 591/10000 episodes, total num timesteps 118400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 592/10000 episodes, total num timesteps 118600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 593/10000 episodes, total num timesteps 118800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 594/10000 episodes, total num timesteps 119000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 595/10000 episodes, total num timesteps 119200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 596/10000 episodes, total num timesteps 119400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 597/10000 episodes, total num timesteps 119600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 598/10000 episodes, total num timesteps 119800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 599/10000 episodes, total num timesteps 120000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 600/10000 episodes, total num timesteps 120200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.22913057017389565
team_policy eval average team episode rewards of agent0: 20.0
team_policy eval idv catch total num of agent0: 11
team_policy eval team catch total num: 8
team_policy eval average step individual rewards of agent1: 0.1459877616574239
team_policy eval average team episode rewards of agent1: 20.0
team_policy eval idv catch total num of agent1: 8
team_policy eval team catch total num: 8
team_policy eval average step individual rewards of agent2: 0.06888278071047788
team_policy eval average team episode rewards of agent2: 20.0
team_policy eval idv catch total num of agent2: 5
team_policy eval team catch total num: 8
team_policy eval average step individual rewards of agent3: 0.028815701334969756
team_policy eval average team episode rewards of agent3: 20.0
team_policy eval idv catch total num of agent3: 3
team_policy eval team catch total num: 8
team_policy eval average step individual rewards of agent4: 0.15118833189376807
team_policy eval average team episode rewards of agent4: 20.0
team_policy eval idv catch total num of agent4: 8
team_policy eval team catch total num: 8
idv_policy eval average step individual rewards of agent0: 0.3817830576162882
idv_policy eval average team episode rewards of agent0: 30.0
idv_policy eval idv catch total num of agent0: 17
idv_policy eval team catch total num: 12
idv_policy eval average step individual rewards of agent1: 0.24999432338700026
idv_policy eval average team episode rewards of agent1: 30.0
idv_policy eval idv catch total num of agent1: 12
idv_policy eval team catch total num: 12
idv_policy eval average step individual rewards of agent2: 0.1531475671467755
idv_policy eval average team episode rewards of agent2: 30.0
idv_policy eval idv catch total num of agent2: 8
idv_policy eval team catch total num: 12
idv_policy eval average step individual rewards of agent3: 0.07857346357244865
idv_policy eval average team episode rewards of agent3: 30.0
idv_policy eval idv catch total num of agent3: 5
idv_policy eval team catch total num: 12
idv_policy eval average step individual rewards of agent4: 0.10382207684116353
idv_policy eval average team episode rewards of agent4: 30.0
idv_policy eval idv catch total num of agent4: 6
idv_policy eval team catch total num: 12

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 601/10000 episodes, total num timesteps 120400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 602/10000 episodes, total num timesteps 120600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 603/10000 episodes, total num timesteps 120800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 604/10000 episodes, total num timesteps 121000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 605/10000 episodes, total num timesteps 121200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 606/10000 episodes, total num timesteps 121400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 607/10000 episodes, total num timesteps 121600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 608/10000 episodes, total num timesteps 121800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 609/10000 episodes, total num timesteps 122000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 610/10000 episodes, total num timesteps 122200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 611/10000 episodes, total num timesteps 122400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 612/10000 episodes, total num timesteps 122600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 613/10000 episodes, total num timesteps 122800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 614/10000 episodes, total num timesteps 123000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 615/10000 episodes, total num timesteps 123200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 616/10000 episodes, total num timesteps 123400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 617/10000 episodes, total num timesteps 123600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 618/10000 episodes, total num timesteps 123800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 619/10000 episodes, total num timesteps 124000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 620/10000 episodes, total num timesteps 124200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 621/10000 episodes, total num timesteps 124400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 622/10000 episodes, total num timesteps 124600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 623/10000 episodes, total num timesteps 124800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 624/10000 episodes, total num timesteps 125000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 625/10000 episodes, total num timesteps 125200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.06013550465857332
team_policy eval average team episode rewards of agent0: 17.5
team_policy eval idv catch total num of agent0: 5
team_policy eval team catch total num: 7
team_policy eval average step individual rewards of agent1: 0.09372065710122296
team_policy eval average team episode rewards of agent1: 17.5
team_policy eval idv catch total num of agent1: 6
team_policy eval team catch total num: 7
team_policy eval average step individual rewards of agent2: 0.05233070089419122
team_policy eval average team episode rewards of agent2: 17.5
team_policy eval idv catch total num of agent2: 4
team_policy eval team catch total num: 7
team_policy eval average step individual rewards of agent3: 0.12185778553256588
team_policy eval average team episode rewards of agent3: 17.5
team_policy eval idv catch total num of agent3: 7
team_policy eval team catch total num: 7
team_policy eval average step individual rewards of agent4: 0.1707170694991764
team_policy eval average team episode rewards of agent4: 17.5
team_policy eval idv catch total num of agent4: 9
team_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent0: -0.010779198108270884
idv_policy eval average team episode rewards of agent0: 10.0
idv_policy eval idv catch total num of agent0: 2
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent1: 0.1110753784972101
idv_policy eval average team episode rewards of agent1: 10.0
idv_policy eval idv catch total num of agent1: 7
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent2: 0.056842279769804646
idv_policy eval average team episode rewards of agent2: 10.0
idv_policy eval idv catch total num of agent2: 5
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent3: 0.09196645111829006
idv_policy eval average team episode rewards of agent3: 10.0
idv_policy eval idv catch total num of agent3: 6
idv_policy eval team catch total num: 4
idv_policy eval average step individual rewards of agent4: 0.08800173720589063
idv_policy eval average team episode rewards of agent4: 10.0
idv_policy eval idv catch total num of agent4: 6
idv_policy eval team catch total num: 4
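The per-agent eval lines above follow a fixed textual format. A minimal sketch of parsing them into structured records (the regex is inferred from the examples in this log, not taken from the training script; field names are assumptions):

```python
import re

# Matches lines like:
#   "team_policy eval average step individual rewards of agent0: 0.06..."
#   "idv_policy eval idv catch total num of agent4: 6"
# Team-level lines ("... team catch total num: 7") have no agent id and
# deliberately return None here.
EVAL_RE = re.compile(
    r"(?P<policy>team_policy|idv_policy) eval "
    r"(?P<metric>average step individual rewards|"
    r"average team episode rewards|idv catch total num) "
    r"of agent(?P<agent>\d+): (?P<value>-?[\d.]+)"
)

def parse_eval_line(line):
    """Return (policy, metric, agent_id, value) or None if no match."""
    m = EVAL_RE.match(line.strip())
    if m is None:
        return None
    return (m.group("policy"), m.group("metric"),
            int(m.group("agent")), float(m.group("value")))

print(parse_eval_line("idv_policy eval idv catch total num of agent4: 6"))
# → ('idv_policy', 'idv catch total num', 4, 6.0)
```

Grouping the tuples by `(policy, metric)` then recovers one row per eval block per agent, which is convenient for plotting alongside the W&B curves.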

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 626/10000 episodes, total num timesteps 125400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 627/10000 episodes, total num timesteps 125600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 628/10000 episodes, total num timesteps 125800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 629/10000 episodes, total num timesteps 126000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 630/10000 episodes, total num timesteps 126200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 631/10000 episodes, total num timesteps 126400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 632/10000 episodes, total num timesteps 126600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 633/10000 episodes, total num timesteps 126800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 634/10000 episodes, total num timesteps 127000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 635/10000 episodes, total num timesteps 127200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 636/10000 episodes, total num timesteps 127400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 637/10000 episodes, total num timesteps 127600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 638/10000 episodes, total num timesteps 127800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 639/10000 episodes, total num timesteps 128000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 640/10000 episodes, total num timesteps 128200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 641/10000 episodes, total num timesteps 128400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 642/10000 episodes, total num timesteps 128600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 643/10000 episodes, total num timesteps 128800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 644/10000 episodes, total num timesteps 129000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 645/10000 episodes, total num timesteps 129200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 646/10000 episodes, total num timesteps 129400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 647/10000 episodes, total num timesteps 129600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 648/10000 episodes, total num timesteps 129800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 649/10000 episodes, total num timesteps 130000/2000000, FPS 333.
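The periodic progress lines also follow one fixed pattern (scenario, algorithm, experiment tag, update counter, timestep counter, FPS). A sketch of extracting the counters, with the regex again inferred from the examples here rather than from the logger's source:

```python
import re

# Pattern inferred from lines like:
#   " Scenario simple_tag_tr Algo rmappotrsyn Exp exp_... updates 649/10000
#    episodes, total num timesteps 130000/2000000, FPS 333."
PROGRESS_RE = re.compile(
    r"Scenario (?P<scenario>\S+) Algo (?P<algo>\S+) Exp (?P<exp>\S+) "
    r"updates (?P<update>\d+)/(?P<total_updates>\d+) episodes, "
    r"total num timesteps (?P<step>\d+)/(?P<total_steps>\d+), "
    r"FPS (?P<fps>\d+)\."
)

def parse_progress(line):
    """Return a dict of counters, or None for non-progress lines."""
    m = PROGRESS_RE.search(line)
    if m is None:
        return None
    return {k: (v if k in ("scenario", "algo", "exp") else int(v))
            for k, v in m.groupdict().items()}

rec = parse_progress(
    " Scenario simple_tag_tr Algo rmappotrsyn "
    "Exp exp_train_continue_tag_base_CMT_s2r2_v1 "
    "updates 649/10000 episodes, total num timesteps 130000/2000000, "
    "FPS 333.")
print(rec["update"], rec["step"], rec["fps"])  # → 649 130000 333
```

Note the counters imply a constant 200 timesteps per update (130000 / 649 ≈ 200.3 includes the initial logged step), consistent with the episode length visible throughout this log.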


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 650/10000 episodes, total num timesteps 130200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.20116682748883855
team_policy eval average team episode rewards of agent0: 32.5
team_policy eval idv catch total num of agent0: 10
team_policy eval team catch total num: 13
team_policy eval average step individual rewards of agent1: 0.2007847641411684
team_policy eval average team episode rewards of agent1: 32.5
team_policy eval idv catch total num of agent1: 10
team_policy eval team catch total num: 13
team_policy eval average step individual rewards of agent2: 0.27371900180741754
team_policy eval average team episode rewards of agent2: 32.5
team_policy eval idv catch total num of agent2: 13
team_policy eval team catch total num: 13
team_policy eval average step individual rewards of agent3: 0.17320638728049986
team_policy eval average team episode rewards of agent3: 32.5
team_policy eval idv catch total num of agent3: 9
team_policy eval team catch total num: 13
team_policy eval average step individual rewards of agent4: 0.2286024974841834
team_policy eval average team episode rewards of agent4: 32.5
team_policy eval idv catch total num of agent4: 11
team_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent0: 0.0488029240499744
idv_policy eval average team episode rewards of agent0: 17.5
idv_policy eval idv catch total num of agent0: 4
idv_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent1: 0.12648806421346834
idv_policy eval average team episode rewards of agent1: 17.5
idv_policy eval idv catch total num of agent1: 7
idv_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent2: 0.2581710515381611
idv_policy eval average team episode rewards of agent2: 17.5
idv_policy eval idv catch total num of agent2: 12
idv_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent3: 0.14890127577704834
idv_policy eval average team episode rewards of agent3: 17.5
idv_policy eval idv catch total num of agent3: 8
idv_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent4: 0.14839968530559422
idv_policy eval average team episode rewards of agent4: 17.5
idv_policy eval idv catch total num of agent4: 8
idv_policy eval team catch total num: 7

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 651/10000 episodes, total num timesteps 130400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 652/10000 episodes, total num timesteps 130600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 653/10000 episodes, total num timesteps 130800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 654/10000 episodes, total num timesteps 131000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 655/10000 episodes, total num timesteps 131200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 656/10000 episodes, total num timesteps 131400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 657/10000 episodes, total num timesteps 131600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 658/10000 episodes, total num timesteps 131800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 659/10000 episodes, total num timesteps 132000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 660/10000 episodes, total num timesteps 132200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 661/10000 episodes, total num timesteps 132400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 662/10000 episodes, total num timesteps 132600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 663/10000 episodes, total num timesteps 132800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 664/10000 episodes, total num timesteps 133000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 665/10000 episodes, total num timesteps 133200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 666/10000 episodes, total num timesteps 133400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 667/10000 episodes, total num timesteps 133600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 668/10000 episodes, total num timesteps 133800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 669/10000 episodes, total num timesteps 134000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 670/10000 episodes, total num timesteps 134200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 671/10000 episodes, total num timesteps 134400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 672/10000 episodes, total num timesteps 134600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 673/10000 episodes, total num timesteps 134800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 674/10000 episodes, total num timesteps 135000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 675/10000 episodes, total num timesteps 135200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.2168750951203434
team_policy eval average team episode rewards of agent0: 27.5
team_policy eval idv catch total num of agent0: 11
team_policy eval team catch total num: 11
team_policy eval average step individual rewards of agent1: 0.14877873842836073
team_policy eval average team episode rewards of agent1: 27.5
team_policy eval idv catch total num of agent1: 8
team_policy eval team catch total num: 11
team_policy eval average step individual rewards of agent2: 0.226147337149027
team_policy eval average team episode rewards of agent2: 27.5
team_policy eval idv catch total num of agent2: 11
team_policy eval team catch total num: 11
team_policy eval average step individual rewards of agent3: 0.18736324729395426
team_policy eval average team episode rewards of agent3: 27.5
team_policy eval idv catch total num of agent3: 10
team_policy eval team catch total num: 11
team_policy eval average step individual rewards of agent4: 0.11774396377743417
team_policy eval average team episode rewards of agent4: 27.5
team_policy eval idv catch total num of agent4: 7
team_policy eval team catch total num: 11
idv_policy eval average step individual rewards of agent0: 0.17789071122104208
idv_policy eval average team episode rewards of agent0: 17.5
idv_policy eval idv catch total num of agent0: 9
idv_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent1: 0.11976503587536676
idv_policy eval average team episode rewards of agent1: 17.5
idv_policy eval idv catch total num of agent1: 7
idv_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent2: 0.12387513608747526
idv_policy eval average team episode rewards of agent2: 17.5
idv_policy eval idv catch total num of agent2: 7
idv_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent3: 0.0955229108060729
idv_policy eval average team episode rewards of agent3: 17.5
idv_policy eval idv catch total num of agent3: 6
idv_policy eval team catch total num: 7
idv_policy eval average step individual rewards of agent4: 0.18901141999644624
idv_policy eval average team episode rewards of agent4: 17.5
idv_policy eval idv catch total num of agent4: 10
idv_policy eval team catch total num: 7

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 676/10000 episodes, total num timesteps 135400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 677/10000 episodes, total num timesteps 135600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 678/10000 episodes, total num timesteps 135800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 679/10000 episodes, total num timesteps 136000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 680/10000 episodes, total num timesteps 136200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 681/10000 episodes, total num timesteps 136400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 682/10000 episodes, total num timesteps 136600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 683/10000 episodes, total num timesteps 136800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 684/10000 episodes, total num timesteps 137000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 685/10000 episodes, total num timesteps 137200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 686/10000 episodes, total num timesteps 137400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 687/10000 episodes, total num timesteps 137600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 688/10000 episodes, total num timesteps 137800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 689/10000 episodes, total num timesteps 138000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 690/10000 episodes, total num timesteps 138200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 691/10000 episodes, total num timesteps 138400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 692/10000 episodes, total num timesteps 138600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 693/10000 episodes, total num timesteps 138800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 694/10000 episodes, total num timesteps 139000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 695/10000 episodes, total num timesteps 139200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 696/10000 episodes, total num timesteps 139400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 697/10000 episodes, total num timesteps 139600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 698/10000 episodes, total num timesteps 139800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 699/10000 episodes, total num timesteps 140000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 700/10000 episodes, total num timesteps 140200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.27821241709618666
team_policy eval average team episode rewards of agent0: 47.5
team_policy eval idv catch total num of agent0: 13
team_policy eval team catch total num: 19
team_policy eval average step individual rewards of agent1: 0.40396000336226534
team_policy eval average team episode rewards of agent1: 47.5
team_policy eval idv catch total num of agent1: 18
team_policy eval team catch total num: 19
team_policy eval average step individual rewards of agent2: 0.35405075332933394
team_policy eval average team episode rewards of agent2: 47.5
team_policy eval idv catch total num of agent2: 16
team_policy eval team catch total num: 19
team_policy eval average step individual rewards of agent3: 0.12376446603774656
team_policy eval average team episode rewards of agent3: 47.5
team_policy eval idv catch total num of agent3: 7
team_policy eval team catch total num: 19
team_policy eval average step individual rewards of agent4: 0.30569821947845727
team_policy eval average team episode rewards of agent4: 47.5
team_policy eval idv catch total num of agent4: 14
team_policy eval team catch total num: 19
idv_policy eval average step individual rewards of agent0: 0.12811095524023552
idv_policy eval average team episode rewards of agent0: 32.5
idv_policy eval idv catch total num of agent0: 7
idv_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent1: 0.19593972913067603
idv_policy eval average team episode rewards of agent1: 32.5
idv_policy eval idv catch total num of agent1: 10
idv_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent2: 0.35658911951357924
idv_policy eval average team episode rewards of agent2: 32.5
idv_policy eval idv catch total num of agent2: 16
idv_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent3: 0.2278694418396122
idv_policy eval average team episode rewards of agent3: 32.5
idv_policy eval idv catch total num of agent3: 11
idv_policy eval team catch total num: 13
idv_policy eval average step individual rewards of agent4: 0.13645505032013852
idv_policy eval average team episode rewards of agent4: 32.5
idv_policy eval idv catch total num of agent4: 8
idv_policy eval team catch total num: 13

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 701/10000 episodes, total num timesteps 140400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 702/10000 episodes, total num timesteps 140600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 703/10000 episodes, total num timesteps 140800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 704/10000 episodes, total num timesteps 141000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 705/10000 episodes, total num timesteps 141200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 706/10000 episodes, total num timesteps 141400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 707/10000 episodes, total num timesteps 141600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 708/10000 episodes, total num timesteps 141800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 709/10000 episodes, total num timesteps 142000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 710/10000 episodes, total num timesteps 142200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 711/10000 episodes, total num timesteps 142400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 712/10000 episodes, total num timesteps 142600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 713/10000 episodes, total num timesteps 142800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 714/10000 episodes, total num timesteps 143000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 715/10000 episodes, total num timesteps 143200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 716/10000 episodes, total num timesteps 143400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 717/10000 episodes, total num timesteps 143600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 718/10000 episodes, total num timesteps 143800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 719/10000 episodes, total num timesteps 144000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 720/10000 episodes, total num timesteps 144200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 721/10000 episodes, total num timesteps 144400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 722/10000 episodes, total num timesteps 144600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 723/10000 episodes, total num timesteps 144800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 724/10000 episodes, total num timesteps 145000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 725/10000 episodes, total num timesteps 145200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.12752077337806988
team_policy eval average team episode rewards of agent0: 25.0
team_policy eval idv catch total num of agent0: 7
team_policy eval team catch total num: 10
team_policy eval average step individual rewards of agent1: 0.017436488838210206
team_policy eval average team episode rewards of agent1: 25.0
team_policy eval idv catch total num of agent1: 3
team_policy eval team catch total num: 10
team_policy eval average step individual rewards of agent2: 0.3242546788151925
team_policy eval average team episode rewards of agent2: 25.0
team_policy eval idv catch total num of agent2: 15
team_policy eval team catch total num: 10
team_policy eval average step individual rewards of agent3: 0.22373932720621678
team_policy eval average team episode rewards of agent3: 25.0
team_policy eval idv catch total num of agent3: 11
team_policy eval team catch total num: 10
team_policy eval average step individual rewards of agent4: 0.12620175731056368
team_policy eval average team episode rewards of agent4: 25.0
team_policy eval idv catch total num of agent4: 7
team_policy eval team catch total num: 10
idv_policy eval average step individual rewards of agent0: 0.10420501700405307
idv_policy eval average team episode rewards of agent0: 35.0
idv_policy eval idv catch total num of agent0: 6
idv_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent1: 0.20083942305940922
idv_policy eval average team episode rewards of agent1: 35.0
idv_policy eval idv catch total num of agent1: 10
idv_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent2: 0.20308986452403416
idv_policy eval average team episode rewards of agent2: 35.0
idv_policy eval idv catch total num of agent2: 10
idv_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent3: 0.27190615444990196
idv_policy eval average team episode rewards of agent3: 35.0
idv_policy eval idv catch total num of agent3: 13
idv_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent4: 0.3249469824657317
idv_policy eval average team episode rewards of agent4: 35.0
idv_policy eval idv catch total num of agent4: 15
idv_policy eval team catch total num: 14

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 726/10000 episodes, total num timesteps 145400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 727/10000 episodes, total num timesteps 145600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 728/10000 episodes, total num timesteps 145800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 729/10000 episodes, total num timesteps 146000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 730/10000 episodes, total num timesteps 146200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 731/10000 episodes, total num timesteps 146400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 732/10000 episodes, total num timesteps 146600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 733/10000 episodes, total num timesteps 146800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 734/10000 episodes, total num timesteps 147000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 735/10000 episodes, total num timesteps 147200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 736/10000 episodes, total num timesteps 147400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 737/10000 episodes, total num timesteps 147600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 738/10000 episodes, total num timesteps 147800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 739/10000 episodes, total num timesteps 148000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 740/10000 episodes, total num timesteps 148200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 741/10000 episodes, total num timesteps 148400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 742/10000 episodes, total num timesteps 148600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 743/10000 episodes, total num timesteps 148800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 744/10000 episodes, total num timesteps 149000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 745/10000 episodes, total num timesteps 149200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 746/10000 episodes, total num timesteps 149400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 747/10000 episodes, total num timesteps 149600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 748/10000 episodes, total num timesteps 149800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 749/10000 episodes, total num timesteps 150000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 750/10000 episodes, total num timesteps 150200/2000000, FPS 333.

team_policy eval average step individual rewards of agent0: 0.25154146542983896
team_policy eval average team episode rewards of agent0: 37.5
team_policy eval idv catch total num of agent0: 12
team_policy eval team catch total num: 15
team_policy eval average step individual rewards of agent1: 0.38394431553300845
team_policy eval average team episode rewards of agent1: 37.5
team_policy eval idv catch total num of agent1: 17
team_policy eval team catch total num: 15
team_policy eval average step individual rewards of agent2: 0.30512786580485624
team_policy eval average team episode rewards of agent2: 37.5
team_policy eval idv catch total num of agent2: 14
team_policy eval team catch total num: 15
team_policy eval average step individual rewards of agent3: 0.15760344107104088
team_policy eval average team episode rewards of agent3: 37.5
team_policy eval idv catch total num of agent3: 8
team_policy eval team catch total num: 15
team_policy eval average step individual rewards of agent4: 0.3253076099095925
team_policy eval average team episode rewards of agent4: 37.5
team_policy eval idv catch total num of agent4: 15
team_policy eval team catch total num: 15
idv_policy eval average step individual rewards of agent0: 0.14982893390886953
idv_policy eval average team episode rewards of agent0: 35.0
idv_policy eval idv catch total num of agent0: 8
idv_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent1: 0.32635825915707917
idv_policy eval average team episode rewards of agent1: 35.0
idv_policy eval idv catch total num of agent1: 15
idv_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent2: 0.25237704984036624
idv_policy eval average team episode rewards of agent2: 35.0
idv_policy eval idv catch total num of agent2: 12
idv_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent3: 0.17811104973306763
idv_policy eval average team episode rewards of agent3: 35.0
idv_policy eval idv catch total num of agent3: 9
idv_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent4: 0.27282283625841436
idv_policy eval average team episode rewards of agent4: 35.0
idv_policy eval idv catch total num of agent4: 13
idv_policy eval team catch total num: 14
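Across every eval block in this section, the reported average team episode reward equals 2.5 times the team catch total (7 → 17.5, 13 → 32.5, 19 → 47.5, and so on). That per-catch factor is an inference from the printed numbers, not a documented constant of the environment; a quick sanity check under that assumption:

```python
# (team catch total, avg team episode reward) pairs read off the eval
# printouts above; the 2.5-per-catch factor is inferred from the data.
pairs = [(7, 17.5), (4, 10.0), (13, 32.5), (11, 27.5),
         (19, 47.5), (10, 25.0), (15, 37.5), (14, 35.0)]

for catches, reward in pairs:
    assert reward == 2.5 * catches
print("all eval blocks consistent with 2.5 reward per team catch")
```

If this relationship holds by construction, the team catch total and the team episode reward are redundant signals, and either one alone tracks team-level progress.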

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 751/10000 episodes, total num timesteps 150400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 752/10000 episodes, total num timesteps 150600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 753/10000 episodes, total num timesteps 150800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 754/10000 episodes, total num timesteps 151000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 755/10000 episodes, total num timesteps 151200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 756/10000 episodes, total num timesteps 151400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 757/10000 episodes, total num timesteps 151600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 758/10000 episodes, total num timesteps 151800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 759/10000 episodes, total num timesteps 152000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 760/10000 episodes, total num timesteps 152200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 761/10000 episodes, total num timesteps 152400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 762/10000 episodes, total num timesteps 152600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 763/10000 episodes, total num timesteps 152800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 764/10000 episodes, total num timesteps 153000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 765/10000 episodes, total num timesteps 153200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 766/10000 episodes, total num timesteps 153400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 767/10000 episodes, total num timesteps 153600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 768/10000 episodes, total num timesteps 153800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 769/10000 episodes, total num timesteps 154000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 770/10000 episodes, total num timesteps 154200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 771/10000 episodes, total num timesteps 154400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 772/10000 episodes, total num timesteps 154600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 773/10000 episodes, total num timesteps 154800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 774/10000 episodes, total num timesteps 155000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 775/10000 episodes, total num timesteps 155200/2000000, FPS 333.
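
Each progress line reports timesteps done, the 2,000,000-step target, and the current FPS, so remaining wall-clock time falls out directly. A small sketch (the `eta_seconds` helper is a name of my choosing):

```python
def eta_seconds(current_steps: int, total_steps: int, fps: float) -> float:
    """Estimate seconds of training left from steps done, target, and reported FPS."""
    return (total_steps - current_steps) / fps

# From the line above: "total num timesteps 155200/2000000, FPS 333."
remaining = eta_seconds(155_200, 2_000_000, 333)
print(f"{remaining / 3600:.1f} hours remaining")  # → 1.5 hours remaining
```

This assumes FPS stays near its current value; the log shows it drifting only slightly (333 → 328 over this section), so the estimate is reasonably stable here.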

team_policy eval average step individual rewards of agent0: 0.45299796361296063
team_policy eval average team episode rewards of agent0: 47.5
team_policy eval idv catch total num of agent0: 20
team_policy eval team catch total num: 19
team_policy eval average step individual rewards of agent1: 0.3863038275588032
team_policy eval average team episode rewards of agent1: 47.5
team_policy eval idv catch total num of agent1: 17
team_policy eval team catch total num: 19
team_policy eval average step individual rewards of agent2: 0.3857384290171534
team_policy eval average team episode rewards of agent2: 47.5
team_policy eval idv catch total num of agent2: 17
team_policy eval team catch total num: 19
team_policy eval average step individual rewards of agent3: 0.43924185035603225
team_policy eval average team episode rewards of agent3: 47.5
team_policy eval idv catch total num of agent3: 19
team_policy eval team catch total num: 19
team_policy eval average step individual rewards of agent4: 0.5614571516212092
team_policy eval average team episode rewards of agent4: 47.5
team_policy eval idv catch total num of agent4: 24
team_policy eval team catch total num: 19
idv_policy eval average step individual rewards of agent0: 0.09546484257094341
idv_policy eval average team episode rewards of agent0: 45.0
idv_policy eval idv catch total num of agent0: 6
idv_policy eval team catch total num: 18
idv_policy eval average step individual rewards of agent1: 0.4786021697949815
idv_policy eval average team episode rewards of agent1: 45.0
idv_policy eval idv catch total num of agent1: 21
idv_policy eval team catch total num: 18
idv_policy eval average step individual rewards of agent2: 0.37738208024483993
idv_policy eval average team episode rewards of agent2: 45.0
idv_policy eval idv catch total num of agent2: 17
idv_policy eval team catch total num: 18
idv_policy eval average step individual rewards of agent3: 0.3834443333614584
idv_policy eval average team episode rewards of agent3: 45.0
idv_policy eval idv catch total num of agent3: 17
idv_policy eval team catch total num: 18
idv_policy eval average step individual rewards of agent4: 0.19841635705927146
idv_policy eval average team episode rewards of agent4: 45.0
idv_policy eval idv catch total num of agent4: 10
idv_policy eval team catch total num: 18

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 776/10000 episodes, total num timesteps 155400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 777/10000 episodes, total num timesteps 155600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 778/10000 episodes, total num timesteps 155800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 779/10000 episodes, total num timesteps 156000/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 780/10000 episodes, total num timesteps 156200/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 781/10000 episodes, total num timesteps 156400/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 782/10000 episodes, total num timesteps 156600/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 783/10000 episodes, total num timesteps 156800/2000000, FPS 333.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 784/10000 episodes, total num timesteps 157000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 785/10000 episodes, total num timesteps 157200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 786/10000 episodes, total num timesteps 157400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 787/10000 episodes, total num timesteps 157600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 788/10000 episodes, total num timesteps 157800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 789/10000 episodes, total num timesteps 158000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 790/10000 episodes, total num timesteps 158200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 791/10000 episodes, total num timesteps 158400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 792/10000 episodes, total num timesteps 158600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 793/10000 episodes, total num timesteps 158800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 794/10000 episodes, total num timesteps 159000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 795/10000 episodes, total num timesteps 159200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 796/10000 episodes, total num timesteps 159400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 797/10000 episodes, total num timesteps 159600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 798/10000 episodes, total num timesteps 159800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 799/10000 episodes, total num timesteps 160000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 800/10000 episodes, total num timesteps 160200/2000000, FPS 332.

team_policy eval average step individual rewards of agent0: 0.2962030923069216
team_policy eval average team episode rewards of agent0: 42.5
team_policy eval idv catch total num of agent0: 14
team_policy eval team catch total num: 17
team_policy eval average step individual rewards of agent1: 0.17201484168987602
team_policy eval average team episode rewards of agent1: 42.5
team_policy eval idv catch total num of agent1: 9
team_policy eval team catch total num: 17
team_policy eval average step individual rewards of agent2: 0.23054934098560215
team_policy eval average team episode rewards of agent2: 42.5
team_policy eval idv catch total num of agent2: 11
team_policy eval team catch total num: 17
team_policy eval average step individual rewards of agent3: 0.3340180338848179
team_policy eval average team episode rewards of agent3: 42.5
team_policy eval idv catch total num of agent3: 15
team_policy eval team catch total num: 17
team_policy eval average step individual rewards of agent4: 0.2753549256723416
team_policy eval average team episode rewards of agent4: 42.5
team_policy eval idv catch total num of agent4: 13
team_policy eval team catch total num: 17
idv_policy eval average step individual rewards of agent0: 0.06992317839477058
idv_policy eval average team episode rewards of agent0: 25.0
idv_policy eval idv catch total num of agent0: 5
idv_policy eval team catch total num: 10
idv_policy eval average step individual rewards of agent1: 0.25274308602465795
idv_policy eval average team episode rewards of agent1: 25.0
idv_policy eval idv catch total num of agent1: 12
idv_policy eval team catch total num: 10
idv_policy eval average step individual rewards of agent2: 0.19541491234363903
idv_policy eval average team episode rewards of agent2: 25.0
idv_policy eval idv catch total num of agent2: 10
idv_policy eval team catch total num: 10
idv_policy eval average step individual rewards of agent3: 0.19924365595594262
idv_policy eval average team episode rewards of agent3: 25.0
idv_policy eval idv catch total num of agent3: 10
idv_policy eval team catch total num: 10
idv_policy eval average step individual rewards of agent4: 0.07021222459072511
idv_policy eval average team episode rewards of agent4: 25.0
idv_policy eval idv catch total num of agent4: 5
idv_policy eval team catch total num: 10
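
At this eval the two policies diverge noticeably (team catch 17 vs. idv catch 10, versus near-parity earlier). A sketch comparing the "team catch total num" figures across the evals in this section, with the values transcribed directly from the printouts above and below:

```python
# (update, team_policy team-catch total, idv_policy team-catch total),
# transcribed from the eval blocks in this log
evals = [
    (750, 15, 14),
    (775, 19, 18),
    (800, 17, 10),
    (825, 26, 28),
    (850, 29, 21),
    (875, 31, 22),
]

for update, team, idv in evals:
    print(f"update {update:4d}: team={team:2d} idv={idv:2d} gap={team - idv:+d}")
```

Both policies trend upward over updates 750-875, but the team policy pulls ahead at most checkpoints except update 825.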

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 801/10000 episodes, total num timesteps 160400/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 802/10000 episodes, total num timesteps 160600/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 803/10000 episodes, total num timesteps 160800/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 804/10000 episodes, total num timesteps 161000/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 805/10000 episodes, total num timesteps 161200/2000000, FPS 332.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 806/10000 episodes, total num timesteps 161400/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 807/10000 episodes, total num timesteps 161600/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 808/10000 episodes, total num timesteps 161800/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 809/10000 episodes, total num timesteps 162000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 810/10000 episodes, total num timesteps 162200/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 811/10000 episodes, total num timesteps 162400/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 812/10000 episodes, total num timesteps 162600/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 813/10000 episodes, total num timesteps 162800/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 814/10000 episodes, total num timesteps 163000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 815/10000 episodes, total num timesteps 163200/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 816/10000 episodes, total num timesteps 163400/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 817/10000 episodes, total num timesteps 163600/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 818/10000 episodes, total num timesteps 163800/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 819/10000 episodes, total num timesteps 164000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 820/10000 episodes, total num timesteps 164200/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 821/10000 episodes, total num timesteps 164400/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 822/10000 episodes, total num timesteps 164600/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 823/10000 episodes, total num timesteps 164800/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 824/10000 episodes, total num timesteps 165000/2000000, FPS 331.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 825/10000 episodes, total num timesteps 165200/2000000, FPS 330.

team_policy eval average step individual rewards of agent0: 0.4109889261349089
team_policy eval average team episode rewards of agent0: 65.0
team_policy eval idv catch total num of agent0: 18
team_policy eval team catch total num: 26
team_policy eval average step individual rewards of agent1: 0.5637802830543922
team_policy eval average team episode rewards of agent1: 65.0
team_policy eval idv catch total num of agent1: 24
team_policy eval team catch total num: 26
team_policy eval average step individual rewards of agent2: 0.48437482481180866
team_policy eval average team episode rewards of agent2: 65.0
team_policy eval idv catch total num of agent2: 21
team_policy eval team catch total num: 26
team_policy eval average step individual rewards of agent3: 0.4878119350436981
team_policy eval average team episode rewards of agent3: 65.0
team_policy eval idv catch total num of agent3: 21
team_policy eval team catch total num: 26
team_policy eval average step individual rewards of agent4: 0.4329811409849374
team_policy eval average team episode rewards of agent4: 65.0
team_policy eval idv catch total num of agent4: 19
team_policy eval team catch total num: 26
idv_policy eval average step individual rewards of agent0: 0.4060064201935005
idv_policy eval average team episode rewards of agent0: 70.0
idv_policy eval idv catch total num of agent0: 18
idv_policy eval team catch total num: 28
idv_policy eval average step individual rewards of agent1: 0.4550954824594827
idv_policy eval average team episode rewards of agent1: 70.0
idv_policy eval idv catch total num of agent1: 20
idv_policy eval team catch total num: 28
idv_policy eval average step individual rewards of agent2: 0.40419236748366294
idv_policy eval average team episode rewards of agent2: 70.0
idv_policy eval idv catch total num of agent2: 18
idv_policy eval team catch total num: 28
idv_policy eval average step individual rewards of agent3: 0.5872561961636371
idv_policy eval average team episode rewards of agent3: 70.0
idv_policy eval idv catch total num of agent3: 25
idv_policy eval team catch total num: 28
idv_policy eval average step individual rewards of agent4: 0.42413384730439657
idv_policy eval average team episode rewards of agent4: 70.0
idv_policy eval idv catch total num of agent4: 19
idv_policy eval team catch total num: 28

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 826/10000 episodes, total num timesteps 165400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 827/10000 episodes, total num timesteps 165600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 828/10000 episodes, total num timesteps 165800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 829/10000 episodes, total num timesteps 166000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 830/10000 episodes, total num timesteps 166200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 831/10000 episodes, total num timesteps 166400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 832/10000 episodes, total num timesteps 166600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 833/10000 episodes, total num timesteps 166800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 834/10000 episodes, total num timesteps 167000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 835/10000 episodes, total num timesteps 167200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 836/10000 episodes, total num timesteps 167400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 837/10000 episodes, total num timesteps 167600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 838/10000 episodes, total num timesteps 167800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 839/10000 episodes, total num timesteps 168000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 840/10000 episodes, total num timesteps 168200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 841/10000 episodes, total num timesteps 168400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 842/10000 episodes, total num timesteps 168600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 843/10000 episodes, total num timesteps 168800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 844/10000 episodes, total num timesteps 169000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 845/10000 episodes, total num timesteps 169200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 846/10000 episodes, total num timesteps 169400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 847/10000 episodes, total num timesteps 169600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 848/10000 episodes, total num timesteps 169800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 849/10000 episodes, total num timesteps 170000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 850/10000 episodes, total num timesteps 170200/2000000, FPS 330.

team_policy eval average step individual rewards of agent0: 0.6734133088003622
team_policy eval average team episode rewards of agent0: 72.5
team_policy eval idv catch total num of agent0: 29
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent1: 0.3534567621444121
team_policy eval average team episode rewards of agent1: 72.5
team_policy eval idv catch total num of agent1: 16
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent2: 0.4591597844333062
team_policy eval average team episode rewards of agent2: 72.5
team_policy eval idv catch total num of agent2: 20
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent3: 0.5574508055480348
team_policy eval average team episode rewards of agent3: 72.5
team_policy eval idv catch total num of agent3: 24
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent4: 0.4992667646759309
team_policy eval average team episode rewards of agent4: 72.5
team_policy eval idv catch total num of agent4: 22
team_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent0: 0.6883007209801965
idv_policy eval average team episode rewards of agent0: 52.5
idv_policy eval idv catch total num of agent0: 29
idv_policy eval team catch total num: 21
idv_policy eval average step individual rewards of agent1: 0.37954968598965527
idv_policy eval average team episode rewards of agent1: 52.5
idv_policy eval idv catch total num of agent1: 17
idv_policy eval team catch total num: 21
idv_policy eval average step individual rewards of agent2: 0.32855689052212866
idv_policy eval average team episode rewards of agent2: 52.5
idv_policy eval idv catch total num of agent2: 15
idv_policy eval team catch total num: 21
idv_policy eval average step individual rewards of agent3: 0.36087628219709034
idv_policy eval average team episode rewards of agent3: 52.5
idv_policy eval idv catch total num of agent3: 16
idv_policy eval team catch total num: 21
idv_policy eval average step individual rewards of agent4: 0.3980075782767013
idv_policy eval average team episode rewards of agent4: 52.5
idv_policy eval idv catch total num of agent4: 18
idv_policy eval team catch total num: 21

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 851/10000 episodes, total num timesteps 170400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 852/10000 episodes, total num timesteps 170600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 853/10000 episodes, total num timesteps 170800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 854/10000 episodes, total num timesteps 171000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 855/10000 episodes, total num timesteps 171200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 856/10000 episodes, total num timesteps 171400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 857/10000 episodes, total num timesteps 171600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 858/10000 episodes, total num timesteps 171800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 859/10000 episodes, total num timesteps 172000/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 860/10000 episodes, total num timesteps 172200/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 861/10000 episodes, total num timesteps 172400/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 862/10000 episodes, total num timesteps 172600/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 863/10000 episodes, total num timesteps 172800/2000000, FPS 330.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 864/10000 episodes, total num timesteps 173000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 865/10000 episodes, total num timesteps 173200/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 866/10000 episodes, total num timesteps 173400/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 867/10000 episodes, total num timesteps 173600/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 868/10000 episodes, total num timesteps 173800/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 869/10000 episodes, total num timesteps 174000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 870/10000 episodes, total num timesteps 174200/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 871/10000 episodes, total num timesteps 174400/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 872/10000 episodes, total num timesteps 174600/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 873/10000 episodes, total num timesteps 174800/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 874/10000 episodes, total num timesteps 175000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 875/10000 episodes, total num timesteps 175200/2000000, FPS 329.

team_policy eval average step individual rewards of agent0: 0.5545755121103154
team_policy eval average team episode rewards of agent0: 77.5
team_policy eval idv catch total num of agent0: 24
team_policy eval team catch total num: 31
team_policy eval average step individual rewards of agent1: 0.6418007385581975
team_policy eval average team episode rewards of agent1: 77.5
team_policy eval idv catch total num of agent1: 27
team_policy eval team catch total num: 31
team_policy eval average step individual rewards of agent2: 0.5109230620620283
team_policy eval average team episode rewards of agent2: 77.5
team_policy eval idv catch total num of agent2: 22
team_policy eval team catch total num: 31
team_policy eval average step individual rewards of agent3: 0.45988185684602056
team_policy eval average team episode rewards of agent3: 77.5
team_policy eval idv catch total num of agent3: 20
team_policy eval team catch total num: 31
team_policy eval average step individual rewards of agent4: 0.47450897856280433
team_policy eval average team episode rewards of agent4: 77.5
team_policy eval idv catch total num of agent4: 21
team_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent0: 0.5360064696309116
idv_policy eval average team episode rewards of agent0: 55.0
idv_policy eval idv catch total num of agent0: 23
idv_policy eval team catch total num: 22
idv_policy eval average step individual rewards of agent1: 0.24507612712161972
idv_policy eval average team episode rewards of agent1: 55.0
idv_policy eval idv catch total num of agent1: 12
idv_policy eval team catch total num: 22
idv_policy eval average step individual rewards of agent2: 0.5398501635928387
idv_policy eval average team episode rewards of agent2: 55.0
idv_policy eval idv catch total num of agent2: 23
idv_policy eval team catch total num: 22
idv_policy eval average step individual rewards of agent3: 0.4875014208550338
idv_policy eval average team episode rewards of agent3: 55.0
idv_policy eval idv catch total num of agent3: 21
idv_policy eval team catch total num: 22
idv_policy eval average step individual rewards of agent4: 0.5876599130898174
idv_policy eval average team episode rewards of agent4: 55.0
idv_policy eval idv catch total num of agent4: 25
idv_policy eval team catch total num: 22
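
The per-agent step rewards in an eval block can be collapsed into a single team-level scalar for quicker comparison between checkpoints. A minimal sketch using the team_policy values from the update-875 block above:

```python
from statistics import mean

# team_policy per-agent average step individual rewards at update 875 (from the log)
step_rewards = [
    0.5545755121103154,
    0.6418007385581975,
    0.5109230620620283,
    0.45988185684602056,
    0.47450897856280433,
]
print(f"mean step reward across agents: {mean(step_rewards):.4f}")
```

The same reduction applied at each eval gives a single curve per policy, which is simpler to plot than five per-agent series.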

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 876/10000 episodes, total num timesteps 175400/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 877/10000 episodes, total num timesteps 175600/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 878/10000 episodes, total num timesteps 175800/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 879/10000 episodes, total num timesteps 176000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 880/10000 episodes, total num timesteps 176200/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 881/10000 episodes, total num timesteps 176400/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 882/10000 episodes, total num timesteps 176600/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 883/10000 episodes, total num timesteps 176800/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 884/10000 episodes, total num timesteps 177000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 885/10000 episodes, total num timesteps 177200/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 886/10000 episodes, total num timesteps 177400/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 887/10000 episodes, total num timesteps 177600/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 888/10000 episodes, total num timesteps 177800/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 889/10000 episodes, total num timesteps 178000/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 890/10000 episodes, total num timesteps 178200/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 891/10000 episodes, total num timesteps 178400/2000000, FPS 329.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 892/10000 episodes, total num timesteps 178600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 893/10000 episodes, total num timesteps 178800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 894/10000 episodes, total num timesteps 179000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 895/10000 episodes, total num timesteps 179200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 896/10000 episodes, total num timesteps 179400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 897/10000 episodes, total num timesteps 179600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 898/10000 episodes, total num timesteps 179800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 899/10000 episodes, total num timesteps 180000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 900/10000 episodes, total num timesteps 180200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.5843260429874747
team_policy eval average team episode rewards of agent0: 52.5
team_policy eval idv catch total num of agent0: 25
team_policy eval team catch total num: 21
team_policy eval average step individual rewards of agent1: 0.4876450166816245
team_policy eval average team episode rewards of agent1: 52.5
team_policy eval idv catch total num of agent1: 21
team_policy eval team catch total num: 21
team_policy eval average step individual rewards of agent2: 0.31437868193926727
team_policy eval average team episode rewards of agent2: 52.5
team_policy eval idv catch total num of agent2: 14
team_policy eval team catch total num: 21
team_policy eval average step individual rewards of agent3: 0.5656872846052473
team_policy eval average team episode rewards of agent3: 52.5
team_policy eval idv catch total num of agent3: 24
team_policy eval team catch total num: 21
team_policy eval average step individual rewards of agent4: 0.27130200200643606
team_policy eval average team episode rewards of agent4: 52.5
team_policy eval idv catch total num of agent4: 13
team_policy eval team catch total num: 21
idv_policy eval average step individual rewards of agent0: 0.4255204964285355
idv_policy eval average team episode rewards of agent0: 50.0
idv_policy eval idv catch total num of agent0: 19
idv_policy eval team catch total num: 20
idv_policy eval average step individual rewards of agent1: 0.2975366027283883
idv_policy eval average team episode rewards of agent1: 50.0
idv_policy eval idv catch total num of agent1: 14
idv_policy eval team catch total num: 20
idv_policy eval average step individual rewards of agent2: 0.3018544540976691
idv_policy eval average team episode rewards of agent2: 50.0
idv_policy eval idv catch total num of agent2: 14
idv_policy eval team catch total num: 20
idv_policy eval average step individual rewards of agent3: 0.24418364545965177
idv_policy eval average team episode rewards of agent3: 50.0
idv_policy eval idv catch total num of agent3: 12
idv_policy eval team catch total num: 20
idv_policy eval average step individual rewards of agent4: 0.30376753414106905
idv_policy eval average team episode rewards of agent4: 50.0
idv_policy eval idv catch total num of agent4: 14
idv_policy eval team catch total num: 20
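The eval blocks above follow a fixed shape: `<policy> eval <metric> of agent<i>: <value>`, with team-wide counters omitting the agent suffix. A minimal sketch of how such a block could be parsed for offline analysis — `parse_eval_log` and the regex are hypothetical helpers, not part of the training code that produced this log:

```python
import re

# Matches both per-agent lines ("... of agent0: 0.5") and team-wide
# lines ("team catch total num: 21"); the agent group is optional.
LINE_RE = re.compile(
    r"^(team_policy|idv_policy) eval (.+?)(?: of agent(\d+))?: (-?[\d.]+)$"
)

def parse_eval_log(lines):
    """Collect eval lines into {policy: {agent_id: {metric: value}}}.

    Team-wide metrics (no agent suffix) are stored under agent_id -1.
    Non-matching lines (progress banners, blanks) are skipped.
    """
    out = {}
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue
        policy, metric, agent, value = m.groups()
        agent_id = int(agent) if agent is not None else -1
        out.setdefault(policy, {}).setdefault(agent_id, {})[metric] = float(value)
    return out
```

Under these assumptions, feeding the log file through `parse_eval_log` yields one nested dict per eval dump, which makes it straightforward to plot per-agent reward trajectories instead of scanning the raw text.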

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 901/10000 episodes, total num timesteps 180400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 902/10000 episodes, total num timesteps 180600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 903/10000 episodes, total num timesteps 180800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 904/10000 episodes, total num timesteps 181000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 905/10000 episodes, total num timesteps 181200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 906/10000 episodes, total num timesteps 181400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 907/10000 episodes, total num timesteps 181600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 908/10000 episodes, total num timesteps 181800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 909/10000 episodes, total num timesteps 182000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 910/10000 episodes, total num timesteps 182200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 911/10000 episodes, total num timesteps 182400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 912/10000 episodes, total num timesteps 182600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 913/10000 episodes, total num timesteps 182800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 914/10000 episodes, total num timesteps 183000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 915/10000 episodes, total num timesteps 183200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 916/10000 episodes, total num timesteps 183400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 917/10000 episodes, total num timesteps 183600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 918/10000 episodes, total num timesteps 183800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 919/10000 episodes, total num timesteps 184000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 920/10000 episodes, total num timesteps 184200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 921/10000 episodes, total num timesteps 184400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 922/10000 episodes, total num timesteps 184600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 923/10000 episodes, total num timesteps 184800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 924/10000 episodes, total num timesteps 185000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 925/10000 episodes, total num timesteps 185200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.43184252238423676
team_policy eval average team episode rewards of agent0: 35.0
team_policy eval idv catch total num of agent0: 19
team_policy eval team catch total num: 14
team_policy eval average step individual rewards of agent1: 0.513608211193882
team_policy eval average team episode rewards of agent1: 35.0
team_policy eval idv catch total num of agent1: 22
team_policy eval team catch total num: 14
team_policy eval average step individual rewards of agent2: 0.14425633391673956
team_policy eval average team episode rewards of agent2: 35.0
team_policy eval idv catch total num of agent2: 8
team_policy eval team catch total num: 14
team_policy eval average step individual rewards of agent3: 0.16849349539717529
team_policy eval average team episode rewards of agent3: 35.0
team_policy eval idv catch total num of agent3: 9
team_policy eval team catch total num: 14
team_policy eval average step individual rewards of agent4: 0.14459705697892844
team_policy eval average team episode rewards of agent4: 35.0
team_policy eval idv catch total num of agent4: 8
team_policy eval team catch total num: 14
idv_policy eval average step individual rewards of agent0: 0.4754070010517826
idv_policy eval average team episode rewards of agent0: 82.5
idv_policy eval idv catch total num of agent0: 21
idv_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent1: 0.47977736869180077
idv_policy eval average team episode rewards of agent1: 82.5
idv_policy eval idv catch total num of agent1: 21
idv_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent2: 0.588337265000993
idv_policy eval average team episode rewards of agent2: 82.5
idv_policy eval idv catch total num of agent2: 25
idv_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent3: 0.8134370904139402
idv_policy eval average team episode rewards of agent3: 82.5
idv_policy eval idv catch total num of agent3: 34
idv_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent4: 0.5568372678848018
idv_policy eval average team episode rewards of agent4: 82.5
idv_policy eval idv catch total num of agent4: 24
idv_policy eval team catch total num: 33

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 926/10000 episodes, total num timesteps 185400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 927/10000 episodes, total num timesteps 185600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 928/10000 episodes, total num timesteps 185800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 929/10000 episodes, total num timesteps 186000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 930/10000 episodes, total num timesteps 186200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 931/10000 episodes, total num timesteps 186400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 932/10000 episodes, total num timesteps 186600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 933/10000 episodes, total num timesteps 186800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 934/10000 episodes, total num timesteps 187000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 935/10000 episodes, total num timesteps 187200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 936/10000 episodes, total num timesteps 187400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 937/10000 episodes, total num timesteps 187600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 938/10000 episodes, total num timesteps 187800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 939/10000 episodes, total num timesteps 188000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 940/10000 episodes, total num timesteps 188200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 941/10000 episodes, total num timesteps 188400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 942/10000 episodes, total num timesteps 188600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 943/10000 episodes, total num timesteps 188800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 944/10000 episodes, total num timesteps 189000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 945/10000 episodes, total num timesteps 189200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 946/10000 episodes, total num timesteps 189400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 947/10000 episodes, total num timesteps 189600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 948/10000 episodes, total num timesteps 189800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 949/10000 episodes, total num timesteps 190000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 950/10000 episodes, total num timesteps 190200/2000000, FPS 327.

team_policy eval average step individual rewards of agent0: 0.6149327576291147
team_policy eval average team episode rewards of agent0: 72.5
team_policy eval idv catch total num of agent0: 26
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent1: 0.489749755690585
team_policy eval average team episode rewards of agent1: 72.5
team_policy eval idv catch total num of agent1: 21
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent2: 0.6160427921512576
team_policy eval average team episode rewards of agent2: 72.5
team_policy eval idv catch total num of agent2: 26
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent3: 0.5182758155375047
team_policy eval average team episode rewards of agent3: 72.5
team_policy eval idv catch total num of agent3: 22
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent4: 0.5608434416115111
team_policy eval average team episode rewards of agent4: 72.5
team_policy eval idv catch total num of agent4: 24
team_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent0: 0.7333071131825072
idv_policy eval average team episode rewards of agent0: 55.0
idv_policy eval idv catch total num of agent0: 31
idv_policy eval team catch total num: 22
idv_policy eval average step individual rewards of agent1: 0.537229495427421
idv_policy eval average team episode rewards of agent1: 55.0
idv_policy eval idv catch total num of agent1: 23
idv_policy eval team catch total num: 22
idv_policy eval average step individual rewards of agent2: 0.26109989116582694
idv_policy eval average team episode rewards of agent2: 55.0
idv_policy eval idv catch total num of agent2: 12
idv_policy eval team catch total num: 22
idv_policy eval average step individual rewards of agent3: 0.4664512534511925
idv_policy eval average team episode rewards of agent3: 55.0
idv_policy eval idv catch total num of agent3: 20
idv_policy eval team catch total num: 22
idv_policy eval average step individual rewards of agent4: 0.48686851911079043
idv_policy eval average team episode rewards of agent4: 55.0
idv_policy eval idv catch total num of agent4: 21
idv_policy eval team catch total num: 22

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 951/10000 episodes, total num timesteps 190400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 952/10000 episodes, total num timesteps 190600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 953/10000 episodes, total num timesteps 190800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 954/10000 episodes, total num timesteps 191000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 955/10000 episodes, total num timesteps 191200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 956/10000 episodes, total num timesteps 191400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 957/10000 episodes, total num timesteps 191600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 958/10000 episodes, total num timesteps 191800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 959/10000 episodes, total num timesteps 192000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 960/10000 episodes, total num timesteps 192200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 961/10000 episodes, total num timesteps 192400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 962/10000 episodes, total num timesteps 192600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 963/10000 episodes, total num timesteps 192800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 964/10000 episodes, total num timesteps 193000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 965/10000 episodes, total num timesteps 193200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 966/10000 episodes, total num timesteps 193400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 967/10000 episodes, total num timesteps 193600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 968/10000 episodes, total num timesteps 193800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 969/10000 episodes, total num timesteps 194000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 970/10000 episodes, total num timesteps 194200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 971/10000 episodes, total num timesteps 194400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 972/10000 episodes, total num timesteps 194600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 973/10000 episodes, total num timesteps 194800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 974/10000 episodes, total num timesteps 195000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 975/10000 episodes, total num timesteps 195200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.4882311865690197
team_policy eval average team episode rewards of agent0: 52.5
team_policy eval idv catch total num of agent0: 21
team_policy eval team catch total num: 21
team_policy eval average step individual rewards of agent1: 0.6376534490461426
team_policy eval average team episode rewards of agent1: 52.5
team_policy eval idv catch total num of agent1: 27
team_policy eval team catch total num: 21
team_policy eval average step individual rewards of agent2: 0.4333173313911685
team_policy eval average team episode rewards of agent2: 52.5
team_policy eval idv catch total num of agent2: 19
team_policy eval team catch total num: 21
team_policy eval average step individual rewards of agent3: 0.25729936153221367
team_policy eval average team episode rewards of agent3: 52.5
team_policy eval idv catch total num of agent3: 12
team_policy eval team catch total num: 21
team_policy eval average step individual rewards of agent4: 0.46226249367063127
team_policy eval average team episode rewards of agent4: 52.5
team_policy eval idv catch total num of agent4: 20
team_policy eval team catch total num: 21
idv_policy eval average step individual rewards of agent0: 0.4362808456944432
idv_policy eval average team episode rewards of agent0: 57.5
idv_policy eval idv catch total num of agent0: 19
idv_policy eval team catch total num: 23
idv_policy eval average step individual rewards of agent1: 0.2528412980245075
idv_policy eval average team episode rewards of agent1: 57.5
idv_policy eval idv catch total num of agent1: 12
idv_policy eval team catch total num: 23
idv_policy eval average step individual rewards of agent2: 0.5554988831744853
idv_policy eval average team episode rewards of agent2: 57.5
idv_policy eval idv catch total num of agent2: 24
idv_policy eval team catch total num: 23
idv_policy eval average step individual rewards of agent3: 0.7783760049049571
idv_policy eval average team episode rewards of agent3: 57.5
idv_policy eval idv catch total num of agent3: 33
idv_policy eval team catch total num: 23
idv_policy eval average step individual rewards of agent4: 0.562149276822883
idv_policy eval average team episode rewards of agent4: 57.5
idv_policy eval idv catch total num of agent4: 24
idv_policy eval team catch total num: 23

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 976/10000 episodes, total num timesteps 195400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 977/10000 episodes, total num timesteps 195600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 978/10000 episodes, total num timesteps 195800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 979/10000 episodes, total num timesteps 196000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 980/10000 episodes, total num timesteps 196200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 981/10000 episodes, total num timesteps 196400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 982/10000 episodes, total num timesteps 196600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 983/10000 episodes, total num timesteps 196800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 984/10000 episodes, total num timesteps 197000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 985/10000 episodes, total num timesteps 197200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 986/10000 episodes, total num timesteps 197400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 987/10000 episodes, total num timesteps 197600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 988/10000 episodes, total num timesteps 197800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 989/10000 episodes, total num timesteps 198000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 990/10000 episodes, total num timesteps 198200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 991/10000 episodes, total num timesteps 198400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 992/10000 episodes, total num timesteps 198600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 993/10000 episodes, total num timesteps 198800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 994/10000 episodes, total num timesteps 199000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 995/10000 episodes, total num timesteps 199200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 996/10000 episodes, total num timesteps 199400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 997/10000 episodes, total num timesteps 199600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 998/10000 episodes, total num timesteps 199800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 999/10000 episodes, total num timesteps 200000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1000/10000 episodes, total num timesteps 200200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.4241971885248562
team_policy eval average team episode rewards of agent0: 72.5
team_policy eval idv catch total num of agent0: 19
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent1: 0.46177768606785363
team_policy eval average team episode rewards of agent1: 72.5
team_policy eval idv catch total num of agent1: 20
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent2: 0.45287439886899
team_policy eval average team episode rewards of agent2: 72.5
team_policy eval idv catch total num of agent2: 20
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent3: 0.4806458862939867
team_policy eval average team episode rewards of agent3: 72.5
team_policy eval idv catch total num of agent3: 21
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent4: 0.5001498096192609
team_policy eval average team episode rewards of agent4: 72.5
team_policy eval idv catch total num of agent4: 22
team_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent0: 0.36180220159454657
idv_policy eval average team episode rewards of agent0: 75.0
idv_policy eval idv catch total num of agent0: 16
idv_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent1: 0.4588460169034038
idv_policy eval average team episode rewards of agent1: 75.0
idv_policy eval idv catch total num of agent1: 20
idv_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent2: 0.45623293259162984
idv_policy eval average team episode rewards of agent2: 75.0
idv_policy eval idv catch total num of agent2: 20
idv_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent3: 0.7422695050730923
idv_policy eval average team episode rewards of agent3: 75.0
idv_policy eval idv catch total num of agent3: 31
idv_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent4: 0.40921225800720584
idv_policy eval average team episode rewards of agent4: 75.0
idv_policy eval idv catch total num of agent4: 18
idv_policy eval team catch total num: 30

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1001/10000 episodes, total num timesteps 200400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1002/10000 episodes, total num timesteps 200600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1003/10000 episodes, total num timesteps 200800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1004/10000 episodes, total num timesteps 201000/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1005/10000 episodes, total num timesteps 201200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1006/10000 episodes, total num timesteps 201400/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1007/10000 episodes, total num timesteps 201600/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1008/10000 episodes, total num timesteps 201800/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1009/10000 episodes, total num timesteps 202000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1010/10000 episodes, total num timesteps 202200/2000000, FPS 327.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1011/10000 episodes, total num timesteps 202400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1012/10000 episodes, total num timesteps 202600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1013/10000 episodes, total num timesteps 202800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1014/10000 episodes, total num timesteps 203000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1015/10000 episodes, total num timesteps 203200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1016/10000 episodes, total num timesteps 203400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1017/10000 episodes, total num timesteps 203600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1018/10000 episodes, total num timesteps 203800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1019/10000 episodes, total num timesteps 204000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1020/10000 episodes, total num timesteps 204200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1021/10000 episodes, total num timesteps 204400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1022/10000 episodes, total num timesteps 204600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1023/10000 episodes, total num timesteps 204800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1024/10000 episodes, total num timesteps 205000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1025/10000 episodes, total num timesteps 205200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.5630970535342182
team_policy eval average team episode rewards of agent0: 100.0
team_policy eval idv catch total num of agent0: 24
team_policy eval team catch total num: 40
team_policy eval average step individual rewards of agent1: 0.7839230805161854
team_policy eval average team episode rewards of agent1: 100.0
team_policy eval idv catch total num of agent1: 33
team_policy eval team catch total num: 40
team_policy eval average step individual rewards of agent2: 0.8701758238648882
team_policy eval average team episode rewards of agent2: 100.0
team_policy eval idv catch total num of agent2: 36
team_policy eval team catch total num: 40
team_policy eval average step individual rewards of agent3: 0.7641596192423534
team_policy eval average team episode rewards of agent3: 100.0
team_policy eval idv catch total num of agent3: 32
team_policy eval team catch total num: 40
team_policy eval average step individual rewards of agent4: 0.6426117232897594
team_policy eval average team episode rewards of agent4: 100.0
team_policy eval idv catch total num of agent4: 27
team_policy eval team catch total num: 40
idv_policy eval average step individual rewards of agent0: 0.5898213441824431
idv_policy eval average team episode rewards of agent0: 77.5
idv_policy eval idv catch total num of agent0: 25
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent1: 0.6385813089009837
idv_policy eval average team episode rewards of agent1: 77.5
idv_policy eval idv catch total num of agent1: 27
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent2: 0.3269197379884198
idv_policy eval average team episode rewards of agent2: 77.5
idv_policy eval idv catch total num of agent2: 15
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent3: 0.6641686769877297
idv_policy eval average team episode rewards of agent3: 77.5
idv_policy eval idv catch total num of agent3: 28
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent4: 0.3094391683693681
idv_policy eval average team episode rewards of agent4: 77.5
idv_policy eval idv catch total num of agent4: 14
idv_policy eval team catch total num: 31

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1026/10000 episodes, total num timesteps 205400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1027/10000 episodes, total num timesteps 205600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1028/10000 episodes, total num timesteps 205800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1029/10000 episodes, total num timesteps 206000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1030/10000 episodes, total num timesteps 206200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1031/10000 episodes, total num timesteps 206400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1032/10000 episodes, total num timesteps 206600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1033/10000 episodes, total num timesteps 206800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1034/10000 episodes, total num timesteps 207000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1035/10000 episodes, total num timesteps 207200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1036/10000 episodes, total num timesteps 207400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1037/10000 episodes, total num timesteps 207600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1038/10000 episodes, total num timesteps 207800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1039/10000 episodes, total num timesteps 208000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1040/10000 episodes, total num timesteps 208200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1041/10000 episodes, total num timesteps 208400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1042/10000 episodes, total num timesteps 208600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1043/10000 episodes, total num timesteps 208800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1044/10000 episodes, total num timesteps 209000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1045/10000 episodes, total num timesteps 209200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1046/10000 episodes, total num timesteps 209400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1047/10000 episodes, total num timesteps 209600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1048/10000 episodes, total num timesteps 209800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1049/10000 episodes, total num timesteps 210000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1050/10000 episodes, total num timesteps 210200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.9367008446704713
team_policy eval average team episode rewards of agent0: 100.0
team_policy eval idv catch total num of agent0: 39
team_policy eval team catch total num: 40
team_policy eval average step individual rewards of agent1: 0.9493974288156478
team_policy eval average team episode rewards of agent1: 100.0
team_policy eval idv catch total num of agent1: 39
team_policy eval team catch total num: 40
team_policy eval average step individual rewards of agent2: 0.913408796687747
team_policy eval average team episode rewards of agent2: 100.0
team_policy eval idv catch total num of agent2: 38
team_policy eval team catch total num: 40
team_policy eval average step individual rewards of agent3: 0.500403436365704
team_policy eval average team episode rewards of agent3: 100.0
team_policy eval idv catch total num of agent3: 22
team_policy eval team catch total num: 40
team_policy eval average step individual rewards of agent4: 0.5852542088870356
team_policy eval average team episode rewards of agent4: 100.0
team_policy eval idv catch total num of agent4: 25
team_policy eval team catch total num: 40
idv_policy eval average step individual rewards of agent0: 0.5636163858395375
idv_policy eval average team episode rewards of agent0: 77.5
idv_policy eval idv catch total num of agent0: 24
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent1: 0.5444145459375045
idv_policy eval average team episode rewards of agent1: 77.5
idv_policy eval idv catch total num of agent1: 23
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent2: 0.8429772524557001
idv_policy eval average team episode rewards of agent2: 77.5
idv_policy eval idv catch total num of agent2: 35
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent3: 0.4911331192061019
idv_policy eval average team episode rewards of agent3: 77.5
idv_policy eval idv catch total num of agent3: 21
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent4: 0.6352106939345076
idv_policy eval average team episode rewards of agent4: 77.5
idv_policy eval idv catch total num of agent4: 27
idv_policy eval team catch total num: 31

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1051/10000 episodes, total num timesteps 210400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1052/10000 episodes, total num timesteps 210600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1053/10000 episodes, total num timesteps 210800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1054/10000 episodes, total num timesteps 211000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1055/10000 episodes, total num timesteps 211200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1056/10000 episodes, total num timesteps 211400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1057/10000 episodes, total num timesteps 211600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1058/10000 episodes, total num timesteps 211800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1059/10000 episodes, total num timesteps 212000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1060/10000 episodes, total num timesteps 212200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1061/10000 episodes, total num timesteps 212400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1062/10000 episodes, total num timesteps 212600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1063/10000 episodes, total num timesteps 212800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1064/10000 episodes, total num timesteps 213000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1065/10000 episodes, total num timesteps 213200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1066/10000 episodes, total num timesteps 213400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1067/10000 episodes, total num timesteps 213600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1068/10000 episodes, total num timesteps 213800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1069/10000 episodes, total num timesteps 214000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1070/10000 episodes, total num timesteps 214200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1071/10000 episodes, total num timesteps 214400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1072/10000 episodes, total num timesteps 214600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1073/10000 episodes, total num timesteps 214800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1074/10000 episodes, total num timesteps 215000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1075/10000 episodes, total num timesteps 215200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.711912282637404
team_policy eval average team episode rewards of agent0: 85.0
team_policy eval idv catch total num of agent0: 30
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent1: 0.4115850371610151
team_policy eval average team episode rewards of agent1: 85.0
team_policy eval idv catch total num of agent1: 18
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent2: 0.38991028864707616
team_policy eval average team episode rewards of agent2: 85.0
team_policy eval idv catch total num of agent2: 17
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent3: 0.5837268717067726
team_policy eval average team episode rewards of agent3: 85.0
team_policy eval idv catch total num of agent3: 25
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent4: 0.7835481861334856
team_policy eval average team episode rewards of agent4: 85.0
team_policy eval idv catch total num of agent4: 33
team_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent0: 0.8630219021056729
idv_policy eval average team episode rewards of agent0: 122.5
idv_policy eval idv catch total num of agent0: 36
idv_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent1: 1.2184780576835281
idv_policy eval average team episode rewards of agent1: 122.5
idv_policy eval idv catch total num of agent1: 50
idv_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent2: 0.9155978377013644
idv_policy eval average team episode rewards of agent2: 122.5
idv_policy eval idv catch total num of agent2: 38
idv_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent3: 0.737027384563668
idv_policy eval average team episode rewards of agent3: 122.5
idv_policy eval idv catch total num of agent3: 31
idv_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent4: 1.0384587499496893
idv_policy eval average team episode rewards of agent4: 122.5
idv_policy eval idv catch total num of agent4: 43
idv_policy eval team catch total num: 49

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1076/10000 episodes, total num timesteps 215400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1077/10000 episodes, total num timesteps 215600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1078/10000 episodes, total num timesteps 215800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1079/10000 episodes, total num timesteps 216000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1080/10000 episodes, total num timesteps 216200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1081/10000 episodes, total num timesteps 216400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1082/10000 episodes, total num timesteps 216600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1083/10000 episodes, total num timesteps 216800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1084/10000 episodes, total num timesteps 217000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1085/10000 episodes, total num timesteps 217200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1086/10000 episodes, total num timesteps 217400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1087/10000 episodes, total num timesteps 217600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1088/10000 episodes, total num timesteps 217800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1089/10000 episodes, total num timesteps 218000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1090/10000 episodes, total num timesteps 218200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1091/10000 episodes, total num timesteps 218400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1092/10000 episodes, total num timesteps 218600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1093/10000 episodes, total num timesteps 218800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1094/10000 episodes, total num timesteps 219000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1095/10000 episodes, total num timesteps 219200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1096/10000 episodes, total num timesteps 219400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1097/10000 episodes, total num timesteps 219600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1098/10000 episodes, total num timesteps 219800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1099/10000 episodes, total num timesteps 220000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1100/10000 episodes, total num timesteps 220200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.37958222741753794
team_policy eval average team episode rewards of agent0: 72.5
team_policy eval idv catch total num of agent0: 17
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent1: 0.3063920733028553
team_policy eval average team episode rewards of agent1: 72.5
team_policy eval idv catch total num of agent1: 14
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent2: 0.4906434183120665
team_policy eval average team episode rewards of agent2: 72.5
team_policy eval idv catch total num of agent2: 21
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent3: 0.7419050195721067
team_policy eval average team episode rewards of agent3: 72.5
team_policy eval idv catch total num of agent3: 31
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent4: 0.7187216735768672
team_policy eval average team episode rewards of agent4: 72.5
team_policy eval idv catch total num of agent4: 30
team_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent0: 0.4736742265299775
idv_policy eval average team episode rewards of agent0: 77.5
idv_policy eval idv catch total num of agent0: 21
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent1: 0.6135289038822309
idv_policy eval average team episode rewards of agent1: 77.5
idv_policy eval idv catch total num of agent1: 26
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent2: 0.6794210422311002
idv_policy eval average team episode rewards of agent2: 77.5
idv_policy eval idv catch total num of agent2: 29
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent3: 0.5513413826942297
idv_policy eval average team episode rewards of agent3: 77.5
idv_policy eval idv catch total num of agent3: 24
idv_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent4: 0.3239268260066147
idv_policy eval average team episode rewards of agent4: 77.5
idv_policy eval idv catch total num of agent4: 15
idv_policy eval team catch total num: 31

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1101/10000 episodes, total num timesteps 220400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1102/10000 episodes, total num timesteps 220600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1103/10000 episodes, total num timesteps 220800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1104/10000 episodes, total num timesteps 221000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1105/10000 episodes, total num timesteps 221200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1106/10000 episodes, total num timesteps 221400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1107/10000 episodes, total num timesteps 221600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1108/10000 episodes, total num timesteps 221800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1109/10000 episodes, total num timesteps 222000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1110/10000 episodes, total num timesteps 222200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1111/10000 episodes, total num timesteps 222400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1112/10000 episodes, total num timesteps 222600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1113/10000 episodes, total num timesteps 222800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1114/10000 episodes, total num timesteps 223000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1115/10000 episodes, total num timesteps 223200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1116/10000 episodes, total num timesteps 223400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1117/10000 episodes, total num timesteps 223600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1118/10000 episodes, total num timesteps 223800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1119/10000 episodes, total num timesteps 224000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1120/10000 episodes, total num timesteps 224200/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1121/10000 episodes, total num timesteps 224400/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1122/10000 episodes, total num timesteps 224600/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1123/10000 episodes, total num timesteps 224800/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1124/10000 episodes, total num timesteps 225000/2000000, FPS 328.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1125/10000 episodes, total num timesteps 225200/2000000, FPS 328.

team_policy eval average step individual rewards of agent0: 0.517667823713275
team_policy eval average team episode rewards of agent0: 80.0
team_policy eval idv catch total num of agent0: 22
team_policy eval team catch total num: 32
team_policy eval average step individual rewards of agent1: 0.4666381660915144
team_policy eval average team episode rewards of agent1: 80.0
team_policy eval idv catch total num of agent1: 20
team_policy eval team catch total num: 32
team_policy eval average step individual rewards of agent2: 0.7421256473156282
team_policy eval average team episode rewards of agent2: 80.0
team_policy eval idv catch total num of agent2: 31
team_policy eval team catch total num: 32
team_policy eval average step individual rewards of agent3: 0.6100129746129425
team_policy eval average team episode rewards of agent3: 80.0
team_policy eval idv catch total num of agent3: 26
team_policy eval team catch total num: 32
team_policy eval average step individual rewards of agent4: 0.7925908370564039
team_policy eval average team episode rewards of agent4: 80.0
team_policy eval idv catch total num of agent4: 33
team_policy eval team catch total num: 32
idv_policy eval average step individual rewards of agent0: 0.7444150371250852
idv_policy eval average team episode rewards of agent0: 72.5
idv_policy eval idv catch total num of agent0: 31
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent1: 0.5119127105695579
idv_policy eval average team episode rewards of agent1: 72.5
idv_policy eval idv catch total num of agent1: 22
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent2: 0.6675967014805062
idv_policy eval average team episode rewards of agent2: 72.5
idv_policy eval idv catch total num of agent2: 28
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent3: 0.3536023986192946
idv_policy eval average team episode rewards of agent3: 72.5
idv_policy eval idv catch total num of agent3: 16
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent4: 0.5069431464820677
idv_policy eval average team episode rewards of agent4: 72.5
idv_policy eval idv catch total num of agent4: 22
idv_policy eval team catch total num: 29

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1126-1150/10000 episodes, total num timesteps 225400-230200/2000000 (200 per update), FPS 327-328.

team_policy eval (after update 1150): average team episode reward 110.0, team catch total 44
  agent | avg step individual reward | idv catch total
  0     | 0.8197442982142873         | 34
  1     | 1.0429353192285642         | 43
  2     | 1.2692259277177032         | 52
  3     | 0.5667110869388076         | 24
  4     | 0.2563186873091684         | 12
idv_policy eval (after update 1150): average team episode reward 120.0, team catch total 48
  agent | avg step individual reward | idv catch total
  0     | 0.6928894379962819         | 29
  1     | 0.894994424303315          | 37
  2     | 0.8157370895446013         | 34
  3     | 0.5631976055568896         | 24
  4     | 0.9218178531828255         | 38
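The per-agent eval lines above follow a fixed textual pattern, so they can be pulled back into structured form with a couple of regexes. A minimal sketch, assuming the logger's wording stays exactly as printed here (`parse_eval` and the field names are hypothetical helpers, not part of the training code):

```python
import re

# Patterns inferred from the eval lines in this log; adjust if the
# logger changes its wording.
PATTERNS = {
    "step_reward": re.compile(
        r"(team|idv)_policy eval average step individual rewards of agent(\d+): ([-\d.]+)"
    ),
    "idv_catch": re.compile(
        r"(team|idv)_policy eval idv catch total num of agent(\d+): (\d+)"
    ),
}

def parse_eval(log_text):
    """Return {policy: {agent_id: {"step_reward": float, "idv_catch": int}}}."""
    out = {}
    for name, pat in PATTERNS.items():
        for policy, agent, value in pat.findall(log_text):
            rec = out.setdefault(policy, {}).setdefault(int(agent), {})
            rec[name] = float(value) if name == "step_reward" else int(value)
    return out

# Two lines copied verbatim from the eval block above.
sample = (
    "team_policy eval average step individual rewards of agent0: 0.8197442982142873\n"
    "team_policy eval idv catch total num of agent0: 34\n"
)
parsed = parse_eval(sample)
```

The same parser applies unchanged to every eval block in this run, since `team_policy` and `idv_policy` share one line format.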

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1151-1175/10000 episodes, total num timesteps 230400-235200/2000000 (200 per update), FPS 327.

team_policy eval (after update 1175): average team episode reward 102.5, team catch total 41
  agent | avg step individual reward | idv catch total
  0     | 0.6871021045266608         | 29
  1     | 0.8944648650120706         | 37
  2     | 0.7948734990363233         | 33
  3     | 0.48515822637565637        | 21
  4     | 0.5603158803972462         | 24
idv_policy eval (after update 1175): average team episode reward 105.0, team catch total 42
  agent | avg step individual reward | idv catch total
  0     | 0.7898749249055554         | 33
  1     | 0.817108221985711          | 34
  2     | 0.7869358601777298         | 33
  3     | 1.0731414740654859         | 44
  4     | 0.6618045967643471         | 28

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1176-1200/10000 episodes, total num timesteps 235400-240200/2000000 (200 per update), FPS 326-327.

team_policy eval (after update 1200): average team episode reward 122.5, team catch total 49
  agent | avg step individual reward | idv catch total
  0     | 0.4565110397215843         | 20
  1     | 0.8458545728919643         | 35
  2     | 0.7411974206188782         | 31
  3     | 1.0267139076979952         | 42
  4     | 0.5636026547072657         | 24
idv_policy eval (after update 1200): average team episode reward 97.5, team catch total 39
  agent | avg step individual reward | idv catch total
  0     | 0.6866623863845186         | 29
  1     | 0.6339491134817148         | 27
  2     | 0.7895078767292338         | 33
  3     | 0.5181178056033328         | 22
  4     | 0.8067738797588043         | 34
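In every eval block logged so far, the average team episode reward is exactly 2.5 times the team catch total (110.0 = 2.5 × 44, 97.5 = 2.5 × 39, and so on), which would be consistent with a fixed per-catch team reward. This is an inference from the printed numbers, not from the training code; a quick check over the (reward, catch) pairs copied from the blocks above:

```python
# (average team episode reward, team catch total) pairs copied from
# the eval blocks in this log, in order of appearance.
pairs = [(72.5, 29), (110.0, 44), (120.0, 48),
         (102.5, 41), (105.0, 42), (122.5, 49), (97.5, 39)]

# Hypothesis (inferred from the numbers only): each team catch
# contributes a fixed 2.5 to the team episode reward.
ratios = [reward / catches for reward, catches in pairs]
assert all(abs(r - 2.5) < 1e-9 for r in ratios)
```

If the relation holds by construction, the team episode reward and team catch total carry the same information up to scale.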

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1201-1225/10000 episodes, total num timesteps 240400-245200/2000000 (200 per update), FPS 326.

team_policy eval (after update 1225): average team episode reward 95.0, team catch total 38
  agent | avg step individual reward | idv catch total
  0     | 0.6085446966696092         | 26
  1     | 0.7389024182861063         | 31
  2     | 0.6906411985396577         | 29
  3     | 0.3210715385497997         | 15
  4     | 0.811650022725357          | 34
idv_policy eval (after update 1225): average team episode reward 90.0, team catch total 36
  agent | avg step individual reward | idv catch total
  0     | 0.5114018838794869         | 22
  1     | 0.4915078662776785         | 21
  2     | 0.6695479877851972         | 28
  3     | 0.6890586334115095         | 29
  4     | 0.9929591839927803         | 41

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1226-1250/10000 episodes, total num timesteps 245400-250200/2000000 (200 per update), FPS 326.

team_policy eval (after update 1250): average team episode reward 85.0, team catch total 34
  agent | avg step individual reward | idv catch total
  0     | 0.7709722455019045         | 32
  1     | 0.5325883152042914         | 23
  2     | 0.3458610246325812         | 16
  3     | 0.8064125000817498         | 34
  4     | 0.6856658460515251         | 29
idv_policy eval (after update 1250): average team episode reward 75.0, team catch total 30
  agent | avg step individual reward | idv catch total
  0     | 0.1812353894697619         | 9
  1     | 0.7372548412764525         | 31
  2     | 0.6634342180582253         | 28
  3     | 0.6859040828070663         | 29
  4     | 0.8639452679987957         | 36
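Reading the team catch totals across the five eval points in this section, the idv_policy numbers fall at every evaluation over this window while the team_policy numbers fluctuate. A small sketch that tabulates the values copied from the blocks above (the `evals` dict is just a hand-built container for this log's numbers, not an artifact of the run):

```python
# Team catch totals at eval points after updates 1150-1250,
# copied from the eval blocks in this log.
evals = {
    1150: {"team_policy": 44, "idv_policy": 48},
    1175: {"team_policy": 41, "idv_policy": 42},
    1200: {"team_policy": 49, "idv_policy": 39},
    1225: {"team_policy": 38, "idv_policy": 36},
    1250: {"team_policy": 34, "idv_policy": 30},
}

idv_series = [v["idv_policy"] for _, v in sorted(evals.items())]
team_series = [v["team_policy"] for _, v in sorted(evals.items())]

# idv_policy catches decline monotonically over this window;
# team_policy catches do not.
assert idv_series == sorted(idv_series, reverse=True)
assert team_series != sorted(team_series, reverse=True)
```

Five eval points are too few to call this a trend; it is only what this slice of the log shows.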

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1251-1274/10000 episodes, total num timesteps 250400-255000/2000000 (200 per update), FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1275/10000 episodes, total num timesteps 255200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.8063089993916159
team_policy eval average team episode rewards of agent0: 102.5
team_policy eval idv catch total num of agent0: 34
team_policy eval team catch total num: 41
team_policy eval average step individual rewards of agent1: 0.43818157709020444
team_policy eval average team episode rewards of agent1: 102.5
team_policy eval idv catch total num of agent1: 19
team_policy eval team catch total num: 41
team_policy eval average step individual rewards of agent2: 0.5148524793792375
team_policy eval average team episode rewards of agent2: 102.5
team_policy eval idv catch total num of agent2: 23
team_policy eval team catch total num: 41
team_policy eval average step individual rewards of agent3: 0.6599622521497268
team_policy eval average team episode rewards of agent3: 102.5
team_policy eval idv catch total num of agent3: 28
team_policy eval team catch total num: 41
team_policy eval average step individual rewards of agent4: 0.6294918765084251
team_policy eval average team episode rewards of agent4: 102.5
team_policy eval idv catch total num of agent4: 27
team_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent0: 0.5160024860261885
idv_policy eval average team episode rewards of agent0: 82.5
idv_policy eval idv catch total num of agent0: 22
idv_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent1: 0.8057615885433879
idv_policy eval average team episode rewards of agent1: 82.5
idv_policy eval idv catch total num of agent1: 34
idv_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent2: 0.3141509517448499
idv_policy eval average team episode rewards of agent2: 82.5
idv_policy eval idv catch total num of agent2: 14
idv_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent3: 0.8383068839664335
idv_policy eval average team episode rewards of agent3: 82.5
idv_policy eval idv catch total num of agent3: 35
idv_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent4: 0.9154631799341587
idv_policy eval average team episode rewards of agent4: 82.5
idv_policy eval idv catch total num of agent4: 38
idv_policy eval team catch total num: 33

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1276/10000 episodes, total num timesteps 255400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1277/10000 episodes, total num timesteps 255600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1278/10000 episodes, total num timesteps 255800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1279/10000 episodes, total num timesteps 256000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1280/10000 episodes, total num timesteps 256200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1281/10000 episodes, total num timesteps 256400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1282/10000 episodes, total num timesteps 256600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1283/10000 episodes, total num timesteps 256800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1284/10000 episodes, total num timesteps 257000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1285/10000 episodes, total num timesteps 257200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1286/10000 episodes, total num timesteps 257400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1287/10000 episodes, total num timesteps 257600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1288/10000 episodes, total num timesteps 257800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1289/10000 episodes, total num timesteps 258000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1290/10000 episodes, total num timesteps 258200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1291/10000 episodes, total num timesteps 258400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1292/10000 episodes, total num timesteps 258600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1293/10000 episodes, total num timesteps 258800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1294/10000 episodes, total num timesteps 259000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1295/10000 episodes, total num timesteps 259200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1296/10000 episodes, total num timesteps 259400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1297/10000 episodes, total num timesteps 259600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1298/10000 episodes, total num timesteps 259800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1299/10000 episodes, total num timesteps 260000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1300/10000 episodes, total num timesteps 260200/2000000, FPS 326.
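The progress lines above also follow one fixed template, so the counters can be pulled out the same way. A sketch, with our own field names; the ETA is a rough estimate assuming the reported FPS stays constant:

```python
import re

# Matches the tail of a progress line, e.g.
# "... updates 1300/10000 episodes, total num timesteps 260200/2000000, FPS 326."
PROGRESS_RE = re.compile(
    r"updates (\d+)/(\d+) episodes, total num timesteps (\d+)/(\d+), FPS (\d+)\."
)

def parse_progress(line):
    """Return the counters from one progress line, or None if it is not one."""
    m = PROGRESS_RE.search(line)
    if m is None:
        return None
    updates, max_updates, steps, max_steps, fps = map(int, m.groups())
    return {
        "updates": updates,
        "max_updates": max_updates,
        "timesteps": steps,
        "max_timesteps": max_steps,
        "fps": fps,
        # Remaining wall-clock seconds if throughput holds at the current FPS.
        "eta_seconds": (max_steps - steps) / fps,
    }
```

At the state logged here (260200/2000000 timesteps at 326 FPS) that ETA comes out to roughly an hour and a half of remaining training.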

team_policy eval average step individual rewards of agent0: 0.261804603799858
team_policy eval average team episode rewards of agent0: 40.0
team_policy eval idv catch total num of agent0: 12
team_policy eval team catch total num: 16
team_policy eval average step individual rewards of agent1: 0.5109616746678713
team_policy eval average team episode rewards of agent1: 40.0
team_policy eval idv catch total num of agent1: 22
team_policy eval team catch total num: 16
team_policy eval average step individual rewards of agent2: 0.16766522252927055
team_policy eval average team episode rewards of agent2: 40.0
team_policy eval idv catch total num of agent2: 9
team_policy eval team catch total num: 16
team_policy eval average step individual rewards of agent3: 0.4635730932229079
team_policy eval average team episode rewards of agent3: 40.0
team_policy eval idv catch total num of agent3: 20
team_policy eval team catch total num: 16
team_policy eval average step individual rewards of agent4: 0.5939393129771399
team_policy eval average team episode rewards of agent4: 40.0
team_policy eval idv catch total num of agent4: 26
team_policy eval team catch total num: 16
idv_policy eval average step individual rewards of agent0: 0.7094389483008253
idv_policy eval average team episode rewards of agent0: 97.5
idv_policy eval idv catch total num of agent0: 30
idv_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent1: 0.8361166911832741
idv_policy eval average team episode rewards of agent1: 97.5
idv_policy eval idv catch total num of agent1: 35
idv_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent2: 0.709686434424721
idv_policy eval average team episode rewards of agent2: 97.5
idv_policy eval idv catch total num of agent2: 30
idv_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent3: 0.9860865994023703
idv_policy eval average team episode rewards of agent3: 97.5
idv_policy eval idv catch total num of agent3: 41
idv_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent4: 0.46900304843445667
idv_policy eval average team episode rewards of agent4: 97.5
idv_policy eval idv catch total num of agent4: 21
idv_policy eval team catch total num: 39

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1301/10000 episodes, total num timesteps 260400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1302/10000 episodes, total num timesteps 260600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1303/10000 episodes, total num timesteps 260800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1304/10000 episodes, total num timesteps 261000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1305/10000 episodes, total num timesteps 261200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1306/10000 episodes, total num timesteps 261400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1307/10000 episodes, total num timesteps 261600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1308/10000 episodes, total num timesteps 261800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1309/10000 episodes, total num timesteps 262000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1310/10000 episodes, total num timesteps 262200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1311/10000 episodes, total num timesteps 262400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1312/10000 episodes, total num timesteps 262600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1313/10000 episodes, total num timesteps 262800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1314/10000 episodes, total num timesteps 263000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1315/10000 episodes, total num timesteps 263200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1316/10000 episodes, total num timesteps 263400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1317/10000 episodes, total num timesteps 263600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1318/10000 episodes, total num timesteps 263800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1319/10000 episodes, total num timesteps 264000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1320/10000 episodes, total num timesteps 264200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1321/10000 episodes, total num timesteps 264400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1322/10000 episodes, total num timesteps 264600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1323/10000 episodes, total num timesteps 264800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1324/10000 episodes, total num timesteps 265000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1325/10000 episodes, total num timesteps 265200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.7428614442877979
team_policy eval average team episode rewards of agent0: 127.5
team_policy eval idv catch total num of agent0: 31
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent1: 0.6545913429370862
team_policy eval average team episode rewards of agent1: 127.5
team_policy eval idv catch total num of agent1: 28
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent2: 0.8643418648742811
team_policy eval average team episode rewards of agent2: 127.5
team_policy eval idv catch total num of agent2: 36
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent3: 0.9700931132744732
team_policy eval average team episode rewards of agent3: 127.5
team_policy eval idv catch total num of agent3: 40
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent4: 1.063367946956337
team_policy eval average team episode rewards of agent4: 127.5
team_policy eval idv catch total num of agent4: 44
team_policy eval team catch total num: 51
idv_policy eval average step individual rewards of agent0: 0.7590336149983911
idv_policy eval average team episode rewards of agent0: 87.5
idv_policy eval idv catch total num of agent0: 32
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent1: 0.4873095317069132
idv_policy eval average team episode rewards of agent1: 87.5
idv_policy eval idv catch total num of agent1: 21
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent2: 0.8483903606353919
idv_policy eval average team episode rewards of agent2: 87.5
idv_policy eval idv catch total num of agent2: 35
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent3: 0.4937739964323032
idv_policy eval average team episode rewards of agent3: 87.5
idv_policy eval idv catch total num of agent3: 21
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent4: 0.8944195626065085
idv_policy eval average team episode rewards of agent4: 87.5
idv_policy eval idv catch total num of agent4: 37
idv_policy eval team catch total num: 35
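In every eval block of this excerpt, the printed "average team episode rewards" equals exactly 2.5 × the "team catch total num" (41 → 102.5, 16 → 40.0, 51 → 127.5, and so on). That suggests the team reward is a fixed per-catch bonus averaged over the eval episodes; the 2.5 factor is inferred purely from the printed pairs, not read from the code base. A quick consistency check under that reading:

```python
def team_reward_from_catches(team_catch_total, reward_per_catch=2.5):
    """Inferred relation: team episode reward = 2.5 * team catch total (this log only)."""
    return reward_per_catch * team_catch_total

# (team catch total, team episode reward) pairs copied from the blocks above.
observed = [(41, 102.5), (33, 82.5), (16, 40.0), (39, 97.5), (51, 127.5), (35, 87.5)]
assert all(team_reward_from_catches(c) == r for c, r in observed)
```

If a later eval block breaks this relation, that would indicate the reward scheme (or the number of eval episodes) changed mid-run.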

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1326/10000 episodes, total num timesteps 265400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1327/10000 episodes, total num timesteps 265600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1328/10000 episodes, total num timesteps 265800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1329/10000 episodes, total num timesteps 266000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1330/10000 episodes, total num timesteps 266200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1331/10000 episodes, total num timesteps 266400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1332/10000 episodes, total num timesteps 266600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1333/10000 episodes, total num timesteps 266800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1334/10000 episodes, total num timesteps 267000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1335/10000 episodes, total num timesteps 267200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1336/10000 episodes, total num timesteps 267400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1337/10000 episodes, total num timesteps 267600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1338/10000 episodes, total num timesteps 267800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1339/10000 episodes, total num timesteps 268000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1340/10000 episodes, total num timesteps 268200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1341/10000 episodes, total num timesteps 268400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1342/10000 episodes, total num timesteps 268600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1343/10000 episodes, total num timesteps 268800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1344/10000 episodes, total num timesteps 269000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1345/10000 episodes, total num timesteps 269200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1346/10000 episodes, total num timesteps 269400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1347/10000 episodes, total num timesteps 269600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1348/10000 episodes, total num timesteps 269800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1349/10000 episodes, total num timesteps 270000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1350/10000 episodes, total num timesteps 270200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.3809136051399311
team_policy eval average team episode rewards of agent0: 67.5
team_policy eval idv catch total num of agent0: 17
team_policy eval team catch total num: 27
team_policy eval average step individual rewards of agent1: 0.5113119730016967
team_policy eval average team episode rewards of agent1: 67.5
team_policy eval idv catch total num of agent1: 22
team_policy eval team catch total num: 27
team_policy eval average step individual rewards of agent2: 0.8469614620094886
team_policy eval average team episode rewards of agent2: 67.5
team_policy eval idv catch total num of agent2: 35
team_policy eval team catch total num: 27
team_policy eval average step individual rewards of agent3: 0.3810685140123122
team_policy eval average team episode rewards of agent3: 67.5
team_policy eval idv catch total num of agent3: 17
team_policy eval team catch total num: 27
team_policy eval average step individual rewards of agent4: 0.5630868067263022
team_policy eval average team episode rewards of agent4: 67.5
team_policy eval idv catch total num of agent4: 24
team_policy eval team catch total num: 27
idv_policy eval average step individual rewards of agent0: 0.4888338116209168
idv_policy eval average team episode rewards of agent0: 90.0
idv_policy eval idv catch total num of agent0: 21
idv_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent1: 0.7650838207764454
idv_policy eval average team episode rewards of agent1: 90.0
idv_policy eval idv catch total num of agent1: 32
idv_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent2: 0.5372516972531727
idv_policy eval average team episode rewards of agent2: 90.0
idv_policy eval idv catch total num of agent2: 23
idv_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent3: 0.5684174199310583
idv_policy eval average team episode rewards of agent3: 90.0
idv_policy eval idv catch total num of agent3: 24
idv_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent4: 0.946852495590731
idv_policy eval average team episode rewards of agent4: 90.0
idv_policy eval idv catch total num of agent4: 39
idv_policy eval team catch total num: 36
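The timestep counter advances by exactly 200 per update throughout this chunk (update 1251 → 250400, update 1400 → 280200), i.e. timesteps = (updates + 1) × 200, with the +1 coming from the very first report at the head of the log ("updates 0 ... timesteps 200/2000000"). A sketch of that invariant — inferred from the printed counters, not from the training code — which is handy for spotting dropped or truncated log lines:

```python
def expected_timesteps(updates, steps_per_update=200):
    """Timesteps this log should report at a given update index.

    The head of the log shows 200 timesteps already at update 0, hence the +1.
    """
    return (updates + 1) * steps_per_update
```

Checking each parsed progress line against this function will flag any gap in the log.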

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1351/10000 episodes, total num timesteps 270400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1352/10000 episodes, total num timesteps 270600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1353/10000 episodes, total num timesteps 270800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1354/10000 episodes, total num timesteps 271000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1355/10000 episodes, total num timesteps 271200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1356/10000 episodes, total num timesteps 271400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1357/10000 episodes, total num timesteps 271600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1358/10000 episodes, total num timesteps 271800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1359/10000 episodes, total num timesteps 272000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1360/10000 episodes, total num timesteps 272200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1361/10000 episodes, total num timesteps 272400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1362/10000 episodes, total num timesteps 272600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1363/10000 episodes, total num timesteps 272800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1364/10000 episodes, total num timesteps 273000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1365/10000 episodes, total num timesteps 273200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1366/10000 episodes, total num timesteps 273400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1367/10000 episodes, total num timesteps 273600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1368/10000 episodes, total num timesteps 273800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1369/10000 episodes, total num timesteps 274000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1370/10000 episodes, total num timesteps 274200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1371/10000 episodes, total num timesteps 274400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1372/10000 episodes, total num timesteps 274600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1373/10000 episodes, total num timesteps 274800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1374/10000 episodes, total num timesteps 275000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1375/10000 episodes, total num timesteps 275200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.5848294797629783
team_policy eval average team episode rewards of agent0: 115.0
team_policy eval idv catch total num of agent0: 25
team_policy eval team catch total num: 46
team_policy eval average step individual rewards of agent1: 0.5567127373933954
team_policy eval average team episode rewards of agent1: 115.0
team_policy eval idv catch total num of agent1: 24
team_policy eval team catch total num: 46
team_policy eval average step individual rewards of agent2: 0.6906126825366201
team_policy eval average team episode rewards of agent2: 115.0
team_policy eval idv catch total num of agent2: 29
team_policy eval team catch total num: 46
team_policy eval average step individual rewards of agent3: 1.1483878547703974
team_policy eval average team episode rewards of agent3: 115.0
team_policy eval idv catch total num of agent3: 47
team_policy eval team catch total num: 46
team_policy eval average step individual rewards of agent4: 1.0428798556399301
team_policy eval average team episode rewards of agent4: 115.0
team_policy eval idv catch total num of agent4: 43
team_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent0: 0.9343597586079716
idv_policy eval average team episode rewards of agent0: 117.5
idv_policy eval idv catch total num of agent0: 39
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent1: 0.5274137899630947
idv_policy eval average team episode rewards of agent1: 117.5
idv_policy eval idv catch total num of agent1: 23
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent2: 0.76766530345148
idv_policy eval average team episode rewards of agent2: 117.5
idv_policy eval idv catch total num of agent2: 32
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent3: 1.0362705625820106
idv_policy eval average team episode rewards of agent3: 117.5
idv_policy eval idv catch total num of agent3: 43
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent4: 0.8415553090132424
idv_policy eval average team episode rewards of agent4: 117.5
idv_policy eval idv catch total num of agent4: 35
idv_policy eval team catch total num: 47
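Across the five team_policy evals in this excerpt the team catch totals are 41, 16, 51, 27, and 46 — noisy rather than monotone — so it is worth summarizing several evals before reading a trend. A minimal summary over the values above:

```python
from statistics import mean

# team_policy "team catch total num" from the five eval blocks in this excerpt.
team_catches = [41, 16, 51, 27, 46]

def summarize(values):
    """Basic spread statistics for a list of eval metrics."""
    return {"mean": mean(values), "min": min(values), "max": max(values)}
```

The mean of 36.2 with a 16–51 spread shows why a single eval point from this run should not be over-interpreted.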

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1376/10000 episodes, total num timesteps 275400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1377/10000 episodes, total num timesteps 275600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1378/10000 episodes, total num timesteps 275800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1379/10000 episodes, total num timesteps 276000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1380/10000 episodes, total num timesteps 276200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1381/10000 episodes, total num timesteps 276400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1382/10000 episodes, total num timesteps 276600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1383/10000 episodes, total num timesteps 276800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1384/10000 episodes, total num timesteps 277000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1385/10000 episodes, total num timesteps 277200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1386/10000 episodes, total num timesteps 277400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1387/10000 episodes, total num timesteps 277600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1388/10000 episodes, total num timesteps 277800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1389/10000 episodes, total num timesteps 278000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1390/10000 episodes, total num timesteps 278200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1391/10000 episodes, total num timesteps 278400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1392/10000 episodes, total num timesteps 278600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1393/10000 episodes, total num timesteps 278800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1394/10000 episodes, total num timesteps 279000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1395/10000 episodes, total num timesteps 279200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1396/10000 episodes, total num timesteps 279400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1397/10000 episodes, total num timesteps 279600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1398/10000 episodes, total num timesteps 279800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1399/10000 episodes, total num timesteps 280000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1400/10000 episodes, total num timesteps 280200/2000000, FPS 326.

team_policy eval (team episode reward 122.5, team catch total 49)
  agent   avg step idv reward     idv catch total
  0       0.7682237507546266      32
  1       0.7404359909817855      31
  2       0.943288665852406       39
  3       0.829131435898508       35
  4       0.8151918386299606      34
idv_policy eval (team episode reward 72.5, team catch total 29)
  agent   avg step idv reward     idv catch total
  0       0.5318180266660293      23
  1       0.22621144400636703     11
  2       0.7220637354577208      30
  3       0.42101599163621317     19
  4       0.840031915916135       35
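The recurring progress lines in this log follow a fixed template. If you need to plot update count, timesteps, or FPS from the raw file, a minimal parsing sketch (the regex and field names below are assumptions derived only from the line format visible in this log, not part of the training code) could look like:

```python
import re

# Matches the repeated progress line, e.g.:
#  Scenario simple_tag_tr Algo rmappotrsyn Exp ... updates 1400/10000 episodes,
#  total num timesteps 280200/2000000, FPS 326.
PROGRESS_RE = re.compile(
    r"Scenario (?P<scenario>\S+) Algo (?P<algo>\S+) Exp (?P<exp>\S+) "
    r"updates (?P<update>\d+)/(?P<total_updates>\d+) episodes, "
    r"total num timesteps (?P<t>\d+)/(?P<total_t>\d+), FPS (?P<fps>\d+)\."
)

def parse_progress(line):
    """Return a dict of fields from one progress line, or None if no match."""
    m = PROGRESS_RE.search(line)
    if m is None:
        return None
    # Numeric fields become ints; scenario/algo/exp stay strings.
    return {k: int(v) if v.isdigit() else v for k, v in m.groupdict().items()}

line = (" Scenario simple_tag_tr Algo rmappotrsyn "
        "Exp exp_train_continue_tag_base_CMT_s2r2_v1 "
        "updates 1400/10000 episodes, "
        "total num timesteps 280200/2000000, FPS 326.")
rec = parse_progress(line)
```

Applied over the whole file, this yields one record per update, from which the 200-timesteps-per-update cadence seen above is easy to confirm.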

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1401/10000 episodes, total num timesteps 280400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1402/10000 episodes, total num timesteps 280600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1403/10000 episodes, total num timesteps 280800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1404/10000 episodes, total num timesteps 281000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1405/10000 episodes, total num timesteps 281200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1406/10000 episodes, total num timesteps 281400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1407/10000 episodes, total num timesteps 281600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1408/10000 episodes, total num timesteps 281800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1409/10000 episodes, total num timesteps 282000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1410/10000 episodes, total num timesteps 282200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1411/10000 episodes, total num timesteps 282400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1412/10000 episodes, total num timesteps 282600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1413/10000 episodes, total num timesteps 282800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1414/10000 episodes, total num timesteps 283000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1415/10000 episodes, total num timesteps 283200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1416/10000 episodes, total num timesteps 283400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1417/10000 episodes, total num timesteps 283600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1418/10000 episodes, total num timesteps 283800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1419/10000 episodes, total num timesteps 284000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1420/10000 episodes, total num timesteps 284200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1421/10000 episodes, total num timesteps 284400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1422/10000 episodes, total num timesteps 284600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1423/10000 episodes, total num timesteps 284800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1424/10000 episodes, total num timesteps 285000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1425/10000 episodes, total num timesteps 285200/2000000, FPS 326.

team_policy eval (team episode reward 112.5, team catch total 45)
  agent   avg step idv reward     idv catch total
  0       0.6007358164116752      26
  1       0.660061483369316       28
  2       1.1323072695618577      47
  3       0.5594301990842574      24
  4       1.1207107346304195      46
idv_policy eval (team episode reward 80.0, team catch total 32)
  agent   avg step idv reward     idv catch total
  0       0.6094654386578935      26
  1       0.783074821373093       33
  2       0.511340896049707       22
  3       0.6566970909876855      28
  4       0.7029398105892761      30

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1426/10000 episodes, total num timesteps 285400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1427/10000 episodes, total num timesteps 285600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1428/10000 episodes, total num timesteps 285800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1429/10000 episodes, total num timesteps 286000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1430/10000 episodes, total num timesteps 286200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1431/10000 episodes, total num timesteps 286400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1432/10000 episodes, total num timesteps 286600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1433/10000 episodes, total num timesteps 286800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1434/10000 episodes, total num timesteps 287000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1435/10000 episodes, total num timesteps 287200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1436/10000 episodes, total num timesteps 287400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1437/10000 episodes, total num timesteps 287600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1438/10000 episodes, total num timesteps 287800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1439/10000 episodes, total num timesteps 288000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1440/10000 episodes, total num timesteps 288200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1441/10000 episodes, total num timesteps 288400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1442/10000 episodes, total num timesteps 288600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1443/10000 episodes, total num timesteps 288800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1444/10000 episodes, total num timesteps 289000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1445/10000 episodes, total num timesteps 289200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1446/10000 episodes, total num timesteps 289400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1447/10000 episodes, total num timesteps 289600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1448/10000 episodes, total num timesteps 289800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1449/10000 episodes, total num timesteps 290000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1450/10000 episodes, total num timesteps 290200/2000000, FPS 326.

team_policy eval (team episode reward 90.0, team catch total 36)
  agent   avg step idv reward     idv catch total
  0       1.1691243863969742      48
  1       0.6901305801406212      29
  2       0.5337928167312718      23
  3       0.4820317298550227      21
  4       0.6789315748842284      29
idv_policy eval (team episode reward 112.5, team catch total 45)
  agent   avg step idv reward     idv catch total
  0       0.8313145546636274      35
  1       0.8458539862380001      35
  2       0.8690047188820206      36
  3       0.8398054599679838      35
  4       0.7719496212423061      32

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1451/10000 episodes, total num timesteps 290400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1452/10000 episodes, total num timesteps 290600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1453/10000 episodes, total num timesteps 290800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1454/10000 episodes, total num timesteps 291000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1455/10000 episodes, total num timesteps 291200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1456/10000 episodes, total num timesteps 291400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1457/10000 episodes, total num timesteps 291600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1458/10000 episodes, total num timesteps 291800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1459/10000 episodes, total num timesteps 292000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1460/10000 episodes, total num timesteps 292200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1461/10000 episodes, total num timesteps 292400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1462/10000 episodes, total num timesteps 292600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1463/10000 episodes, total num timesteps 292800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1464/10000 episodes, total num timesteps 293000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1465/10000 episodes, total num timesteps 293200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1466/10000 episodes, total num timesteps 293400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1467/10000 episodes, total num timesteps 293600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1468/10000 episodes, total num timesteps 293800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1469/10000 episodes, total num timesteps 294000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1470/10000 episodes, total num timesteps 294200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1471/10000 episodes, total num timesteps 294400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1472/10000 episodes, total num timesteps 294600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1473/10000 episodes, total num timesteps 294800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1474/10000 episodes, total num timesteps 295000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1475/10000 episodes, total num timesteps 295200/2000000, FPS 326.

team_policy eval (team episode reward 77.5, team catch total 31)
  agent   avg step idv reward     idv catch total
  0       0.5825585022212201      25
  1       0.7375833510776302      31
  2       0.83552474464366        35
  3       0.4037153329977429      18
  4       0.5389321492773029      23
idv_policy eval (team episode reward 100.0, team catch total 40)
  agent   avg step idv reward     idv catch total
  0       0.9407848582616083      39
  1       0.7657800214864685      32
  2       0.48798055273625324     21
  3       0.8597593084425208      36
  4       0.6936159904268878      29

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1476/10000 episodes, total num timesteps 295400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1477/10000 episodes, total num timesteps 295600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1478/10000 episodes, total num timesteps 295800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1479/10000 episodes, total num timesteps 296000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1480/10000 episodes, total num timesteps 296200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1481/10000 episodes, total num timesteps 296400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1482/10000 episodes, total num timesteps 296600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1483/10000 episodes, total num timesteps 296800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1484/10000 episodes, total num timesteps 297000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1485/10000 episodes, total num timesteps 297200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1486/10000 episodes, total num timesteps 297400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1487/10000 episodes, total num timesteps 297600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1488/10000 episodes, total num timesteps 297800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1489/10000 episodes, total num timesteps 298000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1490/10000 episodes, total num timesteps 298200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1491/10000 episodes, total num timesteps 298400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1492/10000 episodes, total num timesteps 298600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1493/10000 episodes, total num timesteps 298800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1494/10000 episodes, total num timesteps 299000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1495/10000 episodes, total num timesteps 299200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1496/10000 episodes, total num timesteps 299400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1497/10000 episodes, total num timesteps 299600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1498/10000 episodes, total num timesteps 299800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1499/10000 episodes, total num timesteps 300000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1500/10000 episodes, total num timesteps 300200/2000000, FPS 326.

team_policy eval (team episode reward 130.0, team catch total 52)
  agent   avg step idv reward     idv catch total
  0       1.0983133416631168      45
  1       0.6840470683805226      29
  2       0.8698605799970028      36
  3       1.0922141945495663      45
  4       0.7369471900034981      31
idv_policy eval (team episode reward 137.5, team catch total 55)
  agent   avg step idv reward     idv catch total
  0       0.4853514242308752      21
  1       1.293911693996773       53
  2       1.0913203161477885      45
  3       1.0435709667127673      43
  4       0.8433711691364176      35
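One regularity worth noting: in every complete eval dump in this section, the team episode reward appears to be exactly 2.5 times the team catch total, for both the team policy and the individual policy. A quick arithmetic check (the reward/catch pairs below are copied directly from the eval output above; the interpretation of 2.5 reward per team catch is an inference from this log, not confirmed by the code):

```python
# (team episode reward, team catch total) pairs, copied from the
# eval dumps above in order: team_policy then idv_policy per dump.
pairs = [
    (122.5, 49), (72.5, 29),   # updates 1400
    (112.5, 45), (80.0, 32),   # updates 1425
    (90.0, 36),  (112.5, 45),  # updates 1450
    (77.5, 31),  (100.0, 40),  # updates 1475
    (130.0, 52), (137.5, 55),  # updates 1500
]

# Every ratio collapses to the single value 2.5 (all divisions are exact
# here, since each reward is the catch count times 2.5).
ratios = {reward / catches for reward, catches in pairs}
```

So within this stretch of training, team catch total and team episode reward carry the same information up to a constant factor.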

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1501/10000 episodes, total num timesteps 300400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1502/10000 episodes, total num timesteps 300600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1503/10000 episodes, total num timesteps 300800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1504/10000 episodes, total num timesteps 301000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1505/10000 episodes, total num timesteps 301200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1506/10000 episodes, total num timesteps 301400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1507/10000 episodes, total num timesteps 301600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1508/10000 episodes, total num timesteps 301800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1509/10000 episodes, total num timesteps 302000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1510/10000 episodes, total num timesteps 302200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1511/10000 episodes, total num timesteps 302400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1512/10000 episodes, total num timesteps 302600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1513/10000 episodes, total num timesteps 302800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1514/10000 episodes, total num timesteps 303000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1515/10000 episodes, total num timesteps 303200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1516/10000 episodes, total num timesteps 303400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1517/10000 episodes, total num timesteps 303600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1518/10000 episodes, total num timesteps 303800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1519/10000 episodes, total num timesteps 304000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1520/10000 episodes, total num timesteps 304200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1521/10000 episodes, total num timesteps 304400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1522/10000 episodes, total num timesteps 304600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1523/10000 episodes, total num timesteps 304800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1524/10000 episodes, total num timesteps 305000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1525/10000 episodes, total num timesteps 305200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.7690723524315356
team_policy eval average team episode rewards of agent0: 122.5
team_policy eval idv catch total num of agent0: 32
team_policy eval team catch total num: 49
team_policy eval average step individual rewards of agent1: 0.635608428588484
team_policy eval average team episode rewards of agent1: 122.5
team_policy eval idv catch total num of agent1: 27
team_policy eval team catch total num: 49
team_policy eval average step individual rewards of agent2: 0.9420402308832161
team_policy eval average team episode rewards of agent2: 122.5
team_policy eval idv catch total num of agent2: 39
team_policy eval team catch total num: 49
team_policy eval average step individual rewards of agent3: 0.7393724313484653
team_policy eval average team episode rewards of agent3: 122.5
team_policy eval idv catch total num of agent3: 31
team_policy eval team catch total num: 49
team_policy eval average step individual rewards of agent4: 1.111547384866723
team_policy eval average team episode rewards of agent4: 122.5
team_policy eval idv catch total num of agent4: 46
team_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent0: 0.9650688718760461
idv_policy eval average team episode rewards of agent0: 132.5
idv_policy eval idv catch total num of agent0: 40
idv_policy eval team catch total num: 53
idv_policy eval average step individual rewards of agent1: 0.9609321030024717
idv_policy eval average team episode rewards of agent1: 132.5
idv_policy eval idv catch total num of agent1: 40
idv_policy eval team catch total num: 53
idv_policy eval average step individual rewards of agent2: 0.8860060523048321
idv_policy eval average team episode rewards of agent2: 132.5
idv_policy eval idv catch total num of agent2: 37
idv_policy eval team catch total num: 53
idv_policy eval average step individual rewards of agent3: 1.169493043678893
idv_policy eval average team episode rewards of agent3: 132.5
idv_policy eval idv catch total num of agent3: 48
idv_policy eval team catch total num: 53
idv_policy eval average step individual rewards of agent4: 0.9636897403705456
idv_policy eval average team episode rewards of agent4: 132.5
idv_policy eval idv catch total num of agent4: 40
idv_policy eval team catch total num: 53
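The eval blocks above repeat two fixed line shapes per policy and agent. A minimal sketch of how such lines could be parsed into per-(policy, agent) records; the regex and helper name are illustrative assumptions, not part of the code that produced this log:

```python
import re

# Assumed line shapes, copied from this log:
#   "<policy> eval average step individual rewards of agent<N>: <float>"
#   "<policy> eval average team episode rewards of agent<N>: <float>"
#   "<policy> eval idv catch total num of agent<N>: <int>"
# (the per-policy "team catch total num" lines carry no agent id and are skipped here)
AGENT_METRIC = re.compile(
    r"^(?P<policy>\w+) eval (?P<metric>average step individual rewards|"
    r"average team episode rewards|idv catch total num) of agent(?P<agent>\d+): "
    r"(?P<value>-?[\d.]+)$"
)

def parse_eval_lines(lines):
    """Collect {(policy, agent_id): {metric_name: value}} from eval log lines."""
    records = {}
    for line in lines:
        m = AGENT_METRIC.match(line.strip())
        if m:
            key = (m["policy"], int(m["agent"]))
            records.setdefault(key, {})[m["metric"]] = float(m["value"])
    return records

sample = [
    "team_policy eval average step individual rewards of agent0: 0.769",
    "team_policy eval idv catch total num of agent0: 32",
]
records = parse_eval_lines(sample)
```

This keeps the log untouched while making the per-agent numbers (e.g. the idv vs. team catch gap between policies) easy to tabulate or plot offline.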

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1526/10000 episodes, total num timesteps 305400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1527/10000 episodes, total num timesteps 305600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1528/10000 episodes, total num timesteps 305800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1529/10000 episodes, total num timesteps 306000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1530/10000 episodes, total num timesteps 306200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1531/10000 episodes, total num timesteps 306400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1532/10000 episodes, total num timesteps 306600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1533/10000 episodes, total num timesteps 306800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1534/10000 episodes, total num timesteps 307000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1535/10000 episodes, total num timesteps 307200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1536/10000 episodes, total num timesteps 307400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1537/10000 episodes, total num timesteps 307600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1538/10000 episodes, total num timesteps 307800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1539/10000 episodes, total num timesteps 308000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1540/10000 episodes, total num timesteps 308200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1541/10000 episodes, total num timesteps 308400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1542/10000 episodes, total num timesteps 308600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1543/10000 episodes, total num timesteps 308800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1544/10000 episodes, total num timesteps 309000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1545/10000 episodes, total num timesteps 309200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1546/10000 episodes, total num timesteps 309400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1547/10000 episodes, total num timesteps 309600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1548/10000 episodes, total num timesteps 309800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1549/10000 episodes, total num timesteps 310000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1550/10000 episodes, total num timesteps 310200/2000000, FPS 326.
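Each progress line reports cumulative timesteps against the 2,000,000-step budget together with the running FPS, so remaining wall-clock time can be read off directly. A hedged sketch (the helper name and regex are assumptions for illustration, not from the training code):

```python
import re

# Matches the tail of a progress line, e.g.
# "... updates 1550/10000 episodes, total num timesteps 310200/2000000, FPS 326."
PROGRESS = re.compile(
    r"updates (\d+)/(\d+) episodes, total num timesteps (\d+)/(\d+), FPS (\d+)"
)

def eta_seconds(line):
    """Estimate remaining training time in seconds from one progress line."""
    m = PROGRESS.search(line)
    if not m:
        return None
    _updates, _max_updates, steps, total_steps, fps = map(int, m.groups())
    return (total_steps - steps) / fps

line = ("Scenario simple_tag_tr Algo rmappotrsyn Exp exp updates 1550/10000 "
        "episodes, total num timesteps 310200/2000000, FPS 326.")
remaining = eta_seconds(line)  # (2000000 - 310200) / 326, roughly 1.4 hours
```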

team_policy eval average step individual rewards of agent0: 0.3061198649083993
team_policy eval average team episode rewards of agent0: 45.0
team_policy eval idv catch total num of agent0: 14
team_policy eval team catch total num: 18
team_policy eval average step individual rewards of agent1: 0.16507420201722364
team_policy eval average team episode rewards of agent1: 45.0
team_policy eval idv catch total num of agent1: 9
team_policy eval team catch total num: 18
team_policy eval average step individual rewards of agent2: 0.37845623801995004
team_policy eval average team episode rewards of agent2: 45.0
team_policy eval idv catch total num of agent2: 17
team_policy eval team catch total num: 18
team_policy eval average step individual rewards of agent3: 0.3090972803752855
team_policy eval average team episode rewards of agent3: 45.0
team_policy eval idv catch total num of agent3: 14
team_policy eval team catch total num: 18
team_policy eval average step individual rewards of agent4: 0.25101943661790765
team_policy eval average team episode rewards of agent4: 45.0
team_policy eval idv catch total num of agent4: 12
team_policy eval team catch total num: 18
idv_policy eval average step individual rewards of agent0: 0.6618633188467834
idv_policy eval average team episode rewards of agent0: 122.5
idv_policy eval idv catch total num of agent0: 28
idv_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent1: 0.6028553229645007
idv_policy eval average team episode rewards of agent1: 122.5
idv_policy eval idv catch total num of agent1: 26
idv_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent2: 0.635366021454499
idv_policy eval average team episode rewards of agent2: 122.5
idv_policy eval idv catch total num of agent2: 27
idv_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent3: 1.1679076049490515
idv_policy eval average team episode rewards of agent3: 122.5
idv_policy eval idv catch total num of agent3: 48
idv_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent4: 0.9566425791356461
idv_policy eval average team episode rewards of agent4: 122.5
idv_policy eval idv catch total num of agent4: 40
idv_policy eval team catch total num: 49

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1551/10000 episodes, total num timesteps 310400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1552/10000 episodes, total num timesteps 310600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1553/10000 episodes, total num timesteps 310800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1554/10000 episodes, total num timesteps 311000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1555/10000 episodes, total num timesteps 311200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1556/10000 episodes, total num timesteps 311400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1557/10000 episodes, total num timesteps 311600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1558/10000 episodes, total num timesteps 311800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1559/10000 episodes, total num timesteps 312000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1560/10000 episodes, total num timesteps 312200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1561/10000 episodes, total num timesteps 312400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1562/10000 episodes, total num timesteps 312600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1563/10000 episodes, total num timesteps 312800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1564/10000 episodes, total num timesteps 313000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1565/10000 episodes, total num timesteps 313200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1566/10000 episodes, total num timesteps 313400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1567/10000 episodes, total num timesteps 313600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1568/10000 episodes, total num timesteps 313800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1569/10000 episodes, total num timesteps 314000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1570/10000 episodes, total num timesteps 314200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1571/10000 episodes, total num timesteps 314400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1572/10000 episodes, total num timesteps 314600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1573/10000 episodes, total num timesteps 314800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1574/10000 episodes, total num timesteps 315000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1575/10000 episodes, total num timesteps 315200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.7594096109868346
team_policy eval average team episode rewards of agent0: 127.5
team_policy eval idv catch total num of agent0: 32
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent1: 0.8409263809797224
team_policy eval average team episode rewards of agent1: 127.5
team_policy eval idv catch total num of agent1: 35
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent2: 1.0412910549954872
team_policy eval average team episode rewards of agent2: 127.5
team_policy eval idv catch total num of agent2: 43
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent3: 0.888741238339647
team_policy eval average team episode rewards of agent3: 127.5
team_policy eval idv catch total num of agent3: 37
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent4: 0.6798651774641173
team_policy eval average team episode rewards of agent4: 127.5
team_policy eval idv catch total num of agent4: 29
team_policy eval team catch total num: 51
idv_policy eval average step individual rewards of agent0: 0.7264592659413823
idv_policy eval average team episode rewards of agent0: 72.5
idv_policy eval idv catch total num of agent0: 31
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent1: 0.6372708188674631
idv_policy eval average team episode rewards of agent1: 72.5
idv_policy eval idv catch total num of agent1: 27
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent2: 0.15192261961745435
idv_policy eval average team episode rewards of agent2: 72.5
idv_policy eval idv catch total num of agent2: 8
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent3: 0.7917307610536014
idv_policy eval average team episode rewards of agent3: 72.5
idv_policy eval idv catch total num of agent3: 33
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent4: 0.5086109693625329
idv_policy eval average team episode rewards of agent4: 72.5
idv_policy eval idv catch total num of agent4: 22
idv_policy eval team catch total num: 29

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1576/10000 episodes, total num timesteps 315400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1577/10000 episodes, total num timesteps 315600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1578/10000 episodes, total num timesteps 315800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1579/10000 episodes, total num timesteps 316000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1580/10000 episodes, total num timesteps 316200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1581/10000 episodes, total num timesteps 316400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1582/10000 episodes, total num timesteps 316600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1583/10000 episodes, total num timesteps 316800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1584/10000 episodes, total num timesteps 317000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1585/10000 episodes, total num timesteps 317200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1586/10000 episodes, total num timesteps 317400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1587/10000 episodes, total num timesteps 317600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1588/10000 episodes, total num timesteps 317800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1589/10000 episodes, total num timesteps 318000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1590/10000 episodes, total num timesteps 318200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1591/10000 episodes, total num timesteps 318400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1592/10000 episodes, total num timesteps 318600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1593/10000 episodes, total num timesteps 318800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1594/10000 episodes, total num timesteps 319000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1595/10000 episodes, total num timesteps 319200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1596/10000 episodes, total num timesteps 319400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1597/10000 episodes, total num timesteps 319600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1598/10000 episodes, total num timesteps 319800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1599/10000 episodes, total num timesteps 320000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1600/10000 episodes, total num timesteps 320200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.8194415554758859
team_policy eval average team episode rewards of agent0: 97.5
team_policy eval idv catch total num of agent0: 34
team_policy eval team catch total num: 39
team_policy eval average step individual rewards of agent1: 0.5113560451047359
team_policy eval average team episode rewards of agent1: 97.5
team_policy eval idv catch total num of agent1: 22
team_policy eval team catch total num: 39
team_policy eval average step individual rewards of agent2: 0.7600231415031898
team_policy eval average team episode rewards of agent2: 97.5
team_policy eval idv catch total num of agent2: 32
team_policy eval team catch total num: 39
team_policy eval average step individual rewards of agent3: 0.7172330958835061
team_policy eval average team episode rewards of agent3: 97.5
team_policy eval idv catch total num of agent3: 30
team_policy eval team catch total num: 39
team_policy eval average step individual rewards of agent4: 0.9458969319342206
team_policy eval average team episode rewards of agent4: 97.5
team_policy eval idv catch total num of agent4: 39
team_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent0: 1.1881240348750615
idv_policy eval average team episode rewards of agent0: 115.0
idv_policy eval idv catch total num of agent0: 49
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent1: 1.019617895476688
idv_policy eval average team episode rewards of agent1: 115.0
idv_policy eval idv catch total num of agent1: 42
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent2: 0.7389629689224292
idv_policy eval average team episode rewards of agent2: 115.0
idv_policy eval idv catch total num of agent2: 31
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent3: 0.836562781887455
idv_policy eval average team episode rewards of agent3: 115.0
idv_policy eval idv catch total num of agent3: 35
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent4: 0.681485877541694
idv_policy eval average team episode rewards of agent4: 115.0
idv_policy eval idv catch total num of agent4: 29
idv_policy eval team catch total num: 46

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1601/10000 episodes, total num timesteps 320400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1602/10000 episodes, total num timesteps 320600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1603/10000 episodes, total num timesteps 320800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1604/10000 episodes, total num timesteps 321000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1605/10000 episodes, total num timesteps 321200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1606/10000 episodes, total num timesteps 321400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1607/10000 episodes, total num timesteps 321600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1608/10000 episodes, total num timesteps 321800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1609/10000 episodes, total num timesteps 322000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1610/10000 episodes, total num timesteps 322200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1611/10000 episodes, total num timesteps 322400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1612/10000 episodes, total num timesteps 322600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1613/10000 episodes, total num timesteps 322800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1614/10000 episodes, total num timesteps 323000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1615/10000 episodes, total num timesteps 323200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1616/10000 episodes, total num timesteps 323400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1617/10000 episodes, total num timesteps 323600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1618/10000 episodes, total num timesteps 323800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1619/10000 episodes, total num timesteps 324000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1620/10000 episodes, total num timesteps 324200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1621/10000 episodes, total num timesteps 324400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1622/10000 episodes, total num timesteps 324600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1623/10000 episodes, total num timesteps 324800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1624/10000 episodes, total num timesteps 325000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1625/10000 episodes, total num timesteps 325200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 1.0935935390472518
team_policy eval average team episode rewards of agent0: 130.0
team_policy eval idv catch total num of agent0: 45
team_policy eval team catch total num: 52
team_policy eval average step individual rewards of agent1: 0.6686098381646477
team_policy eval average team episode rewards of agent1: 130.0
team_policy eval idv catch total num of agent1: 28
team_policy eval team catch total num: 52
team_policy eval average step individual rewards of agent2: 0.8682840495895016
team_policy eval average team episode rewards of agent2: 130.0
team_policy eval idv catch total num of agent2: 36
team_policy eval team catch total num: 52
team_policy eval average step individual rewards of agent3: 1.1941642274578725
team_policy eval average team episode rewards of agent3: 130.0
team_policy eval idv catch total num of agent3: 49
team_policy eval team catch total num: 52
team_policy eval average step individual rewards of agent4: 0.792499760631888
team_policy eval average team episode rewards of agent4: 130.0
team_policy eval idv catch total num of agent4: 33
team_policy eval team catch total num: 52
idv_policy eval average step individual rewards of agent0: 0.8370569551433306
idv_policy eval average team episode rewards of agent0: 72.5
idv_policy eval idv catch total num of agent0: 35
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent1: 0.63602379337069
idv_policy eval average team episode rewards of agent1: 72.5
idv_policy eval idv catch total num of agent1: 27
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent2: 0.5038222550530222
idv_policy eval average team episode rewards of agent2: 72.5
idv_policy eval idv catch total num of agent2: 22
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent3: 0.6776252006994236
idv_policy eval average team episode rewards of agent3: 72.5
idv_policy eval idv catch total num of agent3: 29
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent4: 0.508625388682524
idv_policy eval average team episode rewards of agent4: 72.5
idv_policy eval idv catch total num of agent4: 22
idv_policy eval team catch total num: 29

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1626/10000 episodes, total num timesteps 325400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1627/10000 episodes, total num timesteps 325600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1628/10000 episodes, total num timesteps 325800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1629/10000 episodes, total num timesteps 326000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1630/10000 episodes, total num timesteps 326200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1631/10000 episodes, total num timesteps 326400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1632/10000 episodes, total num timesteps 326600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1633/10000 episodes, total num timesteps 326800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1634/10000 episodes, total num timesteps 327000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1635/10000 episodes, total num timesteps 327200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1636/10000 episodes, total num timesteps 327400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1637/10000 episodes, total num timesteps 327600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1638/10000 episodes, total num timesteps 327800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1639/10000 episodes, total num timesteps 328000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1640/10000 episodes, total num timesteps 328200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1641/10000 episodes, total num timesteps 328400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1642/10000 episodes, total num timesteps 328600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1643/10000 episodes, total num timesteps 328800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1644/10000 episodes, total num timesteps 329000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1645/10000 episodes, total num timesteps 329200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1646/10000 episodes, total num timesteps 329400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1647/10000 episodes, total num timesteps 329600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1648/10000 episodes, total num timesteps 329800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1649/10000 episodes, total num timesteps 330000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1650/10000 episodes, total num timesteps 330200/2000000, FPS 326.

team_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   0.8910970799801567     97.5                       37
  agent1   0.6100671544780429     97.5                       26
  agent2   0.8778878642330891     97.5                       37
  agent3   0.4798494269271606     97.5                       21
  agent4   0.6273437345102642     97.5                       27
  team catch total: 39
idv_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   1.1161358107401538     102.5                      46
  agent1   0.9937530598748626     102.5                      41
  agent2   0.5360768244987368     102.5                      23
  agent3   0.5618199673035241     102.5                      24
  agent4   0.6538522859645609     102.5                      28
  team catch total: 41
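Eval blocks in this shape repeat throughout the log. As a minimal sketch (the names `LINE_RE` and `parse_eval_block` are my own, not from the training code), the per-policy, per-agent metrics can be collected into dicts for plotting or comparison:

```python
import re

# Matches the four eval line shapes that appear in this log:
#   "<policy> eval average step individual rewards of agent<i>: <float>"
#   "<policy> eval average team episode rewards of agent<i>: <float>"
#   "<policy> eval idv catch total num of agent<i>: <int>"
#   "<policy> eval team catch total num: <int>"
LINE_RE = re.compile(
    r"(?P<policy>\w+) eval "
    r"(?:average step individual rewards of agent(?P<agent_r>\d+): (?P<reward>-?[\d.]+)"
    r"|average team episode rewards of agent(?P<agent_t>\d+): (?P<team_reward>-?[\d.]+)"
    r"|idv catch total num of agent(?P<agent_c>\d+): (?P<idv_catch>\d+)"
    r"|team catch total num: (?P<team_catch>\d+))"
)

def parse_eval_block(lines):
    """Fold one eval block into {policy: {"agents": {i: {...}}, "team_catch": n}}."""
    stats = {}
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue  # skip progress lines and blanks
        d = m.groupdict()
        policy = stats.setdefault(d["policy"], {"agents": {}, "team_catch": None})
        if d["reward"] is not None:
            policy["agents"].setdefault(int(d["agent_r"]), {})["step_reward"] = float(d["reward"])
        elif d["team_reward"] is not None:
            policy["agents"].setdefault(int(d["agent_t"]), {})["team_episode_reward"] = float(d["team_reward"])
        elif d["idv_catch"] is not None:
            policy["agents"].setdefault(int(d["agent_c"]), {})["idv_catch"] = int(d["idv_catch"])
        else:
            policy["team_catch"] = int(d["team_catch"])
    return stats
```

The team-catch total is written once per agent in the raw log, so the parser simply overwrites it with the same value; only the final (identical) value is kept.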

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1651/10000 episodes, total num timesteps 330400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1652/10000 episodes, total num timesteps 330600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1653/10000 episodes, total num timesteps 330800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1654/10000 episodes, total num timesteps 331000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1655/10000 episodes, total num timesteps 331200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1656/10000 episodes, total num timesteps 331400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1657/10000 episodes, total num timesteps 331600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1658/10000 episodes, total num timesteps 331800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1659/10000 episodes, total num timesteps 332000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1660/10000 episodes, total num timesteps 332200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1661/10000 episodes, total num timesteps 332400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1662/10000 episodes, total num timesteps 332600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1663/10000 episodes, total num timesteps 332800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1664/10000 episodes, total num timesteps 333000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1665/10000 episodes, total num timesteps 333200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1666/10000 episodes, total num timesteps 333400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1667/10000 episodes, total num timesteps 333600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1668/10000 episodes, total num timesteps 333800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1669/10000 episodes, total num timesteps 334000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1670/10000 episodes, total num timesteps 334200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1671/10000 episodes, total num timesteps 334400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1672/10000 episodes, total num timesteps 334600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1673/10000 episodes, total num timesteps 334800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1674/10000 episodes, total num timesteps 335000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1675/10000 episodes, total num timesteps 335200/2000000, FPS 326.

team_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   1.1438999510377625     95.0                       47
  agent1   0.6780375570520113     95.0                       29
  agent2   0.6505887179633322     95.0                       28
  agent3   0.4561115000462469     95.0                       20
  agent4   0.7128093941766285     95.0                       30
  team catch total: 38
idv_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   0.7675447387294608     100.0                      32
  agent1   0.4802307532189868     100.0                      21
  agent2   0.992586344856309      100.0                      41
  agent3   0.5910993429149367     100.0                      25
  agent4   0.6575260411751995     100.0                      28
  team catch total: 40

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1676/10000 episodes, total num timesteps 335400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1677/10000 episodes, total num timesteps 335600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1678/10000 episodes, total num timesteps 335800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1679/10000 episodes, total num timesteps 336000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1680/10000 episodes, total num timesteps 336200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1681/10000 episodes, total num timesteps 336400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1682/10000 episodes, total num timesteps 336600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1683/10000 episodes, total num timesteps 336800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1684/10000 episodes, total num timesteps 337000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1685/10000 episodes, total num timesteps 337200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1686/10000 episodes, total num timesteps 337400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1687/10000 episodes, total num timesteps 337600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1688/10000 episodes, total num timesteps 337800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1689/10000 episodes, total num timesteps 338000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1690/10000 episodes, total num timesteps 338200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1691/10000 episodes, total num timesteps 338400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1692/10000 episodes, total num timesteps 338600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1693/10000 episodes, total num timesteps 338800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1694/10000 episodes, total num timesteps 339000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1695/10000 episodes, total num timesteps 339200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1696/10000 episodes, total num timesteps 339400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1697/10000 episodes, total num timesteps 339600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1698/10000 episodes, total num timesteps 339800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1699/10000 episodes, total num timesteps 340000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1700/10000 episodes, total num timesteps 340200/2000000, FPS 326.

team_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   0.617329443836638      62.5                       26
  agent1   0.46049806734929744    62.5                       20
  agent2   0.5876886093600685     62.5                       25
  agent3   0.6204929230831588     62.5                       26
  agent4   0.3803149835255802     62.5                       17
  team catch total: 25
idv_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   0.9357081159227764     92.5                       39
  agent1   0.790434314772524      92.5                       34
  agent2   0.6474518082726145     92.5                       28
  agent3   0.5293049190861882     92.5                       23
  agent4   0.5326541222953942     92.5                       23
  team catch total: 37

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1701/10000 episodes, total num timesteps 340400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1702/10000 episodes, total num timesteps 340600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1703/10000 episodes, total num timesteps 340800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1704/10000 episodes, total num timesteps 341000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1705/10000 episodes, total num timesteps 341200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1706/10000 episodes, total num timesteps 341400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1707/10000 episodes, total num timesteps 341600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1708/10000 episodes, total num timesteps 341800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1709/10000 episodes, total num timesteps 342000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1710/10000 episodes, total num timesteps 342200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1711/10000 episodes, total num timesteps 342400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1712/10000 episodes, total num timesteps 342600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1713/10000 episodes, total num timesteps 342800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1714/10000 episodes, total num timesteps 343000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1715/10000 episodes, total num timesteps 343200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1716/10000 episodes, total num timesteps 343400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1717/10000 episodes, total num timesteps 343600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1718/10000 episodes, total num timesteps 343800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1719/10000 episodes, total num timesteps 344000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1720/10000 episodes, total num timesteps 344200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1721/10000 episodes, total num timesteps 344400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1722/10000 episodes, total num timesteps 344600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1723/10000 episodes, total num timesteps 344800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1724/10000 episodes, total num timesteps 345000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1725/10000 episodes, total num timesteps 345200/2000000, FPS 326.

team_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   0.9978231436219293     90.0                       41
  agent1   0.5321054737305593     90.0                       23
  agent2   0.8634921159875395     90.0                       36
  agent3   0.7130365031602963     90.0                       30
  agent4   0.5599276302653639     90.0                       24
  team catch total: 36
idv_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   0.611742738991359      125.0                      26
  agent1   0.8342212030419496     125.0                      35
  agent2   1.1227695125233035     125.0                      46
  agent3   0.8891015789247483     125.0                      37
  agent4   0.9182582290140732     125.0                      38
  team catch total: 50

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1726/10000 episodes, total num timesteps 345400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1727/10000 episodes, total num timesteps 345600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1728/10000 episodes, total num timesteps 345800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1729/10000 episodes, total num timesteps 346000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1730/10000 episodes, total num timesteps 346200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1731/10000 episodes, total num timesteps 346400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1732/10000 episodes, total num timesteps 346600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1733/10000 episodes, total num timesteps 346800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1734/10000 episodes, total num timesteps 347000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1735/10000 episodes, total num timesteps 347200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1736/10000 episodes, total num timesteps 347400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1737/10000 episodes, total num timesteps 347600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1738/10000 episodes, total num timesteps 347800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1739/10000 episodes, total num timesteps 348000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1740/10000 episodes, total num timesteps 348200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1741/10000 episodes, total num timesteps 348400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1742/10000 episodes, total num timesteps 348600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1743/10000 episodes, total num timesteps 348800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1744/10000 episodes, total num timesteps 349000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1745/10000 episodes, total num timesteps 349200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1746/10000 episodes, total num timesteps 349400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1747/10000 episodes, total num timesteps 349600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1748/10000 episodes, total num timesteps 349800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1749/10000 episodes, total num timesteps 350000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1750/10000 episodes, total num timesteps 350200/2000000, FPS 326.

team_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   1.222735617116349      157.5                      50
  agent1   1.1446355573391376     157.5                      47
  agent2   0.9931659620758555     157.5                      41
  agent3   0.9929185588187791     157.5                      41
  agent4   1.1614965578937153     157.5                      48
  team catch total: 63
idv_policy eval:
  agent    avg step idv reward    avg team episode reward    idv catch total
  agent0   0.7633488041694454     165.0                      32
  agent1   0.8605782474035499     165.0                      36
  agent2   1.4236170856258317     165.0                      58
  agent3   0.9661389505140136     165.0                      40
  agent4   1.24806322000217       165.0                      51
  team catch total: 66

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1751/10000 episodes, total num timesteps 350400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1752/10000 episodes, total num timesteps 350600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1753/10000 episodes, total num timesteps 350800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1754/10000 episodes, total num timesteps 351000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1755/10000 episodes, total num timesteps 351200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1756/10000 episodes, total num timesteps 351400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1757/10000 episodes, total num timesteps 351600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1758/10000 episodes, total num timesteps 351800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1759/10000 episodes, total num timesteps 352000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1760/10000 episodes, total num timesteps 352200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1761/10000 episodes, total num timesteps 352400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1762/10000 episodes, total num timesteps 352600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1763/10000 episodes, total num timesteps 352800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1764/10000 episodes, total num timesteps 353000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1765/10000 episodes, total num timesteps 353200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1766/10000 episodes, total num timesteps 353400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1767/10000 episodes, total num timesteps 353600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1768/10000 episodes, total num timesteps 353800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1769/10000 episodes, total num timesteps 354000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1770/10000 episodes, total num timesteps 354200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1771/10000 episodes, total num timesteps 354400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1772/10000 episodes, total num timesteps 354600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1773/10000 episodes, total num timesteps 354800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1774/10000 episodes, total num timesteps 355000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1775/10000 episodes, total num timesteps 355200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.7941524307921753
team_policy eval average team episode rewards of agent0: 127.5
team_policy eval idv catch total num of agent0: 33
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent1: 0.8145126080620477
team_policy eval average team episode rewards of agent1: 127.5
team_policy eval idv catch total num of agent1: 34
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent2: 1.0168893489525965
team_policy eval average team episode rewards of agent2: 127.5
team_policy eval idv catch total num of agent2: 42
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent3: 1.1234496109065417
team_policy eval average team episode rewards of agent3: 127.5
team_policy eval idv catch total num of agent3: 46
team_policy eval team catch total num: 51
team_policy eval average step individual rewards of agent4: 0.7835571872910779
team_policy eval average team episode rewards of agent4: 127.5
team_policy eval idv catch total num of agent4: 33
team_policy eval team catch total num: 51
idv_policy eval average step individual rewards of agent0: 0.7677988138407192
idv_policy eval average team episode rewards of agent0: 115.0
idv_policy eval idv catch total num of agent0: 32
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent1: 0.8144358433331423
idv_policy eval average team episode rewards of agent1: 115.0
idv_policy eval idv catch total num of agent1: 34
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent2: 1.124328238488208
idv_policy eval average team episode rewards of agent2: 115.0
idv_policy eval idv catch total num of agent2: 46
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent3: 1.1897698960244627
idv_policy eval average team episode rewards of agent3: 115.0
idv_policy eval idv catch total num of agent3: 49
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent4: 1.0668954516321207
idv_policy eval average team episode rewards of agent4: 115.0
idv_policy eval idv catch total num of agent4: 44
idv_policy eval team catch total num: 46
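The eval blocks in this log follow a fixed per-agent layout. A minimal parsing sketch (a hypothetical helper, not part of the training code) that extracts the metrics from one such block and checks a relation that holds throughout this log — the team episode reward equals 2.5 × the team catch count (127.5 = 2.5 × 51, 115.0 = 2.5 × 46, …), an observation about this particular run, not a guarantee of the environment:

```python
import re

# One eval block copied verbatim from the log above (team_policy, agent0).
block = """
team_policy eval average step individual rewards of agent0: 0.7941524307921753
team_policy eval average team episode rewards of agent0: 127.5
team_policy eval idv catch total num of agent0: 33
team_policy eval team catch total num: 51
"""

def parse_eval_block(text):
    """Collect per-agent metrics from a `*_policy eval` block into a dict."""
    metrics = {"step_reward": {}, "episode_reward": {}, "idv_catch": {}, "team_catch": None}
    for line in text.splitlines():
        m = re.search(r"average step individual rewards of agent(\d+): ([-\d.]+)", line)
        if m:
            metrics["step_reward"][int(m.group(1))] = float(m.group(2))
            continue
        m = re.search(r"average team episode rewards of agent(\d+): ([-\d.]+)", line)
        if m:
            metrics["episode_reward"][int(m.group(1))] = float(m.group(2))
            continue
        m = re.search(r"idv catch total num of agent(\d+): (\d+)", line)
        if m:
            metrics["idv_catch"][int(m.group(1))] = int(m.group(2))
            continue
        m = re.search(r"team catch total num: (\d+)", line)
        if m:
            metrics["team_catch"] = int(m.group(1))
    return metrics

parsed = parse_eval_block(block)
# Observed invariant in this log: team episode reward = 2.5 * team catch count.
assert parsed["episode_reward"][0] == 2.5 * parsed["team_catch"]
```

The same helper applies unchanged to the `idv_policy` blocks, since both policies print the identical line format.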

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1776/10000 episodes, total num timesteps 355400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1777/10000 episodes, total num timesteps 355600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1778/10000 episodes, total num timesteps 355800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1779/10000 episodes, total num timesteps 356000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1780/10000 episodes, total num timesteps 356200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1781/10000 episodes, total num timesteps 356400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1782/10000 episodes, total num timesteps 356600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1783/10000 episodes, total num timesteps 356800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1784/10000 episodes, total num timesteps 357000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1785/10000 episodes, total num timesteps 357200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1786/10000 episodes, total num timesteps 357400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1787/10000 episodes, total num timesteps 357600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1788/10000 episodes, total num timesteps 357800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1789/10000 episodes, total num timesteps 358000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1790/10000 episodes, total num timesteps 358200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1791/10000 episodes, total num timesteps 358400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1792/10000 episodes, total num timesteps 358600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1793/10000 episodes, total num timesteps 358800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1794/10000 episodes, total num timesteps 359000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1795/10000 episodes, total num timesteps 359200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1796/10000 episodes, total num timesteps 359400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1797/10000 episodes, total num timesteps 359600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1798/10000 episodes, total num timesteps 359800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1799/10000 episodes, total num timesteps 360000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1800/10000 episodes, total num timesteps 360200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.76204732849865
team_policy eval average team episode rewards of agent0: 97.5
team_policy eval idv catch total num of agent0: 32
team_policy eval team catch total num: 39
team_policy eval average step individual rewards of agent1: 0.8666755795673907
team_policy eval average team episode rewards of agent1: 97.5
team_policy eval idv catch total num of agent1: 36
team_policy eval team catch total num: 39
team_policy eval average step individual rewards of agent2: 0.589000948714062
team_policy eval average team episode rewards of agent2: 97.5
team_policy eval idv catch total num of agent2: 25
team_policy eval team catch total num: 39
team_policy eval average step individual rewards of agent3: 0.6142620545538097
team_policy eval average team episode rewards of agent3: 97.5
team_policy eval idv catch total num of agent3: 26
team_policy eval team catch total num: 39
team_policy eval average step individual rewards of agent4: 0.837591713229708
team_policy eval average team episode rewards of agent4: 97.5
team_policy eval idv catch total num of agent4: 35
team_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent0: 0.7130659906569988
idv_policy eval average team episode rewards of agent0: 70.0
idv_policy eval idv catch total num of agent0: 30
idv_policy eval team catch total num: 28
idv_policy eval average step individual rewards of agent1: 0.18439424265469312
idv_policy eval average team episode rewards of agent1: 70.0
idv_policy eval idv catch total num of agent1: 9
idv_policy eval team catch total num: 28
idv_policy eval average step individual rewards of agent2: 0.6605998568756423
idv_policy eval average team episode rewards of agent2: 70.0
idv_policy eval idv catch total num of agent2: 28
idv_policy eval team catch total num: 28
idv_policy eval average step individual rewards of agent3: 0.763111811122991
idv_policy eval average team episode rewards of agent3: 70.0
idv_policy eval idv catch total num of agent3: 32
idv_policy eval team catch total num: 28
idv_policy eval average step individual rewards of agent4: 0.4363263936270353
idv_policy eval average team episode rewards of agent4: 70.0
idv_policy eval idv catch total num of agent4: 19
idv_policy eval team catch total num: 28

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1801/10000 episodes, total num timesteps 360400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1802/10000 episodes, total num timesteps 360600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1803/10000 episodes, total num timesteps 360800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1804/10000 episodes, total num timesteps 361000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1805/10000 episodes, total num timesteps 361200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1806/10000 episodes, total num timesteps 361400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1807/10000 episodes, total num timesteps 361600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1808/10000 episodes, total num timesteps 361800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1809/10000 episodes, total num timesteps 362000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1810/10000 episodes, total num timesteps 362200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1811/10000 episodes, total num timesteps 362400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1812/10000 episodes, total num timesteps 362600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1813/10000 episodes, total num timesteps 362800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1814/10000 episodes, total num timesteps 363000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1815/10000 episodes, total num timesteps 363200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1816/10000 episodes, total num timesteps 363400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1817/10000 episodes, total num timesteps 363600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1818/10000 episodes, total num timesteps 363800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1819/10000 episodes, total num timesteps 364000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1820/10000 episodes, total num timesteps 364200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1821/10000 episodes, total num timesteps 364400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1822/10000 episodes, total num timesteps 364600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1823/10000 episodes, total num timesteps 364800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1824/10000 episodes, total num timesteps 365000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1825/10000 episodes, total num timesteps 365200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.40383436240333465
team_policy eval average team episode rewards of agent0: 112.5
team_policy eval idv catch total num of agent0: 18
team_policy eval team catch total num: 45
team_policy eval average step individual rewards of agent1: 0.5553721479446346
team_policy eval average team episode rewards of agent1: 112.5
team_policy eval idv catch total num of agent1: 24
team_policy eval team catch total num: 45
team_policy eval average step individual rewards of agent2: 0.5845439379609206
team_policy eval average team episode rewards of agent2: 112.5
team_policy eval idv catch total num of agent2: 25
team_policy eval team catch total num: 45
team_policy eval average step individual rewards of agent3: 1.1399227438075112
team_policy eval average team episode rewards of agent3: 112.5
team_policy eval idv catch total num of agent3: 47
team_policy eval team catch total num: 45
team_policy eval average step individual rewards of agent4: 0.9640881516018902
team_policy eval average team episode rewards of agent4: 112.5
team_policy eval idv catch total num of agent4: 40
team_policy eval team catch total num: 45
idv_policy eval average step individual rewards of agent0: 1.2468040841440884
idv_policy eval average team episode rewards of agent0: 140.0
idv_policy eval idv catch total num of agent0: 51
idv_policy eval team catch total num: 56
idv_policy eval average step individual rewards of agent1: 0.9863709550106392
idv_policy eval average team episode rewards of agent1: 140.0
idv_policy eval idv catch total num of agent1: 41
idv_policy eval team catch total num: 56
idv_policy eval average step individual rewards of agent2: 0.7908543478010139
idv_policy eval average team episode rewards of agent2: 140.0
idv_policy eval idv catch total num of agent2: 33
idv_policy eval team catch total num: 56
idv_policy eval average step individual rewards of agent3: 0.9430327485527702
idv_policy eval average team episode rewards of agent3: 140.0
idv_policy eval idv catch total num of agent3: 39
idv_policy eval team catch total num: 56
idv_policy eval average step individual rewards of agent4: 1.0372698876942499
idv_policy eval average team episode rewards of agent4: 140.0
idv_policy eval idv catch total num of agent4: 43
idv_policy eval team catch total num: 56

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1826/10000 episodes, total num timesteps 365400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1827/10000 episodes, total num timesteps 365600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1828/10000 episodes, total num timesteps 365800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1829/10000 episodes, total num timesteps 366000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1830/10000 episodes, total num timesteps 366200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1831/10000 episodes, total num timesteps 366400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1832/10000 episodes, total num timesteps 366600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1833/10000 episodes, total num timesteps 366800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1834/10000 episodes, total num timesteps 367000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1835/10000 episodes, total num timesteps 367200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1836/10000 episodes, total num timesteps 367400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1837/10000 episodes, total num timesteps 367600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1838/10000 episodes, total num timesteps 367800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1839/10000 episodes, total num timesteps 368000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1840/10000 episodes, total num timesteps 368200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1841/10000 episodes, total num timesteps 368400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1842/10000 episodes, total num timesteps 368600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1843/10000 episodes, total num timesteps 368800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1844/10000 episodes, total num timesteps 369000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1845/10000 episodes, total num timesteps 369200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1846/10000 episodes, total num timesteps 369400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1847/10000 episodes, total num timesteps 369600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1848/10000 episodes, total num timesteps 369800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1849/10000 episodes, total num timesteps 370000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1850/10000 episodes, total num timesteps 370200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.5098640094809551
team_policy eval average team episode rewards of agent0: 85.0
team_policy eval idv catch total num of agent0: 22
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent1: 0.5798357135339915
team_policy eval average team episode rewards of agent1: 85.0
team_policy eval idv catch total num of agent1: 25
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent2: 0.4271840812364057
team_policy eval average team episode rewards of agent2: 85.0
team_policy eval idv catch total num of agent2: 19
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent3: 0.659637443381341
team_policy eval average team episode rewards of agent3: 85.0
team_policy eval idv catch total num of agent3: 28
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent4: 0.40818184985964406
team_policy eval average team episode rewards of agent4: 85.0
team_policy eval idv catch total num of agent4: 18
team_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent0: 0.7906249858906963
idv_policy eval average team episode rewards of agent0: 115.0
idv_policy eval idv catch total num of agent0: 33
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent1: 0.5608296874488929
idv_policy eval average team episode rewards of agent1: 115.0
idv_policy eval idv catch total num of agent1: 24
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent2: 0.7391782388154318
idv_policy eval average team episode rewards of agent2: 115.0
idv_policy eval idv catch total num of agent2: 31
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent3: 0.733316351105811
idv_policy eval average team episode rewards of agent3: 115.0
idv_policy eval idv catch total num of agent3: 31
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent4: 1.0411135023313318
idv_policy eval average team episode rewards of agent4: 115.0
idv_policy eval idv catch total num of agent4: 43
idv_policy eval team catch total num: 46

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1851/10000 episodes, total num timesteps 370400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1852/10000 episodes, total num timesteps 370600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1853/10000 episodes, total num timesteps 370800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1854/10000 episodes, total num timesteps 371000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1855/10000 episodes, total num timesteps 371200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1856/10000 episodes, total num timesteps 371400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1857/10000 episodes, total num timesteps 371600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1858/10000 episodes, total num timesteps 371800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1859/10000 episodes, total num timesteps 372000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1860/10000 episodes, total num timesteps 372200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1861/10000 episodes, total num timesteps 372400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1862/10000 episodes, total num timesteps 372600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1863/10000 episodes, total num timesteps 372800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1864/10000 episodes, total num timesteps 373000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1865/10000 episodes, total num timesteps 373200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1866/10000 episodes, total num timesteps 373400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1867/10000 episodes, total num timesteps 373600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1868/10000 episodes, total num timesteps 373800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1869/10000 episodes, total num timesteps 374000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1870/10000 episodes, total num timesteps 374200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1871/10000 episodes, total num timesteps 374400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1872/10000 episodes, total num timesteps 374600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1873/10000 episodes, total num timesteps 374800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1874/10000 episodes, total num timesteps 375000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1875/10000 episodes, total num timesteps 375200/2000000, FPS 326.

team_policy eval average step individual rewards of agent0: 0.920896789674839
team_policy eval average team episode rewards of agent0: 152.5
team_policy eval idv catch total num of agent0: 38
team_policy eval team catch total num: 61
team_policy eval average step individual rewards of agent1: 0.9482084815359133
team_policy eval average team episode rewards of agent1: 152.5
team_policy eval idv catch total num of agent1: 39
team_policy eval team catch total num: 61
team_policy eval average step individual rewards of agent2: 1.2270383282216037
team_policy eval average team episode rewards of agent2: 152.5
team_policy eval idv catch total num of agent2: 50
team_policy eval team catch total num: 61
team_policy eval average step individual rewards of agent3: 0.7903325292660184
team_policy eval average team episode rewards of agent3: 152.5
team_policy eval idv catch total num of agent3: 33
team_policy eval team catch total num: 61
team_policy eval average step individual rewards of agent4: 1.1247527755467441
team_policy eval average team episode rewards of agent4: 152.5
team_policy eval idv catch total num of agent4: 46
team_policy eval team catch total num: 61
idv_policy eval average step individual rewards of agent0: 0.9884064708434411
idv_policy eval average team episode rewards of agent0: 110.0
idv_policy eval idv catch total num of agent0: 41
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent1: 0.552092398664322
idv_policy eval average team episode rewards of agent1: 110.0
idv_policy eval idv catch total num of agent1: 24
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent2: 0.37520588621281364
idv_policy eval average team episode rewards of agent2: 110.0
idv_policy eval idv catch total num of agent2: 17
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent3: 0.834899594293039
idv_policy eval average team episode rewards of agent3: 110.0
idv_policy eval idv catch total num of agent3: 35
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent4: 1.1783189227965425
idv_policy eval average team episode rewards of agent4: 110.0
idv_policy eval idv catch total num of agent4: 49
idv_policy eval team catch total num: 44
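The progress lines advance in lockstep: every update adds exactly 200 timesteps (2,000,000 total / 10,000 updates), and the printed update index is zero-based, so `updates 1875` corresponds to 1,876 completed updates. A small sketch (illustrative only; the ETA figure is derived from the reported FPS, not printed by the trainer) that checks this arithmetic on one line from the log:

```python
import re

# A progress line copied from the log above.
line = ("Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 "
        "updates 1875/10000 episodes, total num timesteps 375200/2000000, FPS 326.")

m = re.search(r"updates (\d+)/(\d+) .*timesteps (\d+)/(\d+), FPS (\d+)", line)
upd, upd_total, steps, steps_total, fps = map(int, m.groups())

# Fixed 200 timesteps per update in this run (2,000,000 / 10,000).
steps_per_update = steps_total // upd_total

# The update counter is zero-based: timesteps = (updates + 1) * 200.
assert steps == (upd + 1) * steps_per_update

# Rough remaining wall-clock time at the reported throughput (an estimate).
eta_hours = (steps_total - steps) / fps / 3600
```

At 326 FPS the remaining ~1.62M timesteps work out to roughly 1.4 hours, consistent with the steady FPS readings across this section of the log.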

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1876/10000 episodes, total num timesteps 375400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1877/10000 episodes, total num timesteps 375600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1878/10000 episodes, total num timesteps 375800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1879/10000 episodes, total num timesteps 376000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1880/10000 episodes, total num timesteps 376200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1881/10000 episodes, total num timesteps 376400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1882/10000 episodes, total num timesteps 376600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1883/10000 episodes, total num timesteps 376800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1884/10000 episodes, total num timesteps 377000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1885/10000 episodes, total num timesteps 377200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1886/10000 episodes, total num timesteps 377400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1887/10000 episodes, total num timesteps 377600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1888/10000 episodes, total num timesteps 377800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1889/10000 episodes, total num timesteps 378000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1890/10000 episodes, total num timesteps 378200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1891/10000 episodes, total num timesteps 378400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1892/10000 episodes, total num timesteps 378600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1893/10000 episodes, total num timesteps 378800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1894/10000 episodes, total num timesteps 379000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1895/10000 episodes, total num timesteps 379200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1896/10000 episodes, total num timesteps 379400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1897/10000 episodes, total num timesteps 379600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1898/10000 episodes, total num timesteps 379800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1899/10000 episodes, total num timesteps 380000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1900/10000 episodes, total num timesteps 380200/2000000, FPS 326.
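The counters in these progress lines are enough to estimate the remaining wall-clock time of the run. A minimal sketch, assuming only the field layout visible in the line above (the regex is mine, not from the training code):

```python
import re

# A progress line copied from this log (update 1900).
line = (" Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 "
        "updates 1900/10000 episodes, total num timesteps 380200/2000000, FPS 326.")

# Pull out done/total timesteps and the reported throughput.
m = re.search(r"timesteps (\d+)/(\d+), FPS (\d+)", line)
done, total, fps = map(int, m.groups())

# Remaining env steps divided by steps-per-second gives a rough ETA.
eta_seconds = (total - done) / fps
print(f"~{eta_seconds / 3600:.1f} h of training remaining")
```

At 326 FPS with ~1.62M timesteps left, this works out to roughly 1.4 hours.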

team_policy eval (after update 1900): avg team episode reward 122.5, team catch total 49
  agent | avg step individual reward | idv catch total
      0 | 1.1947643712017577         | 49
      1 | 0.4619035520829111         | 20
      2 | 1.1196348173329944         | 46
      3 | 1.0161587116115989         | 42
      4 | 0.5572610639828595         | 24
idv_policy eval (after update 1900): avg team episode reward 120.0, team catch total 48
  agent | avg step individual reward | idv catch total
      0 | 0.6853175183617463         | 29
      1 | 0.8182619261354337         | 34
      2 | 1.4746593795651242         | 60
      3 | 0.8132961813545521         | 34
      4 | 0.6677025150991824         | 28
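A quick way to summarize an eval block like the one above is to transcribe the printed values and derive aggregates. Notably, in every eval block in this log the avg team episode reward equals 2.5 × the team catch total, which suggests a fixed per-catch bonus. The sketch below checks that arithmetic on the update-1900 team_policy numbers; all values are copied from the log, nothing else is assumed:

```python
# Per-agent average step rewards from the team_policy eval at update 1900.
team_step_rewards = [1.1947643712017577, 0.4619035520829111, 1.1196348173329944,
                     1.0161587116115989, 0.5572610639828595]
team_episode_reward = 122.5
team_catch_total = 49

# Aggregate across agents, and derive the apparent per-catch reward.
mean_step_reward = sum(team_step_rewards) / len(team_step_rewards)
reward_per_catch = team_episode_reward / team_catch_total  # 122.5 / 49 = 2.5

print(f"mean step reward {mean_step_reward:.4f}, reward per team catch {reward_per_catch}")
```

The same 2.5 ratio holds for the idv_policy block as well (120.0 / 48).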

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1901/10000 episodes, total num timesteps 380400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1902/10000 episodes, total num timesteps 380600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1903/10000 episodes, total num timesteps 380800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1904/10000 episodes, total num timesteps 381000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1905/10000 episodes, total num timesteps 381200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1906/10000 episodes, total num timesteps 381400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1907/10000 episodes, total num timesteps 381600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1908/10000 episodes, total num timesteps 381800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1909/10000 episodes, total num timesteps 382000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1910/10000 episodes, total num timesteps 382200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1911/10000 episodes, total num timesteps 382400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1912/10000 episodes, total num timesteps 382600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1913/10000 episodes, total num timesteps 382800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1914/10000 episodes, total num timesteps 383000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1915/10000 episodes, total num timesteps 383200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1916/10000 episodes, total num timesteps 383400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1917/10000 episodes, total num timesteps 383600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1918/10000 episodes, total num timesteps 383800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1919/10000 episodes, total num timesteps 384000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1920/10000 episodes, total num timesteps 384200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1921/10000 episodes, total num timesteps 384400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1922/10000 episodes, total num timesteps 384600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1923/10000 episodes, total num timesteps 384800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1924/10000 episodes, total num timesteps 385000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1925/10000 episodes, total num timesteps 385200/2000000, FPS 326.

team_policy eval (after update 1925): avg team episode reward 85.0, team catch total 34
  agent | avg step individual reward | idv catch total
      0 | 1.0445038588823221         | 43
      1 | 0.5857043994078387         | 25
      2 | 1.0947086148113623         | 45
      3 | 0.31037149803647823        | 14
      4 | 0.45193166052266215        | 20
idv_policy eval (after update 1925): avg team episode reward 77.5, team catch total 31
  agent | avg step individual reward | idv catch total
      0 | 0.9820692318998563         | 41
      1 | 0.48300255741825393        | 21
      2 | 0.6817869046937488         | 29
      3 | 0.6081417757205019         | 26
      4 | 0.3344103814212018         | 15

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1926/10000 episodes, total num timesteps 385400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1927/10000 episodes, total num timesteps 385600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1928/10000 episodes, total num timesteps 385800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1929/10000 episodes, total num timesteps 386000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1930/10000 episodes, total num timesteps 386200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1931/10000 episodes, total num timesteps 386400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1932/10000 episodes, total num timesteps 386600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1933/10000 episodes, total num timesteps 386800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1934/10000 episodes, total num timesteps 387000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1935/10000 episodes, total num timesteps 387200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1936/10000 episodes, total num timesteps 387400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1937/10000 episodes, total num timesteps 387600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1938/10000 episodes, total num timesteps 387800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1939/10000 episodes, total num timesteps 388000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1940/10000 episodes, total num timesteps 388200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1941/10000 episodes, total num timesteps 388400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1942/10000 episodes, total num timesteps 388600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1943/10000 episodes, total num timesteps 388800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1944/10000 episodes, total num timesteps 389000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1945/10000 episodes, total num timesteps 389200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1946/10000 episodes, total num timesteps 389400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1947/10000 episodes, total num timesteps 389600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1948/10000 episodes, total num timesteps 389800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1949/10000 episodes, total num timesteps 390000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1950/10000 episodes, total num timesteps 390200/2000000, FPS 326.

team_policy eval (after update 1950): avg team episode reward 85.0, team catch total 34
  agent | avg step individual reward | idv catch total
      0 | 0.561911715324568          | 24
      1 | 0.618441049045345          | 26
      2 | 0.5595452686942852         | 24
      3 | 0.7262242663144736         | 31
      4 | 0.5056997290267105         | 22
idv_policy eval (after update 1950): avg team episode reward 137.5, team catch total 55
  agent | avg step individual reward | idv catch total
      0 | 1.2680544417353765         | 52
      1 | 1.0968484401008434         | 45
      2 | 1.2176306549867717         | 50
      3 | 0.5929811203074271         | 25
      4 | 0.9172707129129867         | 38

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1951/10000 episodes, total num timesteps 390400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1952/10000 episodes, total num timesteps 390600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1953/10000 episodes, total num timesteps 390800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1954/10000 episodes, total num timesteps 391000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1955/10000 episodes, total num timesteps 391200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1956/10000 episodes, total num timesteps 391400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1957/10000 episodes, total num timesteps 391600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1958/10000 episodes, total num timesteps 391800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1959/10000 episodes, total num timesteps 392000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1960/10000 episodes, total num timesteps 392200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1961/10000 episodes, total num timesteps 392400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1962/10000 episodes, total num timesteps 392600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1963/10000 episodes, total num timesteps 392800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1964/10000 episodes, total num timesteps 393000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1965/10000 episodes, total num timesteps 393200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1966/10000 episodes, total num timesteps 393400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1967/10000 episodes, total num timesteps 393600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1968/10000 episodes, total num timesteps 393800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1969/10000 episodes, total num timesteps 394000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1970/10000 episodes, total num timesteps 394200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1971/10000 episodes, total num timesteps 394400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1972/10000 episodes, total num timesteps 394600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1973/10000 episodes, total num timesteps 394800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1974/10000 episodes, total num timesteps 395000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1975/10000 episodes, total num timesteps 395200/2000000, FPS 325.

team_policy eval (after update 1975): avg team episode reward 72.5, team catch total 29
  agent | avg step individual reward | idv catch total
      0 | 0.3575305987656489         | 16
      1 | 0.715393542136904          | 30
      2 | 0.7177874206615488         | 30
      3 | 0.5646503732856778         | 24
      4 | 0.32638938215896135        | 15
idv_policy eval (after update 1975): avg team episode reward 50.0, team catch total 20
  agent | avg step individual reward | idv catch total
      0 | 0.1186355899528967         | 7
      1 | 0.48759885511558365        | 21
      2 | 0.2798414705221061         | 13
      3 | 0.31980977033210495        | 15
      4 | 0.596359729734881          | 25

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1976/10000 episodes, total num timesteps 395400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1977/10000 episodes, total num timesteps 395600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1978/10000 episodes, total num timesteps 395800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1979/10000 episodes, total num timesteps 396000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1980/10000 episodes, total num timesteps 396200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1981/10000 episodes, total num timesteps 396400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1982/10000 episodes, total num timesteps 396600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1983/10000 episodes, total num timesteps 396800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1984/10000 episodes, total num timesteps 397000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1985/10000 episodes, total num timesteps 397200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1986/10000 episodes, total num timesteps 397400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1987/10000 episodes, total num timesteps 397600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1988/10000 episodes, total num timesteps 397800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1989/10000 episodes, total num timesteps 398000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1990/10000 episodes, total num timesteps 398200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1991/10000 episodes, total num timesteps 398400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1992/10000 episodes, total num timesteps 398600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1993/10000 episodes, total num timesteps 398800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1994/10000 episodes, total num timesteps 399000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1995/10000 episodes, total num timesteps 399200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1996/10000 episodes, total num timesteps 399400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1997/10000 episodes, total num timesteps 399600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1998/10000 episodes, total num timesteps 399800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 1999/10000 episodes, total num timesteps 400000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2000/10000 episodes, total num timesteps 400200/2000000, FPS 325.

team_policy eval (after update 2000): avg team episode reward 92.5, team catch total 37
  agent | avg step individual reward | idv catch total
      0 | 0.5855662040029012         | 25
      1 | 0.6371090303810644         | 27
      2 | 0.659598325757396          | 28
      3 | 0.708346783738373          | 30
      4 | 0.7617012626144628         | 32
idv_policy eval (after update 2000): avg team episode reward 107.5, team catch total 43
  agent | avg step individual reward | idv catch total
      0 | 0.8801940162618053         | 37
      1 | 0.48433235601737123        | 21
      2 | 0.2872545067833203         | 13
      3 | 0.7302205861781951         | 31
      4 | 1.2353702843729286         | 51
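Across the five eval points above (updates 1900 through 2000, every 25 updates), the team catch totals of the two policies can be compared directly. A minimal sketch with the values transcribed from those eval blocks:

```python
# Team catch totals at updates 1900, 1925, 1950, 1975, 2000 (copied from the log).
team_policy_catches = [49, 34, 34, 29, 37]
idv_policy_catches  = [48, 31, 55, 20, 43]

# Mean over the five eval points for each policy.
team_mean = sum(team_policy_catches) / len(team_policy_catches)
idv_mean = sum(idv_policy_catches) / len(idv_policy_catches)

print(f"team_policy mean catches {team_mean}, idv_policy mean catches {idv_mean}")
```

Over this window the idv_policy averages slightly more catches (39.4 vs 36.6), though with visibly higher variance between eval points.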

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2001/10000 episodes, total num timesteps 400400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2002/10000 episodes, total num timesteps 400600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2003/10000 episodes, total num timesteps 400800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2004/10000 episodes, total num timesteps 401000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2005/10000 episodes, total num timesteps 401200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2006/10000 episodes, total num timesteps 401400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2007/10000 episodes, total num timesteps 401600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2008/10000 episodes, total num timesteps 401800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2009/10000 episodes, total num timesteps 402000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2010/10000 episodes, total num timesteps 402200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2011-2025/10000 episodes, total num timesteps 402400-405200/2000000, FPS 325.

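From the counters in the progress line above, rough wall-clock figures can be estimated. A minimal sketch; the timestep and FPS values are copied from the log, and treating 325 FPS as the steady-state rate is an assumption (the log occasionally reports 326):

```python
# Rough throughput math from the progress line above; timesteps and FPS are
# copied verbatim from the log. Assumes ~325 FPS holds for the whole run.
TOTAL_TIMESTEPS = 2_000_000
CURRENT_TIMESTEPS = 405_200  # at update 2025/10000
FPS = 325

elapsed_s = CURRENT_TIMESTEPS / FPS
remaining_s = (TOTAL_TIMESTEPS - CURRENT_TIMESTEPS) / FPS

print(f"elapsed:   ~{elapsed_s / 3600:.2f} h")
print(f"remaining: ~{remaining_s / 3600:.2f} h")
```

At this rate the remaining ~1.6M timesteps take on the order of 1.4 hours.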
team_policy eval average step individual rewards of agent0: 0.8972399215881103
team_policy eval average team episode rewards of agent0: 85.0
team_policy eval idv catch total num of agent0: 37
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent1: 0.48049894842574536
team_policy eval average team episode rewards of agent1: 85.0
team_policy eval idv catch total num of agent1: 21
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent2: 0.6566285114618688
team_policy eval average team episode rewards of agent2: 85.0
team_policy eval idv catch total num of agent2: 28
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent3: 0.863447897902505
team_policy eval average team episode rewards of agent3: 85.0
team_policy eval idv catch total num of agent3: 36
team_policy eval team catch total num: 34
team_policy eval average step individual rewards of agent4: 0.6044804261430636
team_policy eval average team episode rewards of agent4: 85.0
team_policy eval idv catch total num of agent4: 26
team_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent0: 0.8183933102496656
idv_policy eval average team episode rewards of agent0: 105.0
idv_policy eval idv catch total num of agent0: 34
idv_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent1: 0.8109724600046658
idv_policy eval average team episode rewards of agent1: 105.0
idv_policy eval idv catch total num of agent1: 34
idv_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent2: 0.4893382719019786
idv_policy eval average team episode rewards of agent2: 105.0
idv_policy eval idv catch total num of agent2: 21
idv_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent3: 0.6299956753882725
idv_policy eval average team episode rewards of agent3: 105.0
idv_policy eval idv catch total num of agent3: 27
idv_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent4: 0.8150150450185372
idv_policy eval average team episode rewards of agent4: 105.0
idv_policy eval idv catch total num of agent4: 34
idv_policy eval team catch total num: 42
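The per-agent eval lines above follow a fixed format, so they can be pulled into a dictionary for plotting or comparison. A minimal parsing sketch; the regex and the `parse_eval` helper are illustrative, not part of the training code, and the team-level "team catch total num" lines are deliberately skipped:

```python
import re
from collections import defaultdict

# Illustrative parser for the per-agent eval lines in this log. Assumes the
# exact format "<policy> eval <metric> of agent<N>: <value>" seen above;
# team-level "team catch total num" lines do not match and are skipped.
AGENT_LINE = re.compile(
    r"(team_policy|idv_policy) eval (.+?) of agent(\d+): ([-\d.]+)"
)

def parse_eval(lines):
    stats = defaultdict(dict)  # (policy, agent_id) -> {metric: value}
    for line in lines:
        m = AGENT_LINE.search(line)
        if m:
            policy, metric, agent_id, value = m.groups()
            stats[(policy, int(agent_id))][metric] = float(value)
    return stats

# Two lines copied verbatim from the eval block above:
sample = [
    "team_policy eval average step individual rewards of agent0: 0.8972399215881103",
    "team_policy eval idv catch total num of agent0: 37",
]
parsed = parse_eval(sample)
print(parsed[("team_policy", 0)])
```

Feeding the whole log through `parse_eval` yields one metric dictionary per (policy, agent) pair per eval block.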

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2026-2050/10000 episodes, total num timesteps 405400-410200/2000000, FPS 325.

team_policy eval average step individual rewards of agent0: 0.17537078296753422
team_policy eval average team episode rewards of agent0: 60.0
team_policy eval idv catch total num of agent0: 9
team_policy eval team catch total num: 24
team_policy eval average step individual rewards of agent1: 0.43067395469983794
team_policy eval average team episode rewards of agent1: 60.0
team_policy eval idv catch total num of agent1: 19
team_policy eval team catch total num: 24
team_policy eval average step individual rewards of agent2: 0.7935977814780572
team_policy eval average team episode rewards of agent2: 60.0
team_policy eval idv catch total num of agent2: 33
team_policy eval team catch total num: 24
team_policy eval average step individual rewards of agent3: 0.45813156511222247
team_policy eval average team episode rewards of agent3: 60.0
team_policy eval idv catch total num of agent3: 20
team_policy eval team catch total num: 24
team_policy eval average step individual rewards of agent4: 0.48694648748151714
team_policy eval average team episode rewards of agent4: 60.0
team_policy eval idv catch total num of agent4: 21
team_policy eval team catch total num: 24
idv_policy eval average step individual rewards of agent0: 0.3088771664924436
idv_policy eval average team episode rewards of agent0: 87.5
idv_policy eval idv catch total num of agent0: 14
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent1: 0.8422624075408711
idv_policy eval average team episode rewards of agent1: 87.5
idv_policy eval idv catch total num of agent1: 35
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent2: 0.43942982025412264
idv_policy eval average team episode rewards of agent2: 87.5
idv_policy eval idv catch total num of agent2: 19
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent3: 0.6433203376919832
idv_policy eval average team episode rewards of agent3: 87.5
idv_policy eval idv catch total num of agent3: 27
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent4: 0.7389791154474479
idv_policy eval average team episode rewards of agent4: 87.5
idv_policy eval idv catch total num of agent4: 31
idv_policy eval team catch total num: 35

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2051-2075/10000 episodes, total num timesteps 410400-415200/2000000, FPS 325-326.

team_policy eval average step individual rewards of agent0: 0.6765330535066143
team_policy eval average team episode rewards of agent0: 82.5
team_policy eval idv catch total num of agent0: 29
team_policy eval team catch total num: 33
team_policy eval average step individual rewards of agent1: 0.22112152705631222
team_policy eval average team episode rewards of agent1: 82.5
team_policy eval idv catch total num of agent1: 11
team_policy eval team catch total num: 33
team_policy eval average step individual rewards of agent2: 0.7845713503432669
team_policy eval average team episode rewards of agent2: 82.5
team_policy eval idv catch total num of agent2: 33
team_policy eval team catch total num: 33
team_policy eval average step individual rewards of agent3: 0.788221838472071
team_policy eval average team episode rewards of agent3: 82.5
team_policy eval idv catch total num of agent3: 33
team_policy eval team catch total num: 33
team_policy eval average step individual rewards of agent4: 0.8930058528385407
team_policy eval average team episode rewards of agent4: 82.5
team_policy eval idv catch total num of agent4: 37
team_policy eval team catch total num: 33
idv_policy eval average step individual rewards of agent0: 1.0228674256171697
idv_policy eval average team episode rewards of agent0: 117.5
idv_policy eval idv catch total num of agent0: 42
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent1: 0.8460135797170599
idv_policy eval average team episode rewards of agent1: 117.5
idv_policy eval idv catch total num of agent1: 35
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent2: 0.7897087417632153
idv_policy eval average team episode rewards of agent2: 117.5
idv_policy eval idv catch total num of agent2: 33
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent3: 0.9428874725933144
idv_policy eval average team episode rewards of agent3: 117.5
idv_policy eval idv catch total num of agent3: 39
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent4: 0.7879066332559138
idv_policy eval average team episode rewards of agent4: 117.5
idv_policy eval idv catch total num of agent4: 33
idv_policy eval team catch total num: 47

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2076-2100/10000 episodes, total num timesteps 415400-420200/2000000, FPS 325-326.

team_policy eval average step individual rewards of agent0: 0.8895296267933646
team_policy eval average team episode rewards of agent0: 132.5
team_policy eval idv catch total num of agent0: 37
team_policy eval team catch total num: 53
team_policy eval average step individual rewards of agent1: 0.5120596426518101
team_policy eval average team episode rewards of agent1: 132.5
team_policy eval idv catch total num of agent1: 22
team_policy eval team catch total num: 53
team_policy eval average step individual rewards of agent2: 1.1495056969029283
team_policy eval average team episode rewards of agent2: 132.5
team_policy eval idv catch total num of agent2: 47
team_policy eval team catch total num: 53
team_policy eval average step individual rewards of agent3: 1.0424696083639386
team_policy eval average team episode rewards of agent3: 132.5
team_policy eval idv catch total num of agent3: 43
team_policy eval team catch total num: 53
team_policy eval average step individual rewards of agent4: 1.0420551851133242
team_policy eval average team episode rewards of agent4: 132.5
team_policy eval idv catch total num of agent4: 43
team_policy eval team catch total num: 53
idv_policy eval average step individual rewards of agent0: 1.3555472847612737
idv_policy eval average team episode rewards of agent0: 72.5
idv_policy eval idv catch total num of agent0: 55
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent1: 0.2937975402484062
idv_policy eval average team episode rewards of agent1: 72.5
idv_policy eval idv catch total num of agent1: 14
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent2: 0.35330574413068677
idv_policy eval average team episode rewards of agent2: 72.5
idv_policy eval idv catch total num of agent2: 16
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent3: 0.6027464419740017
idv_policy eval average team episode rewards of agent3: 72.5
idv_policy eval idv catch total num of agent3: 26
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent4: 0.6392430963416653
idv_policy eval average team episode rewards of agent4: 72.5
idv_policy eval idv catch total num of agent4: 27
idv_policy eval team catch total num: 29

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2101-2125/10000 episodes, total num timesteps 420400-425200/2000000, FPS 325-326.

team_policy eval average step individual rewards of agent0: 1.091747379513651
team_policy eval average team episode rewards of agent0: 165.0
team_policy eval idv catch total num of agent0: 45
team_policy eval team catch total num: 66
team_policy eval average step individual rewards of agent1: 1.2399010407162105
team_policy eval average team episode rewards of agent1: 165.0
team_policy eval idv catch total num of agent1: 51
team_policy eval team catch total num: 66
team_policy eval average step individual rewards of agent2: 0.7462246235484613
team_policy eval average team episode rewards of agent2: 165.0
team_policy eval idv catch total num of agent2: 31
team_policy eval team catch total num: 66
team_policy eval average step individual rewards of agent3: 1.4211882964254945
team_policy eval average team episode rewards of agent3: 165.0
team_policy eval idv catch total num of agent3: 58
team_policy eval team catch total num: 66
team_policy eval average step individual rewards of agent4: 1.1469815829699592
team_policy eval average team episode rewards of agent4: 165.0
team_policy eval idv catch total num of agent4: 47
team_policy eval team catch total num: 66
idv_policy eval average step individual rewards of agent0: 0.8926760026408047
idv_policy eval average team episode rewards of agent0: 67.5
idv_policy eval idv catch total num of agent0: 37
idv_policy eval team catch total num: 27
idv_policy eval average step individual rewards of agent1: 1.011040505903881
idv_policy eval average team episode rewards of agent1: 67.5
idv_policy eval idv catch total num of agent1: 42
idv_policy eval team catch total num: 27
idv_policy eval average step individual rewards of agent2: 0.247351418779404
idv_policy eval average team episode rewards of agent2: 67.5
idv_policy eval idv catch total num of agent2: 12
idv_policy eval team catch total num: 27
idv_policy eval average step individual rewards of agent3: 0.4838325317085795
idv_policy eval average team episode rewards of agent3: 67.5
idv_policy eval idv catch total num of agent3: 21
idv_policy eval team catch total num: 27
idv_policy eval average step individual rewards of agent4: 0.19782219987306163
idv_policy eval average team episode rewards of agent4: 67.5
idv_policy eval idv catch total num of agent4: 10
idv_policy eval team catch total num: 27
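Collecting the "team catch total num" values from the five complete eval blocks above (one eval every 25 updates) makes the team_policy vs idv_policy trend easier to compare. A small sketch; every number is copied verbatim from this log:

```python
# Team-catch totals at the five complete eval points above
# (updates 2025, 2050, 2075, 2100, 2125), copied verbatim from the log.
eval_updates = [2025, 2050, 2075, 2100, 2125]
team_policy_catches = [34, 24, 33, 53, 66]
idv_policy_catches = [42, 35, 47, 29, 27]

for u, t, i in zip(eval_updates, team_policy_catches, idv_policy_catches):
    print(f"update {u}: team_policy={t:3d}  idv_policy={i:3d}  diff={t - i:+d}")
```

Over these five eval points team_policy's team catches trend upward while idv_policy's fall after update 2075.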

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2126-2150/10000 episodes, total num timesteps 425400-430200/2000000, FPS 326.

team_policy eval: avg team episode reward 85.0, team catch total 34
  agent0: avg step idv reward 0.8068949986229775, idv catch total 34
  agent1: avg step idv reward 0.6359047601009731, idv catch total 27
  agent2: avg step idv reward 0.4572999147586525, idv catch total 20
  agent3: avg step idv reward 0.5024219872148191, idv catch total 22
  agent4: avg step idv reward 1.0204660032651776, idv catch total 42
idv_policy eval: avg team episode reward 125.0, team catch total 50
  agent0: avg step idv reward 0.9936721778182823, idv catch total 41
  agent1: avg step idv reward 1.3973888743260536, idv catch total 57
  agent2: avg step idv reward 0.7600508166480177, idv catch total 32
  agent3: avg step idv reward 0.6660604746000817, idv catch total 28
  agent4: avg step idv reward 0.7686048767338758, idv catch total 32

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2151/10000 episodes, total num timesteps 430400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2152/10000 episodes, total num timesteps 430600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2153/10000 episodes, total num timesteps 430800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2154/10000 episodes, total num timesteps 431000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2155/10000 episodes, total num timesteps 431200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2156/10000 episodes, total num timesteps 431400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2157/10000 episodes, total num timesteps 431600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2158/10000 episodes, total num timesteps 431800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2159/10000 episodes, total num timesteps 432000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2160/10000 episodes, total num timesteps 432200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2161/10000 episodes, total num timesteps 432400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2162/10000 episodes, total num timesteps 432600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2163/10000 episodes, total num timesteps 432800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2164/10000 episodes, total num timesteps 433000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2165/10000 episodes, total num timesteps 433200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2166/10000 episodes, total num timesteps 433400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2167/10000 episodes, total num timesteps 433600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2168/10000 episodes, total num timesteps 433800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2169/10000 episodes, total num timesteps 434000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2170/10000 episodes, total num timesteps 434200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2171/10000 episodes, total num timesteps 434400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2172/10000 episodes, total num timesteps 434600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2173/10000 episodes, total num timesteps 434800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2174/10000 episodes, total num timesteps 435000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2175/10000 episodes, total num timesteps 435200/2000000, FPS 326.

team_policy eval: avg team episode reward 70.0, team catch total 28
  agent0: avg step idv reward 0.6405485553096218, idv catch total 27
  agent1: avg step idv reward 0.6308550535019235, idv catch total 27
  agent2: avg step idv reward 0.7712973624874059, idv catch total 32
  agent3: avg step idv reward 0.28781469915662217, idv catch total 13
  agent4: avg step idv reward 0.6369191936961973, idv catch total 27
idv_policy eval: avg team episode reward 125.0, team catch total 50
  agent0: avg step idv reward 0.8925419312421431, idv catch total 37
  agent1: avg step idv reward 0.7653538882531385, idv catch total 32
  agent2: avg step idv reward 0.8395634009183278, idv catch total 35
  agent3: avg step idv reward 1.2410466219706342, idv catch total 51
  agent4: avg step idv reward 0.6011105750388328, idv catch total 26

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2176/10000 episodes, total num timesteps 435400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2177/10000 episodes, total num timesteps 435600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2178/10000 episodes, total num timesteps 435800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2179/10000 episodes, total num timesteps 436000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2180/10000 episodes, total num timesteps 436200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2181/10000 episodes, total num timesteps 436400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2182/10000 episodes, total num timesteps 436600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2183/10000 episodes, total num timesteps 436800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2184/10000 episodes, total num timesteps 437000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2185/10000 episodes, total num timesteps 437200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2186/10000 episodes, total num timesteps 437400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2187/10000 episodes, total num timesteps 437600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2188/10000 episodes, total num timesteps 437800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2189/10000 episodes, total num timesteps 438000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2190/10000 episodes, total num timesteps 438200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2191/10000 episodes, total num timesteps 438400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2192/10000 episodes, total num timesteps 438600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2193/10000 episodes, total num timesteps 438800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2194/10000 episodes, total num timesteps 439000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2195/10000 episodes, total num timesteps 439200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2196/10000 episodes, total num timesteps 439400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2197/10000 episodes, total num timesteps 439600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2198/10000 episodes, total num timesteps 439800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2199/10000 episodes, total num timesteps 440000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2200/10000 episodes, total num timesteps 440200/2000000, FPS 326.

team_policy eval: avg team episode reward 80.0, team catch total 32
  agent0: avg step idv reward 0.7283627127027423, idv catch total 31
  agent1: avg step idv reward 0.7849264854328664, idv catch total 33
  agent2: avg step idv reward 0.4316575325464086, idv catch total 19
  agent3: avg step idv reward 0.49451488047909664, idv catch total 22
  agent4: avg step idv reward 0.6122378324855986, idv catch total 26
idv_policy eval: avg team episode reward 127.5, team catch total 51
  agent0: avg step idv reward 0.8106733816592347, idv catch total 34
  agent1: avg step idv reward 0.8178323768089587, idv catch total 34
  agent2: avg step idv reward 0.9620757733584824, idv catch total 40
  agent3: avg step idv reward 0.7079820650005949, idv catch total 30
  agent4: avg step idv reward 0.5820297404153468, idv catch total 25

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2201/10000 episodes, total num timesteps 440400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2202/10000 episodes, total num timesteps 440600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2203/10000 episodes, total num timesteps 440800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2204/10000 episodes, total num timesteps 441000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2205/10000 episodes, total num timesteps 441200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2206/10000 episodes, total num timesteps 441400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2207/10000 episodes, total num timesteps 441600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2208/10000 episodes, total num timesteps 441800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2209/10000 episodes, total num timesteps 442000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2210/10000 episodes, total num timesteps 442200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2211/10000 episodes, total num timesteps 442400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2212/10000 episodes, total num timesteps 442600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2213/10000 episodes, total num timesteps 442800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2214/10000 episodes, total num timesteps 443000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2215/10000 episodes, total num timesteps 443200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2216/10000 episodes, total num timesteps 443400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2217/10000 episodes, total num timesteps 443600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2218/10000 episodes, total num timesteps 443800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2219/10000 episodes, total num timesteps 444000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2220/10000 episodes, total num timesteps 444200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2221/10000 episodes, total num timesteps 444400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2222/10000 episodes, total num timesteps 444600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2223/10000 episodes, total num timesteps 444800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2224/10000 episodes, total num timesteps 445000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2225/10000 episodes, total num timesteps 445200/2000000, FPS 326.

team_policy eval: avg team episode reward 137.5, team catch total 55
  agent0: avg step idv reward 0.6889076186174827, idv catch total 29
  agent1: avg step idv reward 0.838996080713045, idv catch total 35
  agent2: avg step idv reward 1.2462060936686303, idv catch total 51
  agent3: avg step idv reward 1.1430704193578523, idv catch total 47
  agent4: avg step idv reward 0.5921679502481045, idv catch total 25
idv_policy eval: avg team episode reward 90.0, team catch total 36
  agent0: avg step idv reward 1.0128585800264995, idv catch total 42
  agent1: avg step idv reward 0.8677294096460835, idv catch total 36
  agent2: avg step idv reward 0.5075448044435242, idv catch total 22
  agent3: avg step idv reward 0.9081337706869611, idv catch total 38
  agent4: avg step idv reward 0.30358736379115575, idv catch total 14

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2226/10000 episodes, total num timesteps 445400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2227/10000 episodes, total num timesteps 445600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2228/10000 episodes, total num timesteps 445800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2229/10000 episodes, total num timesteps 446000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2230/10000 episodes, total num timesteps 446200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2231/10000 episodes, total num timesteps 446400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2232/10000 episodes, total num timesteps 446600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2233/10000 episodes, total num timesteps 446800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2234/10000 episodes, total num timesteps 447000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2235/10000 episodes, total num timesteps 447200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2236/10000 episodes, total num timesteps 447400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2237/10000 episodes, total num timesteps 447600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2238/10000 episodes, total num timesteps 447800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2239/10000 episodes, total num timesteps 448000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2240/10000 episodes, total num timesteps 448200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2241/10000 episodes, total num timesteps 448400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2242/10000 episodes, total num timesteps 448600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2243/10000 episodes, total num timesteps 448800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2244/10000 episodes, total num timesteps 449000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2245/10000 episodes, total num timesteps 449200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2246/10000 episodes, total num timesteps 449400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2247/10000 episodes, total num timesteps 449600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2248/10000 episodes, total num timesteps 449800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2249/10000 episodes, total num timesteps 450000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2250/10000 episodes, total num timesteps 450200/2000000, FPS 326.

team_policy eval: avg team episode reward 112.5, team catch total 45
  agent0: avg step idv reward 0.625118663210533, idv catch total 27
  agent1: avg step idv reward 0.7414977004562077, idv catch total 31
  agent2: avg step idv reward 0.7359939238565107, idv catch total 31
  agent3: avg step idv reward 1.033555387288801, idv catch total 43
  agent4: avg step idv reward 0.6387908272451381, idv catch total 27
idv_policy eval: avg team episode reward 85.0, team catch total 34
  agent0: avg step idv reward 0.457716169647477, idv catch total 20
  agent1: avg step idv reward 0.430358690225777, idv catch total 19
  agent2: avg step idv reward 0.9921373998943533, idv catch total 41
  agent3: avg step idv reward 0.4573293277127734, idv catch total 20
  agent4: avg step idv reward 0.9383453434255474, idv catch total 39

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2251/10000 episodes, total num timesteps 450400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2252/10000 episodes, total num timesteps 450600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2253/10000 episodes, total num timesteps 450800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2254/10000 episodes, total num timesteps 451000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2255/10000 episodes, total num timesteps 451200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2256/10000 episodes, total num timesteps 451400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2257/10000 episodes, total num timesteps 451600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2258/10000 episodes, total num timesteps 451800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2259/10000 episodes, total num timesteps 452000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2260/10000 episodes, total num timesteps 452200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2261/10000 episodes, total num timesteps 452400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2262/10000 episodes, total num timesteps 452600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2263/10000 episodes, total num timesteps 452800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2264/10000 episodes, total num timesteps 453000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2265/10000 episodes, total num timesteps 453200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2266/10000 episodes, total num timesteps 453400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2267/10000 episodes, total num timesteps 453600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2268/10000 episodes, total num timesteps 453800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2269/10000 episodes, total num timesteps 454000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2270/10000 episodes, total num timesteps 454200/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2271/10000 episodes, total num timesteps 454400/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2272/10000 episodes, total num timesteps 454600/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2273/10000 episodes, total num timesteps 454800/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2274/10000 episodes, total num timesteps 455000/2000000, FPS 326.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2275/10000 episodes, total num timesteps 455200/2000000, FPS 326.

team_policy eval, per agent (after update 2275; avg team episode reward: 82.5, team catch total: 33):
  agent | avg step individual reward | idv catch total
  0     | 0.5354082989387888         | 23
  1     | 0.6863897637630791         | 29
  2     | 0.689108659481029          | 29
  3     | 0.7899511224377332         | 33
  4     | 0.8614734566805268         | 36

idv_policy eval, per agent (after update 2275; avg team episode reward: 122.5, team catch total: 49):
  agent | avg step individual reward | idv catch total
  0     | 0.704088955047043          | 30
  1     | 0.9578112448935354         | 40
  2     | 1.0388801462711184         | 43
  3     | 0.531754855922067          | 23
  4     | 0.8627631608630627         | 36
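These eval blocks repeat every 25 updates with the same line layout. A minimal parsing sketch, assuming exactly the line format shown in this log (the `parse_eval` helper and its record schema are illustrative, not part of the training code):

```python
import re

# Matches the per-agent metric lines, e.g.
#   "team_policy eval average step individual rewards of agent0: 0.5354..."
# Lines without "of agentN" (the team catch totals) are deliberately skipped.
EVAL_RE = re.compile(
    r"(?P<policy>team_policy|idv_policy) eval "
    r"(?P<metric>average step individual rewards|average team episode rewards|idv catch total num)"
    r" of agent(?P<agent>\d+): (?P<value>-?[\d.]+)"
)

def parse_eval(lines):
    """Extract per-agent eval metrics from raw log lines into dict records."""
    records = []
    for line in lines:
        m = EVAL_RE.match(line.strip())
        if m:
            records.append({
                "policy": m.group("policy"),
                "metric": m.group("metric"),
                "agent": int(m.group("agent")),
                "value": float(m.group("value")),
            })
    return records
```

This keeps the per-agent series (e.g. agent0's average step individual reward over updates) easy to plot against the update index.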

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2276-2300/10000 episodes, total num timesteps 455400-460200/2000000 (200 per update), FPS 326.

team_policy eval, per agent (after update 2300; avg team episode reward: 90.0, team catch total: 36):
  agent | avg step individual reward | idv catch total
  0     | 0.6852457426916909         | 29
  1     | 0.662281352051335          | 28
  2     | 0.5578384047083174         | 24
  3     | 0.45834888086188996        | 20
  4     | 0.5842277821174033         | 25

idv_policy eval, per agent (after update 2300; avg team episode reward: 110.0, team catch total: 44):
  agent | avg step individual reward | idv catch total
  0     | 0.9950935451324926         | 41
  1     | 0.6059414049403            | 26
  2     | 1.1174289641683337         | 46
  3     | 0.5342262886202745         | 23
  4     | 0.5926920268398063         | 25

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2301-2325/10000 episodes, total num timesteps 460400-465200/2000000 (200 per update), FPS 326.

team_policy eval, per agent (after update 2325; avg team episode reward: 80.0, team catch total: 32):
  agent | avg step individual reward | idv catch total
  0     | 0.5030810596400996         | 22
  1     | 0.6736333806364744         | 29
  2     | 0.5521893800238535         | 24
  3     | 0.3977127323925829         | 18
  4     | 0.7843525812678445         | 33

idv_policy eval, per agent (after update 2325; avg team episode reward: 122.5, team catch total: 49):
  agent | avg step individual reward | idv catch total
  0     | 0.7636170379620725         | 32
  1     | 1.0241775063724288         | 42
  2     | 0.9690738644334738         | 40
  3     | 0.9170469554543438         | 38
  4     | 1.0709842548809496         | 44

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2326-2330/10000 episodes, total num timesteps 465400-466200/2000000 (200 per update), FPS 326.

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2331-2350/10000 episodes, total num timesteps 466400-470200/2000000 (200 per update), FPS 325.

team_policy eval, per agent (after update 2350; avg team episode reward: 112.5, team catch total: 45):
  agent | avg step individual reward | idv catch total
  0     | 0.6365372270177494         | 27
  1     | 0.8639147799157338         | 36
  2     | 0.9912589694917955         | 41
  3     | 0.7422466582161905         | 31
  4     | 0.6886659185020099         | 29

idv_policy eval, per agent (after update 2350; avg team episode reward: 85.0, team catch total: 34):
  agent | avg step individual reward | idv catch total
  0     | 0.38305564416106025        | 17
  1     | 0.7357513645679957         | 31
  2     | 0.8328755119988435         | 35
  3     | 0.4733443566549955         | 21
  4     | 0.4882591085915392         | 21

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2351-2375/10000 episodes, total num timesteps 470400-475200/2000000 (200 per update), FPS 325.

team_policy eval, per agent (after update 2375; avg team episode reward: 122.5, team catch total: 49):
  agent | avg step individual reward | idv catch total
  0     | 1.0701721630998016         | 44
  1     | 0.5364421950915318         | 23
  2     | 0.73514292583066           | 31
  3     | 1.0641138767652714         | 44
  4     | 0.7595380308661871         | 32

idv_policy eval, per agent (after update 2375; avg team episode reward: 92.5, team catch total: 37):
  agent | avg step individual reward | idv catch total
  0     | 0.7015356980912716         | 30
  1     | 0.3764766539226474         | 17
  2     | 1.224950664034734          | 50
  3     | 0.625052033599373          | 27
  4     | 0.5701689481740294         | 24

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2376-2400/10000 episodes, total num timesteps 475400-480200/2000000 (200 per update), FPS 325.

team_policy eval average step individual rewards of agent0: 1.2958771906288313
team_policy eval average team episode rewards of agent0: 160.0
team_policy eval idv catch total num of agent0: 53
team_policy eval team catch total num: 64
team_policy eval average step individual rewards of agent1: 1.1706565570742336
team_policy eval average team episode rewards of agent1: 160.0
team_policy eval idv catch total num of agent1: 48
team_policy eval team catch total num: 64
team_policy eval average step individual rewards of agent2: 1.115636289940097
team_policy eval average team episode rewards of agent2: 160.0
team_policy eval idv catch total num of agent2: 46
team_policy eval team catch total num: 64
team_policy eval average step individual rewards of agent3: 0.6267386691068301
team_policy eval average team episode rewards of agent3: 160.0
team_policy eval idv catch total num of agent3: 27
team_policy eval team catch total num: 64
team_policy eval average step individual rewards of agent4: 1.369388361013171
team_policy eval average team episode rewards of agent4: 160.0
team_policy eval idv catch total num of agent4: 56
team_policy eval team catch total num: 64
idv_policy eval average step individual rewards of agent0: 0.7630697024055642
idv_policy eval average team episode rewards of agent0: 105.0
idv_policy eval idv catch total num of agent0: 32
idv_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent1: 0.6412824866736797
idv_policy eval average team episode rewards of agent1: 105.0
idv_policy eval idv catch total num of agent1: 27
idv_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent2: 0.5582802432940842
idv_policy eval average team episode rewards of agent2: 105.0
idv_policy eval idv catch total num of agent2: 24
idv_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent3: 0.6132366730373205
idv_policy eval average team episode rewards of agent3: 105.0
idv_policy eval idv catch total num of agent3: 26
idv_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent4: 0.9677724282885063
idv_policy eval average team episode rewards of agent4: 105.0
idv_policy eval idv catch total num of agent4: 40
idv_policy eval team catch total num: 42
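The eval blocks above follow a fixed line format (`<policy> eval <metric>[ of agentN]: <value>`). A minimal parsing sketch for turning such blocks into records — the field labels (`policy`, `metric`, `agent`, `value`) are my own naming, not from the training codebase:

```python
import re

# Matches lines like:
#   "team_policy eval average step individual rewards of agent0: 1.29..."
#   "idv_policy eval team catch total num: 42"
# The "of agentN" suffix is optional (team-level metrics omit it).
LINE_RE = re.compile(
    r"^(team_policy|idv_policy) eval (.+?)(?: of agent(\d+))?: (-?[\d.]+)$"
)

def parse_eval_lines(lines):
    """Parse eval metric lines into a list of dicts; skips non-matching lines."""
    records = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue
        policy, metric, agent, value = m.groups()
        records.append({
            "policy": policy,
            "metric": metric,
            "agent": int(agent) if agent is not None else None,
            "value": float(value),
        })
    return records

sample = [
    "team_policy eval average step individual rewards of agent0: 1.2958771906288313",
    "team_policy eval team catch total num: 64",
]
recs = parse_eval_lines(sample)
```

This can be pointed at the whole log file to recover per-agent curves without re-running the wandb sync.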

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2401/10000 episodes, total num timesteps 480400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2402/10000 episodes, total num timesteps 480600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2403/10000 episodes, total num timesteps 480800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2404/10000 episodes, total num timesteps 481000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2405/10000 episodes, total num timesteps 481200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2406/10000 episodes, total num timesteps 481400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2407/10000 episodes, total num timesteps 481600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2408/10000 episodes, total num timesteps 481800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2409/10000 episodes, total num timesteps 482000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2410/10000 episodes, total num timesteps 482200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2411/10000 episodes, total num timesteps 482400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2412/10000 episodes, total num timesteps 482600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2413/10000 episodes, total num timesteps 482800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2414/10000 episodes, total num timesteps 483000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2415/10000 episodes, total num timesteps 483200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2416/10000 episodes, total num timesteps 483400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2417/10000 episodes, total num timesteps 483600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2418/10000 episodes, total num timesteps 483800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2419/10000 episodes, total num timesteps 484000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2420/10000 episodes, total num timesteps 484200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2421/10000 episodes, total num timesteps 484400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2422/10000 episodes, total num timesteps 484600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2423/10000 episodes, total num timesteps 484800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2424/10000 episodes, total num timesteps 485000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2425/10000 episodes, total num timesteps 485200/2000000, FPS 325.

team_policy eval average step individual rewards of agent0: 1.1904602993912392
team_policy eval average team episode rewards of agent0: 135.0
team_policy eval idv catch total num of agent0: 49
team_policy eval team catch total num: 54
team_policy eval average step individual rewards of agent1: 0.3510018918642713
team_policy eval average team episode rewards of agent1: 135.0
team_policy eval idv catch total num of agent1: 16
team_policy eval team catch total num: 54
team_policy eval average step individual rewards of agent2: 0.8638910365516556
team_policy eval average team episode rewards of agent2: 135.0
team_policy eval idv catch total num of agent2: 36
team_policy eval team catch total num: 54
team_policy eval average step individual rewards of agent3: 1.0718144877182783
team_policy eval average team episode rewards of agent3: 135.0
team_policy eval idv catch total num of agent3: 44
team_policy eval team catch total num: 54
team_policy eval average step individual rewards of agent4: 0.9144160667080081
team_policy eval average team episode rewards of agent4: 135.0
team_policy eval idv catch total num of agent4: 38
team_policy eval team catch total num: 54
idv_policy eval average step individual rewards of agent0: 0.8659763363978621
idv_policy eval average team episode rewards of agent0: 125.0
idv_policy eval idv catch total num of agent0: 36
idv_policy eval team catch total num: 50
idv_policy eval average step individual rewards of agent1: 1.0981732767374854
idv_policy eval average team episode rewards of agent1: 125.0
idv_policy eval idv catch total num of agent1: 45
idv_policy eval team catch total num: 50
idv_policy eval average step individual rewards of agent2: 0.5612587789079693
idv_policy eval average team episode rewards of agent2: 125.0
idv_policy eval idv catch total num of agent2: 24
idv_policy eval team catch total num: 50
idv_policy eval average step individual rewards of agent3: 0.9441382283493064
idv_policy eval average team episode rewards of agent3: 125.0
idv_policy eval idv catch total num of agent3: 39
idv_policy eval team catch total num: 50
idv_policy eval average step individual rewards of agent4: 0.9425006807615189
idv_policy eval average team episode rewards of agent4: 125.0
idv_policy eval idv catch total num of agent4: 39
idv_policy eval team catch total num: 50

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2426/10000 episodes, total num timesteps 485400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2427/10000 episodes, total num timesteps 485600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2428/10000 episodes, total num timesteps 485800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2429/10000 episodes, total num timesteps 486000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2430/10000 episodes, total num timesteps 486200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2431/10000 episodes, total num timesteps 486400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2432/10000 episodes, total num timesteps 486600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2433/10000 episodes, total num timesteps 486800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2434/10000 episodes, total num timesteps 487000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2435/10000 episodes, total num timesteps 487200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2436/10000 episodes, total num timesteps 487400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2437/10000 episodes, total num timesteps 487600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2438/10000 episodes, total num timesteps 487800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2439/10000 episodes, total num timesteps 488000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2440/10000 episodes, total num timesteps 488200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2441/10000 episodes, total num timesteps 488400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2442/10000 episodes, total num timesteps 488600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2443/10000 episodes, total num timesteps 488800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2444/10000 episodes, total num timesteps 489000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2445/10000 episodes, total num timesteps 489200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2446/10000 episodes, total num timesteps 489400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2447/10000 episodes, total num timesteps 489600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2448/10000 episodes, total num timesteps 489800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2449/10000 episodes, total num timesteps 490000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2450/10000 episodes, total num timesteps 490200/2000000, FPS 325.

team_policy eval average step individual rewards of agent0: 0.7915355054906335
team_policy eval average team episode rewards of agent0: 95.0
team_policy eval idv catch total num of agent0: 33
team_policy eval team catch total num: 38
team_policy eval average step individual rewards of agent1: 0.2633136674279729
team_policy eval average team episode rewards of agent1: 95.0
team_policy eval idv catch total num of agent1: 12
team_policy eval team catch total num: 38
team_policy eval average step individual rewards of agent2: 0.7889262959234068
team_policy eval average team episode rewards of agent2: 95.0
team_policy eval idv catch total num of agent2: 33
team_policy eval team catch total num: 38
team_policy eval average step individual rewards of agent3: 0.7393884592880791
team_policy eval average team episode rewards of agent3: 95.0
team_policy eval idv catch total num of agent3: 31
team_policy eval team catch total num: 38
team_policy eval average step individual rewards of agent4: 0.8935707527564208
team_policy eval average team episode rewards of agent4: 95.0
team_policy eval idv catch total num of agent4: 37
team_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent0: 0.7977862122861638
idv_policy eval average team episode rewards of agent0: 110.0
idv_policy eval idv catch total num of agent0: 33
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent1: 0.5675731320801225
idv_policy eval average team episode rewards of agent1: 110.0
idv_policy eval idv catch total num of agent1: 24
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent2: 0.8428714968226367
idv_policy eval average team episode rewards of agent2: 110.0
idv_policy eval idv catch total num of agent2: 35
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent3: 0.7362285922466425
idv_policy eval average team episode rewards of agent3: 110.0
idv_policy eval idv catch total num of agent3: 31
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent4: 0.9958942329010548
idv_policy eval average team episode rewards of agent4: 110.0
idv_policy eval idv catch total num of agent4: 41
idv_policy eval team catch total num: 44

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2451/10000 episodes, total num timesteps 490400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2452/10000 episodes, total num timesteps 490600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2453/10000 episodes, total num timesteps 490800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2454/10000 episodes, total num timesteps 491000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2455/10000 episodes, total num timesteps 491200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2456/10000 episodes, total num timesteps 491400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2457/10000 episodes, total num timesteps 491600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2458/10000 episodes, total num timesteps 491800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2459/10000 episodes, total num timesteps 492000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2460/10000 episodes, total num timesteps 492200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2461/10000 episodes, total num timesteps 492400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2462/10000 episodes, total num timesteps 492600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2463/10000 episodes, total num timesteps 492800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2464/10000 episodes, total num timesteps 493000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2465/10000 episodes, total num timesteps 493200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2466/10000 episodes, total num timesteps 493400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2467/10000 episodes, total num timesteps 493600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2468/10000 episodes, total num timesteps 493800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2469/10000 episodes, total num timesteps 494000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2470/10000 episodes, total num timesteps 494200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2471/10000 episodes, total num timesteps 494400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2472/10000 episodes, total num timesteps 494600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2473/10000 episodes, total num timesteps 494800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2474/10000 episodes, total num timesteps 495000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2475/10000 episodes, total num timesteps 495200/2000000, FPS 325.

team_policy eval average step individual rewards of agent0: 0.8353367859467776
team_policy eval average team episode rewards of agent0: 90.0
team_policy eval idv catch total num of agent0: 35
team_policy eval team catch total num: 36
team_policy eval average step individual rewards of agent1: 0.8913665199248552
team_policy eval average team episode rewards of agent1: 90.0
team_policy eval idv catch total num of agent1: 37
team_policy eval team catch total num: 36
team_policy eval average step individual rewards of agent2: 0.687712481583319
team_policy eval average team episode rewards of agent2: 90.0
team_policy eval idv catch total num of agent2: 29
team_policy eval team catch total num: 36
team_policy eval average step individual rewards of agent3: 0.5642394940020719
team_policy eval average team episode rewards of agent3: 90.0
team_policy eval idv catch total num of agent3: 24
team_policy eval team catch total num: 36
team_policy eval average step individual rewards of agent4: 0.4292053768621446
team_policy eval average team episode rewards of agent4: 90.0
team_policy eval idv catch total num of agent4: 19
team_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent0: 0.6004652591262901
idv_policy eval average team episode rewards of agent0: 112.5
idv_policy eval idv catch total num of agent0: 26
idv_policy eval team catch total num: 45
idv_policy eval average step individual rewards of agent1: 0.7347603676436397
idv_policy eval average team episode rewards of agent1: 112.5
idv_policy eval idv catch total num of agent1: 31
idv_policy eval team catch total num: 45
idv_policy eval average step individual rewards of agent2: 0.8310059924322418
idv_policy eval average team episode rewards of agent2: 112.5
idv_policy eval idv catch total num of agent2: 35
idv_policy eval team catch total num: 45
idv_policy eval average step individual rewards of agent3: 0.49344019343154727
idv_policy eval average team episode rewards of agent3: 112.5
idv_policy eval idv catch total num of agent3: 22
idv_policy eval team catch total num: 45
idv_policy eval average step individual rewards of agent4: 0.9525220319747085
idv_policy eval average team episode rewards of agent4: 112.5
idv_policy eval idv catch total num of agent4: 40
idv_policy eval team catch total num: 45

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2476/10000 episodes, total num timesteps 495400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2477/10000 episodes, total num timesteps 495600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2478/10000 episodes, total num timesteps 495800/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2479/10000 episodes, total num timesteps 496000/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2480/10000 episodes, total num timesteps 496200/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2481/10000 episodes, total num timesteps 496400/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2482/10000 episodes, total num timesteps 496600/2000000, FPS 325.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2483/10000 episodes, total num timesteps 496800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2484/10000 episodes, total num timesteps 497000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2485/10000 episodes, total num timesteps 497200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2486/10000 episodes, total num timesteps 497400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2487/10000 episodes, total num timesteps 497600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2488/10000 episodes, total num timesteps 497800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2489/10000 episodes, total num timesteps 498000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2490/10000 episodes, total num timesteps 498200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2491/10000 episodes, total num timesteps 498400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2492/10000 episodes, total num timesteps 498600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2493/10000 episodes, total num timesteps 498800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2494/10000 episodes, total num timesteps 499000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2495/10000 episodes, total num timesteps 499200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2496/10000 episodes, total num timesteps 499400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2497/10000 episodes, total num timesteps 499600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2498/10000 episodes, total num timesteps 499800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2499/10000 episodes, total num timesteps 500000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2500/10000 episodes, total num timesteps 500200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.7873453150600686
team_policy eval average team episode rewards of agent0: 107.5
team_policy eval idv catch total num of agent0: 33
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent1: 0.6357704138418461
team_policy eval average team episode rewards of agent1: 107.5
team_policy eval idv catch total num of agent1: 27
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent2: 0.9908948714246736
team_policy eval average team episode rewards of agent2: 107.5
team_policy eval idv catch total num of agent2: 41
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent3: 0.9013551612392606
team_policy eval average team episode rewards of agent3: 107.5
team_policy eval idv catch total num of agent3: 37
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent4: 0.7381019845690905
team_policy eval average team episode rewards of agent4: 107.5
team_policy eval idv catch total num of agent4: 31
team_policy eval team catch total num: 43
idv_policy eval average step individual rewards of agent0: 0.6543474198857421
idv_policy eval average team episode rewards of agent0: 87.5
idv_policy eval idv catch total num of agent0: 28
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent1: 0.8786339145752092
idv_policy eval average team episode rewards of agent1: 87.5
idv_policy eval idv catch total num of agent1: 37
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent2: 0.45767172655422317
idv_policy eval average team episode rewards of agent2: 87.5
idv_policy eval idv catch total num of agent2: 20
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent3: 0.34975518234096703
idv_policy eval average team episode rewards of agent3: 87.5
idv_policy eval idv catch total num of agent3: 16
idv_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent4: 0.7923523089577842
idv_policy eval average team episode rewards of agent4: 87.5
idv_policy eval idv catch total num of agent4: 33
idv_policy eval team catch total num: 35

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2501/10000 episodes, total num timesteps 500400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2502/10000 episodes, total num timesteps 500600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2503/10000 episodes, total num timesteps 500800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2504/10000 episodes, total num timesteps 501000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2505/10000 episodes, total num timesteps 501200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2506/10000 episodes, total num timesteps 501400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2507/10000 episodes, total num timesteps 501600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2508/10000 episodes, total num timesteps 501800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2509/10000 episodes, total num timesteps 502000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2510/10000 episodes, total num timesteps 502200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2511/10000 episodes, total num timesteps 502400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2512/10000 episodes, total num timesteps 502600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2513/10000 episodes, total num timesteps 502800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2514/10000 episodes, total num timesteps 503000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2515/10000 episodes, total num timesteps 503200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2516/10000 episodes, total num timesteps 503400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2517/10000 episodes, total num timesteps 503600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2518/10000 episodes, total num timesteps 503800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2519/10000 episodes, total num timesteps 504000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2520/10000 episodes, total num timesteps 504200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2521/10000 episodes, total num timesteps 504400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2522/10000 episodes, total num timesteps 504600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2523/10000 episodes, total num timesteps 504800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2524/10000 episodes, total num timesteps 505000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2525/10000 episodes, total num timesteps 505200/2000000, FPS 324.

team_policy eval: avg team episode reward 137.5, team catch total 55
  agent0: avg step individual reward 1.420657230607719, idv catch total 58
  agent1: avg step individual reward 0.6673365525862508, idv catch total 28
  agent2: avg step individual reward 0.6193240174499987, idv catch total 26
  agent3: avg step individual reward 0.9140186955712327, idv catch total 38
  agent4: avg step individual reward 0.9922907000418028, idv catch total 41
idv_policy eval: avg team episode reward 140.0, team catch total 56
  agent0: avg step individual reward 1.1781156236542076, idv catch total 48
  agent1: avg step individual reward 1.1924727175207883, idv catch total 49
  agent2: avg step individual reward 0.6404090415492564, idv catch total 27
  agent3: avg step individual reward 0.9217884100982741, idv catch total 38
  agent4: avg step individual reward 0.9172426554004726, idv catch total 38
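The recurring progress lines can be scraped for a quick completion estimate. A minimal sketch, assuming the exact counter format shown in this log (the `progress_stats` helper and its return keys are illustrative, not part of the training code):

```python
import re

# Matches the counters in the recurring progress lines, e.g.
# "... updates 2525/10000 episodes, total num timesteps 505200/2000000, FPS 324."
# The format is taken verbatim from this log.
PROG_RE = re.compile(
    r"updates (\d+)/(\d+) episodes, total num timesteps (\d+)/(\d+), FPS (\d+)"
)

def progress_stats(line):
    """Return completion fractions and a rough ETA for one progress line."""
    m = PROG_RE.search(line)
    if m is None:
        return None
    upd, upd_total, steps, steps_total, fps = map(int, m.groups())
    remaining_s = (steps_total - steps) / fps if fps else float("inf")
    return {
        "update_frac": upd / upd_total,
        "timestep_frac": steps / steps_total,
        "eta_hours": remaining_s / 3600.0,
    }
```

At update 2525 this gives roughly 25% completion and an ETA of about 1.3 hours at the reported 324 FPS.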

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2526/10000 episodes, total num timesteps 505400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2527/10000 episodes, total num timesteps 505600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2528/10000 episodes, total num timesteps 505800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2529/10000 episodes, total num timesteps 506000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2530/10000 episodes, total num timesteps 506200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2531/10000 episodes, total num timesteps 506400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2532/10000 episodes, total num timesteps 506600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2533/10000 episodes, total num timesteps 506800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2534/10000 episodes, total num timesteps 507000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2535/10000 episodes, total num timesteps 507200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2536/10000 episodes, total num timesteps 507400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2537/10000 episodes, total num timesteps 507600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2538/10000 episodes, total num timesteps 507800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2539/10000 episodes, total num timesteps 508000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2540/10000 episodes, total num timesteps 508200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2541/10000 episodes, total num timesteps 508400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2542/10000 episodes, total num timesteps 508600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2543/10000 episodes, total num timesteps 508800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2544/10000 episodes, total num timesteps 509000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2545/10000 episodes, total num timesteps 509200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2546/10000 episodes, total num timesteps 509400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2547/10000 episodes, total num timesteps 509600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2548/10000 episodes, total num timesteps 509800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2549/10000 episodes, total num timesteps 510000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2550/10000 episodes, total num timesteps 510200/2000000, FPS 324.

team_policy eval: avg team episode reward 67.5, team catch total 27
  agent0: avg step individual reward 0.6891545784062656, idv catch total 29
  agent1: avg step individual reward 0.09850593559362149, idv catch total 6
  agent2: avg step individual reward 0.6596400844137278, idv catch total 28
  agent3: avg step individual reward 0.3817169739961632, idv catch total 17
  agent4: avg step individual reward 0.4618886494382323, idv catch total 20
idv_policy eval: avg team episode reward 72.5, team catch total 29
  agent0: avg step individual reward 0.8057214061217777, idv catch total 34
  agent1: avg step individual reward 0.7314377803128977, idv catch total 31
  agent2: avg step individual reward 0.4493553006282374, idv catch total 20
  agent3: avg step individual reward 0.6156431563617896, idv catch total 26
  agent4: avg step individual reward 0.6734077926955605, idv catch total 29

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2551/10000 episodes, total num timesteps 510400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2552/10000 episodes, total num timesteps 510600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2553/10000 episodes, total num timesteps 510800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2554/10000 episodes, total num timesteps 511000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2555/10000 episodes, total num timesteps 511200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2556/10000 episodes, total num timesteps 511400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2557/10000 episodes, total num timesteps 511600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2558/10000 episodes, total num timesteps 511800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2559/10000 episodes, total num timesteps 512000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2560/10000 episodes, total num timesteps 512200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2561/10000 episodes, total num timesteps 512400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2562/10000 episodes, total num timesteps 512600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2563/10000 episodes, total num timesteps 512800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2564/10000 episodes, total num timesteps 513000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2565/10000 episodes, total num timesteps 513200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2566/10000 episodes, total num timesteps 513400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2567/10000 episodes, total num timesteps 513600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2568/10000 episodes, total num timesteps 513800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2569/10000 episodes, total num timesteps 514000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2570/10000 episodes, total num timesteps 514200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2571/10000 episodes, total num timesteps 514400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2572/10000 episodes, total num timesteps 514600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2573/10000 episodes, total num timesteps 514800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2574/10000 episodes, total num timesteps 515000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2575/10000 episodes, total num timesteps 515200/2000000, FPS 324.

team_policy eval: avg team episode reward 105.0, team catch total 42
  agent0: avg step individual reward 0.5054833328840944, idv catch total 22
  agent1: avg step individual reward 1.118448576364047, idv catch total 46
  agent2: avg step individual reward 0.9921763884975522, idv catch total 41
  agent3: avg step individual reward 0.9178299533806453, idv catch total 38
  agent4: avg step individual reward 0.7688499105197173, idv catch total 32
idv_policy eval: avg team episode reward 92.5, team catch total 37
  agent0: avg step individual reward 0.5162168600102981, idv catch total 22
  agent1: avg step individual reward 0.9134908384591136, idv catch total 38
  agent2: avg step individual reward 0.837454032324564, idv catch total 35
  agent3: avg step individual reward 0.5632915673753456, idv catch total 24
  agent4: avg step individual reward 0.35937868748696006, idv catch total 16

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2576/10000 episodes, total num timesteps 515400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2577/10000 episodes, total num timesteps 515600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2578/10000 episodes, total num timesteps 515800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2579/10000 episodes, total num timesteps 516000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2580/10000 episodes, total num timesteps 516200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2581/10000 episodes, total num timesteps 516400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2582/10000 episodes, total num timesteps 516600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2583/10000 episodes, total num timesteps 516800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2584/10000 episodes, total num timesteps 517000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2585/10000 episodes, total num timesteps 517200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2586/10000 episodes, total num timesteps 517400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2587/10000 episodes, total num timesteps 517600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2588/10000 episodes, total num timesteps 517800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2589/10000 episodes, total num timesteps 518000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2590/10000 episodes, total num timesteps 518200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2591/10000 episodes, total num timesteps 518400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2592/10000 episodes, total num timesteps 518600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2593/10000 episodes, total num timesteps 518800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2594/10000 episodes, total num timesteps 519000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2595/10000 episodes, total num timesteps 519200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2596/10000 episodes, total num timesteps 519400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2597/10000 episodes, total num timesteps 519600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2598/10000 episodes, total num timesteps 519800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2599/10000 episodes, total num timesteps 520000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2600/10000 episodes, total num timesteps 520200/2000000, FPS 324.

team_policy eval: avg team episode reward 127.5, team catch total 51
  agent0: avg step individual reward 1.088102811775611, idv catch total 45
  agent1: avg step individual reward 0.7927806833045495, idv catch total 33
  agent2: avg step individual reward 0.9987091823726822, idv catch total 41
  agent3: avg step individual reward 0.7433167183262737, idv catch total 31
  agent4: avg step individual reward 0.8371287724949705, idv catch total 35
idv_policy eval: avg team episode reward 127.5, team catch total 51
  agent0: avg step individual reward 0.7223694825150108, idv catch total 31
  agent1: avg step individual reward 0.8338269723665754, idv catch total 35
  agent2: avg step individual reward 0.5508597102037904, idv catch total 24
  agent3: avg step individual reward 0.9877253306680209, idv catch total 41
  agent4: avg step individual reward 1.2990153219436946, idv catch total 53

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2601/10000 episodes, total num timesteps 520400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2602/10000 episodes, total num timesteps 520600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2603/10000 episodes, total num timesteps 520800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2604/10000 episodes, total num timesteps 521000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2605/10000 episodes, total num timesteps 521200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2606/10000 episodes, total num timesteps 521400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2607/10000 episodes, total num timesteps 521600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2608/10000 episodes, total num timesteps 521800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2609/10000 episodes, total num timesteps 522000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2610/10000 episodes, total num timesteps 522200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2611/10000 episodes, total num timesteps 522400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2612/10000 episodes, total num timesteps 522600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2613/10000 episodes, total num timesteps 522800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2614/10000 episodes, total num timesteps 523000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2615/10000 episodes, total num timesteps 523200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2616/10000 episodes, total num timesteps 523400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2617/10000 episodes, total num timesteps 523600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2618/10000 episodes, total num timesteps 523800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2619/10000 episodes, total num timesteps 524000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2620/10000 episodes, total num timesteps 524200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2621/10000 episodes, total num timesteps 524400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2622/10000 episodes, total num timesteps 524600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2623/10000 episodes, total num timesteps 524800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2624/10000 episodes, total num timesteps 525000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2625/10000 episodes, total num timesteps 525200/2000000, FPS 324.

team_policy eval: avg team episode reward 120.0, team catch total 48
  agent0: avg step individual reward 0.4682033283028708, idv catch total 21
  agent1: avg step individual reward 1.2130563386386033, idv catch total 50
  agent2: avg step individual reward 0.949632632297579, idv catch total 39
  agent3: avg step individual reward 0.7618817534009384, idv catch total 32
  agent4: avg step individual reward 0.7579969566612125, idv catch total 32
idv_policy eval: avg team episode reward 127.5, team catch total 51
  agent0: avg step individual reward 0.7109139816263647, idv catch total 30
  agent1: avg step individual reward 1.3441879401990682, idv catch total 55
  agent2: avg step individual reward 1.0940443267348483, idv catch total 45
  agent3: avg step individual reward 0.7527471207690484, idv catch total 32
  agent4: avg step individual reward 0.8950351798397423, idv catch total 37

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2626/10000 episodes, total num timesteps 525400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2627/10000 episodes, total num timesteps 525600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2628/10000 episodes, total num timesteps 525800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2629/10000 episodes, total num timesteps 526000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2630/10000 episodes, total num timesteps 526200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2631/10000 episodes, total num timesteps 526400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2632/10000 episodes, total num timesteps 526600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2633/10000 episodes, total num timesteps 526800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2634/10000 episodes, total num timesteps 527000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2635/10000 episodes, total num timesteps 527200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2636/10000 episodes, total num timesteps 527400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2637/10000 episodes, total num timesteps 527600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2638/10000 episodes, total num timesteps 527800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2639/10000 episodes, total num timesteps 528000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2640/10000 episodes, total num timesteps 528200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2641/10000 episodes, total num timesteps 528400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2642/10000 episodes, total num timesteps 528600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2643/10000 episodes, total num timesteps 528800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2644/10000 episodes, total num timesteps 529000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2645/10000 episodes, total num timesteps 529200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2646/10000 episodes, total num timesteps 529400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2647/10000 episodes, total num timesteps 529600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2648/10000 episodes, total num timesteps 529800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2649/10000 episodes, total num timesteps 530000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2650/10000 episodes, total num timesteps 530200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.861612441260691
team_policy eval average team episode rewards of agent0: 122.5
team_policy eval idv catch total num of agent0: 36
team_policy eval team catch total num: 49
team_policy eval average step individual rewards of agent1: 0.9921377382040477
team_policy eval average team episode rewards of agent1: 122.5
team_policy eval idv catch total num of agent1: 41
team_policy eval team catch total num: 49
team_policy eval average step individual rewards of agent2: 1.0191360432434535
team_policy eval average team episode rewards of agent2: 122.5
team_policy eval idv catch total num of agent2: 42
team_policy eval team catch total num: 49
team_policy eval average step individual rewards of agent3: 0.6143775360982017
team_policy eval average team episode rewards of agent3: 122.5
team_policy eval idv catch total num of agent3: 26
team_policy eval team catch total num: 49
team_policy eval average step individual rewards of agent4: 0.688857236911497
team_policy eval average team episode rewards of agent4: 122.5
team_policy eval idv catch total num of agent4: 29
team_policy eval team catch total num: 49
idv_policy eval average step individual rewards of agent0: 0.8316460629578825
idv_policy eval average team episode rewards of agent0: 110.0
idv_policy eval idv catch total num of agent0: 35
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent1: 1.0138511608408924
idv_policy eval average team episode rewards of agent1: 110.0
idv_policy eval idv catch total num of agent1: 42
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent2: 0.85091431319583
idv_policy eval average team episode rewards of agent2: 110.0
idv_policy eval idv catch total num of agent2: 36
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent3: 0.8314926765471945
idv_policy eval average team episode rewards of agent3: 110.0
idv_policy eval idv catch total num of agent3: 35
idv_policy eval team catch total num: 44
idv_policy eval average step individual rewards of agent4: 0.6983927125465377
idv_policy eval average team episode rewards of agent4: 110.0
idv_policy eval idv catch total num of agent4: 30
idv_policy eval team catch total num: 44
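Each evaluation dump above follows a fixed shape: per-agent `"<policy> eval <metric> of agent<N>: <value>"` lines interleaved with a shared `"<policy> eval team catch total num: <value>"` line, once for `team_policy` and once for `idv_policy`. A minimal sketch of pulling that into a nested dict, assuming only the line formats visible in this log (the function name and return structure are illustrative, not part of the training code):

```python
import re

# Matches lines such as:
#   team_policy eval average step individual rewards of agent0: 0.861612441260691
AGENT_RE = re.compile(
    r"^(?P<policy>team_policy|idv_policy) eval (?P<metric>.+?)"
    r" of agent(?P<agent>\d+): (?P<value>-?\d+(?:\.\d+)?)$"
)
# Matches lines such as:
#   team_policy eval team catch total num: 49
TEAM_RE = re.compile(
    r"^(?P<policy>team_policy|idv_policy) eval team catch total num: (?P<value>\d+)$"
)

def parse_eval_block(lines):
    """Collect per-agent metrics and the shared team catch count per policy."""
    out = {}
    for raw in lines:
        line = raw.strip()
        m = AGENT_RE.match(line)
        if m:
            policy = out.setdefault(m["policy"], {"agents": {}, "team_catch": None})
            agent = policy["agents"].setdefault(int(m["agent"]), {})
            agent[m["metric"]] = float(m["value"])
            continue
        m = TEAM_RE.match(line)
        if m:
            policy = out.setdefault(m["policy"], {"agents": {}, "team_catch": None})
            policy["team_catch"] = int(m["value"])
    return out
```

The team catch line repeats once per agent in the log but carries a single value per policy, so the parser simply overwrites it with the same number each time.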

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2651/10000 episodes, total num timesteps 530400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2652/10000 episodes, total num timesteps 530600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2653/10000 episodes, total num timesteps 530800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2654/10000 episodes, total num timesteps 531000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2655/10000 episodes, total num timesteps 531200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2656/10000 episodes, total num timesteps 531400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2657/10000 episodes, total num timesteps 531600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2658/10000 episodes, total num timesteps 531800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2659/10000 episodes, total num timesteps 532000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2660/10000 episodes, total num timesteps 532200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2661/10000 episodes, total num timesteps 532400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2662/10000 episodes, total num timesteps 532600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2663/10000 episodes, total num timesteps 532800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2664/10000 episodes, total num timesteps 533000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2665/10000 episodes, total num timesteps 533200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2666/10000 episodes, total num timesteps 533400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2667/10000 episodes, total num timesteps 533600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2668/10000 episodes, total num timesteps 533800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2669/10000 episodes, total num timesteps 534000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2670/10000 episodes, total num timesteps 534200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2671/10000 episodes, total num timesteps 534400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2672/10000 episodes, total num timesteps 534600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2673/10000 episodes, total num timesteps 534800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2674/10000 episodes, total num timesteps 535000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2675/10000 episodes, total num timesteps 535200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 1.1215459858982009
team_policy eval average team episode rewards of agent0: 125.0
team_policy eval idv catch total num of agent0: 46
team_policy eval team catch total num: 50
team_policy eval average step individual rewards of agent1: 0.5089346236503377
team_policy eval average team episode rewards of agent1: 125.0
team_policy eval idv catch total num of agent1: 22
team_policy eval team catch total num: 50
team_policy eval average step individual rewards of agent2: 0.808259300954708
team_policy eval average team episode rewards of agent2: 125.0
team_policy eval idv catch total num of agent2: 34
team_policy eval team catch total num: 50
team_policy eval average step individual rewards of agent3: 0.837712315657547
team_policy eval average team episode rewards of agent3: 125.0
team_policy eval idv catch total num of agent3: 35
team_policy eval team catch total num: 50
team_policy eval average step individual rewards of agent4: 0.8870652391677152
team_policy eval average team episode rewards of agent4: 125.0
team_policy eval idv catch total num of agent4: 37
team_policy eval team catch total num: 50
idv_policy eval average step individual rewards of agent0: 1.1417828057323887
idv_policy eval average team episode rewards of agent0: 115.0
idv_policy eval idv catch total num of agent0: 47
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent1: 0.6625023506233724
idv_policy eval average team episode rewards of agent1: 115.0
idv_policy eval idv catch total num of agent1: 28
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent2: 1.4244026887214942
idv_policy eval average team episode rewards of agent2: 115.0
idv_policy eval idv catch total num of agent2: 58
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent3: 0.8413425573433405
idv_policy eval average team episode rewards of agent3: 115.0
idv_policy eval idv catch total num of agent3: 35
idv_policy eval team catch total num: 46
idv_policy eval average step individual rewards of agent4: 0.4357303294616851
idv_policy eval average team episode rewards of agent4: 115.0
idv_policy eval idv catch total num of agent4: 19
idv_policy eval team catch total num: 46

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2676/10000 episodes, total num timesteps 535400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2677/10000 episodes, total num timesteps 535600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2678/10000 episodes, total num timesteps 535800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2679/10000 episodes, total num timesteps 536000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2680/10000 episodes, total num timesteps 536200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2681/10000 episodes, total num timesteps 536400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2682/10000 episodes, total num timesteps 536600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2683/10000 episodes, total num timesteps 536800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2684/10000 episodes, total num timesteps 537000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2685/10000 episodes, total num timesteps 537200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2686/10000 episodes, total num timesteps 537400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2687/10000 episodes, total num timesteps 537600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2688/10000 episodes, total num timesteps 537800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2689/10000 episodes, total num timesteps 538000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2690/10000 episodes, total num timesteps 538200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2691/10000 episodes, total num timesteps 538400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2692/10000 episodes, total num timesteps 538600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2693/10000 episodes, total num timesteps 538800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2694/10000 episodes, total num timesteps 539000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2695/10000 episodes, total num timesteps 539200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2696/10000 episodes, total num timesteps 539400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2697/10000 episodes, total num timesteps 539600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2698/10000 episodes, total num timesteps 539800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2699/10000 episodes, total num timesteps 540000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2700/10000 episodes, total num timesteps 540200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 1.0470907458653387
team_policy eval average team episode rewards of agent0: 112.5
team_policy eval idv catch total num of agent0: 43
team_policy eval team catch total num: 45
team_policy eval average step individual rewards of agent1: 0.8885534179251598
team_policy eval average team episode rewards of agent1: 112.5
team_policy eval idv catch total num of agent1: 37
team_policy eval team catch total num: 45
team_policy eval average step individual rewards of agent2: 0.8685653564379819
team_policy eval average team episode rewards of agent2: 112.5
team_policy eval idv catch total num of agent2: 36
team_policy eval team catch total num: 45
team_policy eval average step individual rewards of agent3: 0.660726893626789
team_policy eval average team episode rewards of agent3: 112.5
team_policy eval idv catch total num of agent3: 28
team_policy eval team catch total num: 45
team_policy eval average step individual rewards of agent4: 0.4086659909278789
team_policy eval average team episode rewards of agent4: 112.5
team_policy eval idv catch total num of agent4: 18
team_policy eval team catch total num: 45
idv_policy eval average step individual rewards of agent0: 0.9628677047552245
idv_policy eval average team episode rewards of agent0: 142.5
idv_policy eval idv catch total num of agent0: 40
idv_policy eval team catch total num: 57
idv_policy eval average step individual rewards of agent1: 0.8818109585511594
idv_policy eval average team episode rewards of agent1: 142.5
idv_policy eval idv catch total num of agent1: 37
idv_policy eval team catch total num: 57
idv_policy eval average step individual rewards of agent2: 0.941683999646351
idv_policy eval average team episode rewards of agent2: 142.5
idv_policy eval idv catch total num of agent2: 39
idv_policy eval team catch total num: 57
idv_policy eval average step individual rewards of agent3: 0.9434289344593398
idv_policy eval average team episode rewards of agent3: 142.5
idv_policy eval idv catch total num of agent3: 39
idv_policy eval team catch total num: 57
idv_policy eval average step individual rewards of agent4: 0.9873564135264232
idv_policy eval average team episode rewards of agent4: 142.5
idv_policy eval idv catch total num of agent4: 41
idv_policy eval team catch total num: 57

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2701/10000 episodes, total num timesteps 540400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2702/10000 episodes, total num timesteps 540600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2703/10000 episodes, total num timesteps 540800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2704/10000 episodes, total num timesteps 541000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2705/10000 episodes, total num timesteps 541200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2706/10000 episodes, total num timesteps 541400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2707/10000 episodes, total num timesteps 541600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2708/10000 episodes, total num timesteps 541800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2709/10000 episodes, total num timesteps 542000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2710/10000 episodes, total num timesteps 542200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2711/10000 episodes, total num timesteps 542400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2712/10000 episodes, total num timesteps 542600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2713/10000 episodes, total num timesteps 542800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2714/10000 episodes, total num timesteps 543000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2715/10000 episodes, total num timesteps 543200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2716/10000 episodes, total num timesteps 543400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2717/10000 episodes, total num timesteps 543600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2718/10000 episodes, total num timesteps 543800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2719/10000 episodes, total num timesteps 544000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2720/10000 episodes, total num timesteps 544200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2721/10000 episodes, total num timesteps 544400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2722/10000 episodes, total num timesteps 544600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2723/10000 episodes, total num timesteps 544800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2724/10000 episodes, total num timesteps 545000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2725/10000 episodes, total num timesteps 545200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.8128311153937642
team_policy eval average team episode rewards of agent0: 95.0
team_policy eval idv catch total num of agent0: 34
team_policy eval team catch total num: 38
team_policy eval average step individual rewards of agent1: 0.8147186367649766
team_policy eval average team episode rewards of agent1: 95.0
team_policy eval idv catch total num of agent1: 34
team_policy eval team catch total num: 38
team_policy eval average step individual rewards of agent2: 0.9851603031650497
team_policy eval average team episode rewards of agent2: 95.0
team_policy eval idv catch total num of agent2: 41
team_policy eval team catch total num: 38
team_policy eval average step individual rewards of agent3: 0.4188358520818552
team_policy eval average team episode rewards of agent3: 95.0
team_policy eval idv catch total num of agent3: 19
team_policy eval team catch total num: 38
team_policy eval average step individual rewards of agent4: 0.7794079158572802
team_policy eval average team episode rewards of agent4: 95.0
team_policy eval idv catch total num of agent4: 33
team_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent0: 0.5112520110358126
idv_policy eval average team episode rewards of agent0: 65.0
idv_policy eval idv catch total num of agent0: 22
idv_policy eval team catch total num: 26
idv_policy eval average step individual rewards of agent1: 0.35758420553591674
idv_policy eval average team episode rewards of agent1: 65.0
idv_policy eval idv catch total num of agent1: 16
idv_policy eval team catch total num: 26
idv_policy eval average step individual rewards of agent2: 0.5794790997368152
idv_policy eval average team episode rewards of agent2: 65.0
idv_policy eval idv catch total num of agent2: 25
idv_policy eval team catch total num: 26
idv_policy eval average step individual rewards of agent3: 0.5270012649622019
idv_policy eval average team episode rewards of agent3: 65.0
idv_policy eval idv catch total num of agent3: 23
idv_policy eval team catch total num: 26
idv_policy eval average step individual rewards of agent4: 0.46276362781508745
idv_policy eval average team episode rewards of agent4: 65.0
idv_policy eval idv catch total num of agent4: 20
idv_policy eval team catch total num: 26

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2726/10000 episodes, total num timesteps 545400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2727/10000 episodes, total num timesteps 545600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2728/10000 episodes, total num timesteps 545800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2729/10000 episodes, total num timesteps 546000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2730/10000 episodes, total num timesteps 546200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2731/10000 episodes, total num timesteps 546400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2732/10000 episodes, total num timesteps 546600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2733/10000 episodes, total num timesteps 546800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2734/10000 episodes, total num timesteps 547000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2735/10000 episodes, total num timesteps 547200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2736/10000 episodes, total num timesteps 547400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2737/10000 episodes, total num timesteps 547600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2738/10000 episodes, total num timesteps 547800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2739/10000 episodes, total num timesteps 548000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2740/10000 episodes, total num timesteps 548200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2741/10000 episodes, total num timesteps 548400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2742/10000 episodes, total num timesteps 548600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2743/10000 episodes, total num timesteps 548800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2744/10000 episodes, total num timesteps 549000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2745/10000 episodes, total num timesteps 549200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2746/10000 episodes, total num timesteps 549400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2747/10000 episodes, total num timesteps 549600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2748/10000 episodes, total num timesteps 549800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2749/10000 episodes, total num timesteps 550000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2750/10000 episodes, total num timesteps 550200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.8272409774270676
team_policy eval average team episode rewards of agent0: 130.0
team_policy eval idv catch total num of agent0: 35
team_policy eval team catch total num: 52
team_policy eval average step individual rewards of agent1: 0.8061815675289802
team_policy eval average team episode rewards of agent1: 130.0
team_policy eval idv catch total num of agent1: 34
team_policy eval team catch total num: 52
team_policy eval average step individual rewards of agent2: 1.2754785338343693
team_policy eval average team episode rewards of agent2: 130.0
team_policy eval idv catch total num of agent2: 52
team_policy eval team catch total num: 52
team_policy eval average step individual rewards of agent3: 0.8063532878673024
team_policy eval average team episode rewards of agent3: 130.0
team_policy eval idv catch total num of agent3: 34
team_policy eval team catch total num: 52
team_policy eval average step individual rewards of agent4: 0.9115478148716083
team_policy eval average team episode rewards of agent4: 130.0
team_policy eval idv catch total num of agent4: 38
team_policy eval team catch total num: 52
idv_policy eval average step individual rewards of agent0: 0.5553948539186163
idv_policy eval average team episode rewards of agent0: 85.0
idv_policy eval idv catch total num of agent0: 24
idv_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent1: 0.7909849762630882
idv_policy eval average team episode rewards of agent1: 85.0
idv_policy eval idv catch total num of agent1: 33
idv_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent2: 0.295069260980808
idv_policy eval average team episode rewards of agent2: 85.0
idv_policy eval idv catch total num of agent2: 14
idv_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent3: 0.44915456101425916
idv_policy eval average team episode rewards of agent3: 85.0
idv_policy eval idv catch total num of agent3: 20
idv_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent4: 0.9136243546949161
idv_policy eval average team episode rewards of agent4: 85.0
idv_policy eval idv catch total num of agent4: 38
idv_policy eval team catch total num: 34

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2751/10000 episodes, total num timesteps 550400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2752/10000 episodes, total num timesteps 550600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2753/10000 episodes, total num timesteps 550800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2754/10000 episodes, total num timesteps 551000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2755/10000 episodes, total num timesteps 551200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2756/10000 episodes, total num timesteps 551400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2757/10000 episodes, total num timesteps 551600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2758/10000 episodes, total num timesteps 551800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2759/10000 episodes, total num timesteps 552000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2760/10000 episodes, total num timesteps 552200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2761/10000 episodes, total num timesteps 552400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2762/10000 episodes, total num timesteps 552600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2763/10000 episodes, total num timesteps 552800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2764/10000 episodes, total num timesteps 553000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2765/10000 episodes, total num timesteps 553200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2766/10000 episodes, total num timesteps 553400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2767/10000 episodes, total num timesteps 553600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2768/10000 episodes, total num timesteps 553800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2769/10000 episodes, total num timesteps 554000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2770/10000 episodes, total num timesteps 554200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2771/10000 episodes, total num timesteps 554400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2772/10000 episodes, total num timesteps 554600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2773/10000 episodes, total num timesteps 554800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2774/10000 episodes, total num timesteps 555000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2775/10000 episodes, total num timesteps 555200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.47290023144448623
team_policy eval average team episode rewards of agent0: 75.0
team_policy eval idv catch total num of agent0: 21
team_policy eval team catch total num: 30
team_policy eval average step individual rewards of agent1: 0.48503639088505623
team_policy eval average team episode rewards of agent1: 75.0
team_policy eval idv catch total num of agent1: 21
team_policy eval team catch total num: 30
team_policy eval average step individual rewards of agent2: 0.24842707306215456
team_policy eval average team episode rewards of agent2: 75.0
team_policy eval idv catch total num of agent2: 12
team_policy eval team catch total num: 30
team_policy eval average step individual rewards of agent3: 0.598779524427959
team_policy eval average team episode rewards of agent3: 75.0
team_policy eval idv catch total num of agent3: 26
team_policy eval team catch total num: 30
team_policy eval average step individual rewards of agent4: 0.5760182850516552
team_policy eval average team episode rewards of agent4: 75.0
team_policy eval idv catch total num of agent4: 25
team_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent0: 0.7376978681311063
idv_policy eval average team episode rewards of agent0: 157.5
idv_policy eval idv catch total num of agent0: 31
idv_policy eval team catch total num: 63
idv_policy eval average step individual rewards of agent1: 1.153458787877688
idv_policy eval average team episode rewards of agent1: 157.5
idv_policy eval idv catch total num of agent1: 47
idv_policy eval team catch total num: 63
idv_policy eval average step individual rewards of agent2: 1.1926550162086351
idv_policy eval average team episode rewards of agent2: 157.5
idv_policy eval idv catch total num of agent2: 49
idv_policy eval team catch total num: 63
idv_policy eval average step individual rewards of agent3: 1.0701726398703986
idv_policy eval average team episode rewards of agent3: 157.5
idv_policy eval idv catch total num of agent3: 44
idv_policy eval team catch total num: 63
idv_policy eval average step individual rewards of agent4: 1.013794806494337
idv_policy eval average team episode rewards of agent4: 157.5
idv_policy eval idv catch total num of agent4: 42
idv_policy eval team catch total num: 63
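The progress and eval lines above follow a fixed format, so they can be scraped with a couple of regexes for offline analysis. A minimal sketch (the helper `parse_log` and its field names are hypothetical, not part of the training code):

```python
import re

# Matches the per-update progress lines, e.g.
# "... updates 2775/10000 episodes, total num timesteps 555200/2000000, FPS 324."
STEP_RE = re.compile(
    r"updates (\d+)/(\d+) episodes, total num timesteps (\d+)/(\d+), FPS (\d+)"
)

# Matches the per-agent eval reward lines, e.g.
# "team_policy eval average step individual rewards of agent0: 0.4729..."
EVAL_RE = re.compile(
    r"(team_policy|idv_policy) eval average step individual rewards "
    r"of agent(\d+): (-?[\d.]+)"
)

def parse_log(lines):
    """Collect per-update progress and per-agent eval rewards from raw log lines."""
    steps, rewards = [], {}
    for line in lines:
        if m := STEP_RE.search(line):
            steps.append({"update": int(m.group(1)),
                          "timesteps": int(m.group(3)),
                          "fps": int(m.group(5))})
        elif m := EVAL_RE.search(line):
            # rewards[policy][agent_id] -> average step individual reward
            rewards.setdefault(m.group(1), {})[int(m.group(2))] = float(m.group(3))
    return steps, rewards

sample = [
    " Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train updates 2775/10000 "
    "episodes, total num timesteps 555200/2000000, FPS 324.",
    "team_policy eval average step individual rewards of agent0: 0.47290023144448623",
]
steps, rewards = parse_log(sample)
print(steps[0]["update"], rewards["team_policy"][0])
```

The parsed records can then be fed into pandas or matplotlib to compare the `team_policy` and `idv_policy` eval curves over timesteps.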

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2776/10000 episodes, total num timesteps 555400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2777/10000 episodes, total num timesteps 555600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2778/10000 episodes, total num timesteps 555800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2779/10000 episodes, total num timesteps 556000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2780/10000 episodes, total num timesteps 556200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2781/10000 episodes, total num timesteps 556400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2782/10000 episodes, total num timesteps 556600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2783/10000 episodes, total num timesteps 556800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2784/10000 episodes, total num timesteps 557000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2785/10000 episodes, total num timesteps 557200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2786/10000 episodes, total num timesteps 557400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2787/10000 episodes, total num timesteps 557600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2788/10000 episodes, total num timesteps 557800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2789/10000 episodes, total num timesteps 558000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2790/10000 episodes, total num timesteps 558200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2791/10000 episodes, total num timesteps 558400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2792/10000 episodes, total num timesteps 558600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2793/10000 episodes, total num timesteps 558800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2794/10000 episodes, total num timesteps 559000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2795/10000 episodes, total num timesteps 559200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2796/10000 episodes, total num timesteps 559400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2797/10000 episodes, total num timesteps 559600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2798/10000 episodes, total num timesteps 559800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2799/10000 episodes, total num timesteps 560000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2800/10000 episodes, total num timesteps 560200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.7678273333371304
team_policy eval average team episode rewards of agent0: 75.0
team_policy eval idv catch total num of agent0: 32
team_policy eval team catch total num: 30
team_policy eval average step individual rewards of agent1: 0.9067401603016458
team_policy eval average team episode rewards of agent1: 75.0
team_policy eval idv catch total num of agent1: 38
team_policy eval team catch total num: 30
team_policy eval average step individual rewards of agent2: 0.35974809614726494
team_policy eval average team episode rewards of agent2: 75.0
team_policy eval idv catch total num of agent2: 16
team_policy eval team catch total num: 30
team_policy eval average step individual rewards of agent3: 0.717116913817439
team_policy eval average team episode rewards of agent3: 75.0
team_policy eval idv catch total num of agent3: 30
team_policy eval team catch total num: 30
team_policy eval average step individual rewards of agent4: 0.7820706351204134
team_policy eval average team episode rewards of agent4: 75.0
team_policy eval idv catch total num of agent4: 33
team_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent0: 0.4335723574811757
idv_policy eval average team episode rewards of agent0: 62.5
idv_policy eval idv catch total num of agent0: 19
idv_policy eval team catch total num: 25
idv_policy eval average step individual rewards of agent1: 0.5621620538506175
idv_policy eval average team episode rewards of agent1: 62.5
idv_policy eval idv catch total num of agent1: 24
idv_policy eval team catch total num: 25
idv_policy eval average step individual rewards of agent2: 0.5799236335204522
idv_policy eval average team episode rewards of agent2: 62.5
idv_policy eval idv catch total num of agent2: 25
idv_policy eval team catch total num: 25
idv_policy eval average step individual rewards of agent3: 0.33038667859523557
idv_policy eval average team episode rewards of agent3: 62.5
idv_policy eval idv catch total num of agent3: 15
idv_policy eval team catch total num: 25
idv_policy eval average step individual rewards of agent4: 0.35645235956285104
idv_policy eval average team episode rewards of agent4: 62.5
idv_policy eval idv catch total num of agent4: 16
idv_policy eval team catch total num: 25

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2801/10000 episodes, total num timesteps 560400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2802/10000 episodes, total num timesteps 560600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2803/10000 episodes, total num timesteps 560800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2804/10000 episodes, total num timesteps 561000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2805/10000 episodes, total num timesteps 561200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2806/10000 episodes, total num timesteps 561400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2807/10000 episodes, total num timesteps 561600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2808/10000 episodes, total num timesteps 561800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2809/10000 episodes, total num timesteps 562000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2810/10000 episodes, total num timesteps 562200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2811/10000 episodes, total num timesteps 562400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2812/10000 episodes, total num timesteps 562600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2813/10000 episodes, total num timesteps 562800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2814/10000 episodes, total num timesteps 563000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2815/10000 episodes, total num timesteps 563200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2816/10000 episodes, total num timesteps 563400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2817/10000 episodes, total num timesteps 563600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2818/10000 episodes, total num timesteps 563800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2819/10000 episodes, total num timesteps 564000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2820/10000 episodes, total num timesteps 564200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2821/10000 episodes, total num timesteps 564400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2822/10000 episodes, total num timesteps 564600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2823/10000 episodes, total num timesteps 564800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2824/10000 episodes, total num timesteps 565000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2825/10000 episodes, total num timesteps 565200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.5508587790279108
team_policy eval average team episode rewards of agent0: 105.0
team_policy eval idv catch total num of agent0: 24
team_policy eval team catch total num: 42
team_policy eval average step individual rewards of agent1: 1.141226673317866
team_policy eval average team episode rewards of agent1: 105.0
team_policy eval idv catch total num of agent1: 47
team_policy eval team catch total num: 42
team_policy eval average step individual rewards of agent2: 0.5846291832813931
team_policy eval average team episode rewards of agent2: 105.0
team_policy eval idv catch total num of agent2: 25
team_policy eval team catch total num: 42
team_policy eval average step individual rewards of agent3: 0.6820716198097508
team_policy eval average team episode rewards of agent3: 105.0
team_policy eval idv catch total num of agent3: 29
team_policy eval team catch total num: 42
team_policy eval average step individual rewards of agent4: 1.0930189578896374
team_policy eval average team episode rewards of agent4: 105.0
team_policy eval idv catch total num of agent4: 45
team_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent0: 0.6275132160156826
idv_policy eval average team episode rewards of agent0: 90.0
idv_policy eval idv catch total num of agent0: 27
idv_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent1: 0.5777781340196956
idv_policy eval average team episode rewards of agent1: 90.0
idv_policy eval idv catch total num of agent1: 25
idv_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent2: 1.142952131065095
idv_policy eval average team episode rewards of agent2: 90.0
idv_policy eval idv catch total num of agent2: 47
idv_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent3: 0.5356722962553607
idv_policy eval average team episode rewards of agent3: 90.0
idv_policy eval idv catch total num of agent3: 24
idv_policy eval team catch total num: 36
idv_policy eval average step individual rewards of agent4: 0.5231172404656749
idv_policy eval average team episode rewards of agent4: 90.0
idv_policy eval idv catch total num of agent4: 23
idv_policy eval team catch total num: 36

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2826/10000 episodes, total num timesteps 565400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2827/10000 episodes, total num timesteps 565600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2828/10000 episodes, total num timesteps 565800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2829/10000 episodes, total num timesteps 566000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2830/10000 episodes, total num timesteps 566200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2831/10000 episodes, total num timesteps 566400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2832/10000 episodes, total num timesteps 566600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2833/10000 episodes, total num timesteps 566800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2834/10000 episodes, total num timesteps 567000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2835/10000 episodes, total num timesteps 567200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2836/10000 episodes, total num timesteps 567400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2837/10000 episodes, total num timesteps 567600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2838/10000 episodes, total num timesteps 567800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2839/10000 episodes, total num timesteps 568000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2840/10000 episodes, total num timesteps 568200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2841/10000 episodes, total num timesteps 568400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2842/10000 episodes, total num timesteps 568600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2843/10000 episodes, total num timesteps 568800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2844/10000 episodes, total num timesteps 569000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2845/10000 episodes, total num timesteps 569200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2846/10000 episodes, total num timesteps 569400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2847/10000 episodes, total num timesteps 569600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2848/10000 episodes, total num timesteps 569800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2849/10000 episodes, total num timesteps 570000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2850/10000 episodes, total num timesteps 570200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.24928399277316007
team_policy eval average team episode rewards of agent0: 80.0
team_policy eval idv catch total num of agent0: 12
team_policy eval team catch total num: 32
team_policy eval average step individual rewards of agent1: 0.9706321831217576
team_policy eval average team episode rewards of agent1: 80.0
team_policy eval idv catch total num of agent1: 40
team_policy eval team catch total num: 32
team_policy eval average step individual rewards of agent2: 0.8405167905177666
team_policy eval average team episode rewards of agent2: 80.0
team_policy eval idv catch total num of agent2: 35
team_policy eval team catch total num: 32
team_policy eval average step individual rewards of agent3: 0.16905396425226993
team_policy eval average team episode rewards of agent3: 80.0
team_policy eval idv catch total num of agent3: 9
team_policy eval team catch total num: 32
team_policy eval average step individual rewards of agent4: 0.8870936153259135
team_policy eval average team episode rewards of agent4: 80.0
team_policy eval idv catch total num of agent4: 37
team_policy eval team catch total num: 32
idv_policy eval average step individual rewards of agent0: 1.0174106750404508
idv_policy eval average team episode rewards of agent0: 102.5
idv_policy eval idv catch total num of agent0: 42
idv_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent1: 0.43097143433518
idv_policy eval average team episode rewards of agent1: 102.5
idv_policy eval idv catch total num of agent1: 19
idv_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent2: 0.6823217562887309
idv_policy eval average team episode rewards of agent2: 102.5
idv_policy eval idv catch total num of agent2: 29
idv_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent3: 0.6648445968322078
idv_policy eval average team episode rewards of agent3: 102.5
idv_policy eval idv catch total num of agent3: 28
idv_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent4: 0.6602493726098918
idv_policy eval average team episode rewards of agent4: 102.5
idv_policy eval idv catch total num of agent4: 28
idv_policy eval team catch total num: 41

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2851/10000 episodes, total num timesteps 570400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2852/10000 episodes, total num timesteps 570600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2853/10000 episodes, total num timesteps 570800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2854/10000 episodes, total num timesteps 571000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2855/10000 episodes, total num timesteps 571200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2856/10000 episodes, total num timesteps 571400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2857/10000 episodes, total num timesteps 571600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2858/10000 episodes, total num timesteps 571800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2859/10000 episodes, total num timesteps 572000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2860/10000 episodes, total num timesteps 572200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2861/10000 episodes, total num timesteps 572400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2862/10000 episodes, total num timesteps 572600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2863/10000 episodes, total num timesteps 572800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2864/10000 episodes, total num timesteps 573000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2865/10000 episodes, total num timesteps 573200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2866/10000 episodes, total num timesteps 573400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2867/10000 episodes, total num timesteps 573600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2868/10000 episodes, total num timesteps 573800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2869/10000 episodes, total num timesteps 574000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2870/10000 episodes, total num timesteps 574200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2871/10000 episodes, total num timesteps 574400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2872/10000 episodes, total num timesteps 574600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2873/10000 episodes, total num timesteps 574800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2874/10000 episodes, total num timesteps 575000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2875/10000 episodes, total num timesteps 575200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.5118382996572413
team_policy eval average team episode rewards of agent0: 105.0
team_policy eval idv catch total num of agent0: 22
team_policy eval team catch total num: 42
team_policy eval average step individual rewards of agent1: 0.8700048677635109
team_policy eval average team episode rewards of agent1: 105.0
team_policy eval idv catch total num of agent1: 36
team_policy eval team catch total num: 42
team_policy eval average step individual rewards of agent2: 0.8925090349252932
team_policy eval average team episode rewards of agent2: 105.0
team_policy eval idv catch total num of agent2: 37
team_policy eval team catch total num: 42
team_policy eval average step individual rewards of agent3: 0.9105931243764949
team_policy eval average team episode rewards of agent3: 105.0
team_policy eval idv catch total num of agent3: 38
team_policy eval team catch total num: 42
team_policy eval average step individual rewards of agent4: 0.5888225206017041
team_policy eval average team episode rewards of agent4: 105.0
team_policy eval idv catch total num of agent4: 25
team_policy eval team catch total num: 42
idv_policy eval average step individual rewards of agent0: 1.091900294488033
idv_policy eval average team episode rewards of agent0: 95.0
idv_policy eval idv catch total num of agent0: 45
idv_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent1: 0.6866179619225778
idv_policy eval average team episode rewards of agent1: 95.0
idv_policy eval idv catch total num of agent1: 29
idv_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent2: 0.43456263036004633
idv_policy eval average team episode rewards of agent2: 95.0
idv_policy eval idv catch total num of agent2: 19
idv_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent3: 0.560310831369132
idv_policy eval average team episode rewards of agent3: 95.0
idv_policy eval idv catch total num of agent3: 24
idv_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent4: 0.6536693441799758
idv_policy eval average team episode rewards of agent4: 95.0
idv_policy eval idv catch total num of agent4: 28
idv_policy eval team catch total num: 38

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2876/10000 episodes, total num timesteps 575400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2877/10000 episodes, total num timesteps 575600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2878/10000 episodes, total num timesteps 575800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2879/10000 episodes, total num timesteps 576000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2880/10000 episodes, total num timesteps 576200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2881/10000 episodes, total num timesteps 576400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2882/10000 episodes, total num timesteps 576600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2883/10000 episodes, total num timesteps 576800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2884/10000 episodes, total num timesteps 577000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2885/10000 episodes, total num timesteps 577200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2886/10000 episodes, total num timesteps 577400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2887/10000 episodes, total num timesteps 577600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2888/10000 episodes, total num timesteps 577800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2889/10000 episodes, total num timesteps 578000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2890/10000 episodes, total num timesteps 578200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2891/10000 episodes, total num timesteps 578400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2892/10000 episodes, total num timesteps 578600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2893/10000 episodes, total num timesteps 578800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2894/10000 episodes, total num timesteps 579000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2895/10000 episodes, total num timesteps 579200/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2896/10000 episodes, total num timesteps 579400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2897/10000 episodes, total num timesteps 579600/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2898/10000 episodes, total num timesteps 579800/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2899/10000 episodes, total num timesteps 580000/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2900/10000 episodes, total num timesteps 580200/2000000, FPS 324.

team_policy eval: average team episode rewards 120.0, team catch total num 48
  agent0: average step individual rewards 0.7141500886411675, idv catch total num 30
  agent1: average step individual rewards 1.245500063759613, idv catch total num 51
  agent2: average step individual rewards 0.6277080764473751, idv catch total num 27
  agent3: average step individual rewards 0.9047089680265055, idv catch total num 38
  agent4: average step individual rewards 1.0205153720039224, idv catch total num 42
idv_policy eval: average team episode rewards 102.5, team catch total num 41
  agent0: average step individual rewards 0.3064017908718752, idv catch total num 14
  agent1: average step individual rewards 1.118872278676332, idv catch total num 46
  agent2: average step individual rewards 0.9087352948286513, idv catch total num 38
  agent3: average step individual rewards 0.6300949271450433, idv catch total num 27
  agent4: average step individual rewards 0.45338188656449874, idv catch total num 20
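The per-agent numbers in an eval block like the one above can be summarized in a few lines. A minimal sketch, using the team_policy step-reward values copied from the update-2900 block (the variable names are my own, not part of the training code):

```python
import statistics

# "average step individual rewards" per agent, copied from the
# team_policy eval block at update 2900 in the log above.
step_rewards = {
    "agent0": 0.7141500886411675,
    "agent1": 1.245500063759613,
    "agent2": 0.6277080764473751,
    "agent3": 0.9047089680265055,
    "agent4": 1.0205153720039224,
}

# Mean and population std-dev across agents, plus the top-scoring agent.
mean_reward = statistics.mean(step_rewards.values())
spread = statistics.pstdev(step_rewards.values())
best_agent = max(step_rewards, key=step_rewards.get)
```

Here `mean_reward` comes out near 0.9025 and `best_agent` is `agent1`, matching a visual scan of the block.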

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2901/10000 episodes, total num timesteps 580400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2925/10000 episodes, total num timesteps 585200/2000000, FPS 324.

team_policy eval: average team episode rewards 102.5, team catch total num 41
  agent0: average step individual rewards 0.8677599320373554, idv catch total num 36
  agent1: average step individual rewards 0.6643752944899951, idv catch total num 28
  agent2: average step individual rewards 0.6400287082750675, idv catch total num 27
  agent3: average step individual rewards 1.1207565525768108, idv catch total num 46
  agent4: average step individual rewards 0.43792428388008986, idv catch total num 19
idv_policy eval: average team episode rewards 132.5, team catch total num 53
  agent0: average step individual rewards 1.086142619142101, idv catch total num 45
  agent1: average step individual rewards 0.6442937800786683, idv catch total num 27
  agent2: average step individual rewards 0.8136369469496058, idv catch total num 34
  agent3: average step individual rewards 0.8422658416889877, idv catch total num 35
  agent4: average step individual rewards 0.9203131389937518, idv catch total num 38

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2926/10000 episodes, total num timesteps 585400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2950/10000 episodes, total num timesteps 590200/2000000, FPS 324.

team_policy eval: average team episode rewards 115.0, team catch total num 46
  agent0: average step individual rewards 0.785184574694295, idv catch total num 33
  agent1: average step individual rewards 0.5885898758153904, idv catch total num 25
  agent2: average step individual rewards 0.7870734273018745, idv catch total num 33
  agent3: average step individual rewards 0.5883459944188278, idv catch total num 25
  agent4: average step individual rewards 1.2242949094717748, idv catch total num 50
idv_policy eval: average team episode rewards 120.0, team catch total num 48
  agent0: average step individual rewards 1.0159699950748677, idv catch total num 42
  agent1: average step individual rewards 0.7915344950237226, idv catch total num 33
  agent2: average step individual rewards 0.762410153613904, idv catch total num 32
  agent3: average step individual rewards 0.6803661395348791, idv catch total num 29
  agent4: average step individual rewards 0.8107178053227417, idv catch total num 34

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2951/10000 episodes, total num timesteps 590400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2975/10000 episodes, total num timesteps 595200/2000000, FPS 324.

team_policy eval: average team episode rewards 100.0, team catch total num 40
  agent0: average step individual rewards 0.9074202442023007, idv catch total num 38
  agent1: average step individual rewards 0.608205196317686, idv catch total num 26
  agent2: average step individual rewards 0.6385437486354817, idv catch total num 27
  agent3: average step individual rewards 0.7098369769590651, idv catch total num 30
  agent4: average step individual rewards 0.7351121167286729, idv catch total num 31
idv_policy eval: average team episode rewards 120.0, team catch total num 48
  agent0: average step individual rewards 0.48892254608729274, idv catch total num 21
  agent1: average step individual rewards 0.664055116724832, idv catch total num 28
  agent2: average step individual rewards 1.0968910533827003, idv catch total num 45
  agent3: average step individual rewards 1.325435271575269, idv catch total num 54
  agent4: average step individual rewards 0.9502681632158191, idv catch total num 39

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 2976/10000 episodes, total num timesteps 595400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3000/10000 episodes, total num timesteps 600200/2000000, FPS 324.

team_policy eval: average team episode rewards 55.0, team catch total num 22
  agent0: average step individual rewards 0.7089354952734774, idv catch total num 30
  agent1: average step individual rewards 0.49200074432695545, idv catch total num 22
  agent2: average step individual rewards 0.4268111399236449, idv catch total num 19
  agent3: average step individual rewards 0.664352509671043, idv catch total num 28
  agent4: average step individual rewards 0.34429311070525126, idv catch total num 16
idv_policy eval: average team episode rewards 102.5, team catch total num 41
  agent0: average step individual rewards 0.5579572131005975, idv catch total num 24
  agent1: average step individual rewards 0.5871363446468005, idv catch total num 25
  agent2: average step individual rewards 1.2457948368628178, idv catch total num 51
  agent3: average step individual rewards 0.8895433486061343, idv catch total num 37
  agent4: average step individual rewards 0.5887088518174853, idv catch total num 25

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3001/10000 episodes, total num timesteps 600400/2000000, FPS 324.


 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3025/10000 episodes, total num timesteps 605200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.45791432919441033
team_policy eval average team episode rewards of agent0: 92.5
team_policy eval idv catch total num of agent0: 20
team_policy eval team catch total num: 37
team_policy eval average step individual rewards of agent1: 1.0969446104655591
team_policy eval average team episode rewards of agent1: 92.5
team_policy eval idv catch total num of agent1: 45
team_policy eval team catch total num: 37
team_policy eval average step individual rewards of agent2: 0.7547974233409842
team_policy eval average team episode rewards of agent2: 92.5
team_policy eval idv catch total num of agent2: 32
team_policy eval team catch total num: 37
team_policy eval average step individual rewards of agent3: 0.7299831093461177
team_policy eval average team episode rewards of agent3: 92.5
team_policy eval idv catch total num of agent3: 31
team_policy eval team catch total num: 37
team_policy eval average step individual rewards of agent4: 0.7145387246729479
team_policy eval average team episode rewards of agent4: 92.5
team_policy eval idv catch total num of agent4: 30
team_policy eval team catch total num: 37
idv_policy eval average step individual rewards of agent0: 0.526377544431237
idv_policy eval average team episode rewards of agent0: 97.5
idv_policy eval idv catch total num of agent0: 23
idv_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent1: 0.8045498091035529
idv_policy eval average team episode rewards of agent1: 97.5
idv_policy eval idv catch total num of agent1: 34
idv_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent2: 0.5033683134234799
idv_policy eval average team episode rewards of agent2: 97.5
idv_policy eval idv catch total num of agent2: 22
idv_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent3: 0.8582159388655041
idv_policy eval average team episode rewards of agent3: 97.5
idv_policy eval idv catch total num of agent3: 36
idv_policy eval team catch total num: 39
idv_policy eval average step individual rewards of agent4: 0.6051471692556073
idv_policy eval average team episode rewards of agent4: 97.5
idv_policy eval idv catch total num of agent4: 26
idv_policy eval team catch total num: 39
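The eval blocks above repeat a fixed `<policy> eval <metric>[ of agent<N>]: <value>` line format. A minimal parsing sketch that turns one such block into nested dicts — the helper name `parse_eval_block` is illustrative, not part of the training code:

```python
import re

# Matches both per-agent lines ("... of agent0: 0.45...") and shared lines
# ("team catch total num: 37", which carry no agent suffix).
LINE = re.compile(
    r"^(?P<policy>team_policy|idv_policy) eval "
    r"(?P<metric>.+?)"
    r"(?: of agent(?P<agent>\d+))?: (?P<value>-?\d+(?:\.\d+)?)$"
)

def parse_eval_block(lines):
    """Return {policy: {agent_index_or_None: {metric: float}}}.

    Shared (non-per-agent) metrics are stored under the key None.
    """
    out = {}
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # skip progress lines, blanks, wandb banner, etc.
        agent = int(m["agent"]) if m["agent"] is not None else None
        out.setdefault(m["policy"], {}).setdefault(agent, {})[m["metric"]] = float(m["value"])
    return out
```

Feeding the whole log through this collects one metrics dict per policy, which is convenient for plotting the team vs. individual catch counts over training.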

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3026-3050/10000 episodes (one update per 200 timesteps), total num timesteps 605400-610200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.9750040564557813
team_policy eval average team episode rewards of agent0: 132.5
team_policy eval idv catch total num of agent0: 40
team_policy eval team catch total num: 53
team_policy eval average step individual rewards of agent1: 0.7907120130101736
team_policy eval average team episode rewards of agent1: 132.5
team_policy eval idv catch total num of agent1: 33
team_policy eval team catch total num: 53
team_policy eval average step individual rewards of agent2: 1.0409275584529987
team_policy eval average team episode rewards of agent2: 132.5
team_policy eval idv catch total num of agent2: 43
team_policy eval team catch total num: 53
team_policy eval average step individual rewards of agent3: 0.5589433207149118
team_policy eval average team episode rewards of agent3: 132.5
team_policy eval idv catch total num of agent3: 24
team_policy eval team catch total num: 53
team_policy eval average step individual rewards of agent4: 0.7647111497689888
team_policy eval average team episode rewards of agent4: 132.5
team_policy eval idv catch total num of agent4: 32
team_policy eval team catch total num: 53
idv_policy eval average step individual rewards of agent0: 0.6320865364789248
idv_policy eval average team episode rewards of agent0: 102.5
idv_policy eval idv catch total num of agent0: 27
idv_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent1: 0.36882145831148905
idv_policy eval average team episode rewards of agent1: 102.5
idv_policy eval idv catch total num of agent1: 17
idv_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent2: 0.6809478660982584
idv_policy eval average team episode rewards of agent2: 102.5
idv_policy eval idv catch total num of agent2: 29
idv_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent3: 0.8311847951894958
idv_policy eval average team episode rewards of agent3: 102.5
idv_policy eval idv catch total num of agent3: 35
idv_policy eval team catch total num: 41
idv_policy eval average step individual rewards of agent4: 0.7271543120007538
idv_policy eval average team episode rewards of agent4: 102.5
idv_policy eval idv catch total num of agent4: 31
idv_policy eval team catch total num: 41

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3051-3075/10000 episodes (one update per 200 timesteps), total num timesteps 610400-615200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 1.2725338453851347
team_policy eval average team episode rewards of agent0: 147.5
team_policy eval idv catch total num of agent0: 52
team_policy eval team catch total num: 59
team_policy eval average step individual rewards of agent1: 1.2474946925618156
team_policy eval average team episode rewards of agent1: 147.5
team_policy eval idv catch total num of agent1: 51
team_policy eval team catch total num: 59
team_policy eval average step individual rewards of agent2: 0.9436910549071552
team_policy eval average team episode rewards of agent2: 147.5
team_policy eval idv catch total num of agent2: 39
team_policy eval team catch total num: 59
team_policy eval average step individual rewards of agent3: 0.9408851984416018
team_policy eval average team episode rewards of agent3: 147.5
team_policy eval idv catch total num of agent3: 39
team_policy eval team catch total num: 59
team_policy eval average step individual rewards of agent4: 0.6085374332886826
team_policy eval average team episode rewards of agent4: 147.5
team_policy eval idv catch total num of agent4: 26
team_policy eval team catch total num: 59
idv_policy eval average step individual rewards of agent0: 1.095860743596143
idv_policy eval average team episode rewards of agent0: 117.5
idv_policy eval idv catch total num of agent0: 45
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent1: 0.5772510576636996
idv_policy eval average team episode rewards of agent1: 117.5
idv_policy eval idv catch total num of agent1: 25
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent2: 0.9695319449290443
idv_policy eval average team episode rewards of agent2: 117.5
idv_policy eval idv catch total num of agent2: 40
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent3: 0.7061999187300971
idv_policy eval average team episode rewards of agent3: 117.5
idv_policy eval idv catch total num of agent3: 30
idv_policy eval team catch total num: 47
idv_policy eval average step individual rewards of agent4: 0.7038967579003266
idv_policy eval average team episode rewards of agent4: 117.5
idv_policy eval idv catch total num of agent4: 30
idv_policy eval team catch total num: 47

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3076-3100/10000 episodes (one update per 200 timesteps), total num timesteps 615400-620200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.5837237153045632
team_policy eval average team episode rewards of agent0: 107.5
team_policy eval idv catch total num of agent0: 25
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent1: 1.5535426418184068
team_policy eval average team episode rewards of agent1: 107.5
team_policy eval idv catch total num of agent1: 63
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent2: 0.5668160061179343
team_policy eval average team episode rewards of agent2: 107.5
team_policy eval idv catch total num of agent2: 24
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent3: 0.667417769492025
team_policy eval average team episode rewards of agent3: 107.5
team_policy eval idv catch total num of agent3: 28
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent4: 0.7403540684486413
team_policy eval average team episode rewards of agent4: 107.5
team_policy eval idv catch total num of agent4: 31
team_policy eval team catch total num: 43
idv_policy eval average step individual rewards of agent0: 0.7420353869647035
idv_policy eval average team episode rewards of agent0: 75.0
idv_policy eval idv catch total num of agent0: 31
idv_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent1: 0.765827580186723
idv_policy eval average team episode rewards of agent1: 75.0
idv_policy eval idv catch total num of agent1: 32
idv_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent2: 0.516894296280381
idv_policy eval average team episode rewards of agent2: 75.0
idv_policy eval idv catch total num of agent2: 22
idv_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent3: 0.3592200019529951
idv_policy eval average team episode rewards of agent3: 75.0
idv_policy eval idv catch total num of agent3: 16
idv_policy eval team catch total num: 30
idv_policy eval average step individual rewards of agent4: 0.45920433095386853
idv_policy eval average team episode rewards of agent4: 75.0
idv_policy eval idv catch total num of agent4: 20
idv_policy eval team catch total num: 30

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3101-3125/10000 episodes (one update per 200 timesteps), total num timesteps 620400-625200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 0.8797849026334745
team_policy eval average team episode rewards of agent0: 87.5
team_policy eval idv catch total num of agent0: 37
team_policy eval team catch total num: 35
team_policy eval average step individual rewards of agent1: 0.6521681410591822
team_policy eval average team episode rewards of agent1: 87.5
team_policy eval idv catch total num of agent1: 28
team_policy eval team catch total num: 35
team_policy eval average step individual rewards of agent2: 0.5131231501469186
team_policy eval average team episode rewards of agent2: 87.5
team_policy eval idv catch total num of agent2: 22
team_policy eval team catch total num: 35
team_policy eval average step individual rewards of agent3: 0.9051213577893558
team_policy eval average team episode rewards of agent3: 87.5
team_policy eval idv catch total num of agent3: 38
team_policy eval team catch total num: 35
team_policy eval average step individual rewards of agent4: 0.7066351300397582
team_policy eval average team episode rewards of agent4: 87.5
team_policy eval idv catch total num of agent4: 30
team_policy eval team catch total num: 35
idv_policy eval average step individual rewards of agent0: 0.4762827397657821
idv_policy eval average team episode rewards of agent0: 95.0
idv_policy eval idv catch total num of agent0: 21
idv_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent1: 0.6286947496236338
idv_policy eval average team episode rewards of agent1: 95.0
idv_policy eval idv catch total num of agent1: 27
idv_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent2: 0.5129703576289095
idv_policy eval average team episode rewards of agent2: 95.0
idv_policy eval idv catch total num of agent2: 22
idv_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent3: 1.2467748589584375
idv_policy eval average team episode rewards of agent3: 95.0
idv_policy eval idv catch total num of agent3: 51
idv_policy eval team catch total num: 38
idv_policy eval average step individual rewards of agent4: 0.884175654056992
idv_policy eval average team episode rewards of agent4: 95.0
idv_policy eval idv catch total num of agent4: 37
idv_policy eval team catch total num: 38

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3126-3150/10000 episodes (one update per 200 timesteps), total num timesteps 625400-630200/2000000, FPS 324.

team_policy eval average step individual rewards of agent0: 1.0945190079124543
team_policy eval average team episode rewards of agent0: 120.0
team_policy eval idv catch total num of agent0: 45
team_policy eval team catch total num: 48
team_policy eval average step individual rewards of agent1: 1.2708874923706655
team_policy eval average team episode rewards of agent1: 120.0
team_policy eval idv catch total num of agent1: 52
team_policy eval team catch total num: 48
team_policy eval average step individual rewards of agent2: 0.6248533899966244
team_policy eval average team episode rewards of agent2: 120.0
team_policy eval idv catch total num of agent2: 27
team_policy eval team catch total num: 48
team_policy eval average step individual rewards of agent3: 1.1204288967656868
team_policy eval average team episode rewards of agent3: 120.0
team_policy eval idv catch total num of agent3: 46
team_policy eval team catch total num: 48
team_policy eval average step individual rewards of agent4: 0.6585806072795072
team_policy eval average team episode rewards of agent4: 120.0
team_policy eval idv catch total num of agent4: 28
team_policy eval team catch total num: 48
idv_policy eval average step individual rewards of agent0: 0.2813378698782207
idv_policy eval average team episode rewards of agent0: 72.5
idv_policy eval idv catch total num of agent0: 13
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent1: 0.842944349516729
idv_policy eval average team episode rewards of agent1: 72.5
idv_policy eval idv catch total num of agent1: 35
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent2: 0.4125803112713104
idv_policy eval average team episode rewards of agent2: 72.5
idv_policy eval idv catch total num of agent2: 18
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent3: 1.113601816669007
idv_policy eval average team episode rewards of agent3: 72.5
idv_policy eval idv catch total num of agent3: 46
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent4: 0.34403051923846306
idv_policy eval average team episode rewards of agent4: 72.5
idv_policy eval idv catch total num of agent4: 16
idv_policy eval team catch total num: 29

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3151-3175/10000 episodes, total num timesteps 630400-635200/2000000 (one update per 200 timesteps), FPS 324.

team_policy eval average step individual rewards of agent0: 0.3917764510758041
team_policy eval average team episode rewards of agent0: 77.5
team_policy eval idv catch total num of agent0: 17
team_policy eval team catch total num: 31
team_policy eval average step individual rewards of agent1: 0.606339719200385
team_policy eval average team episode rewards of agent1: 77.5
team_policy eval idv catch total num of agent1: 26
team_policy eval team catch total num: 31
team_policy eval average step individual rewards of agent2: 0.7138447294198285
team_policy eval average team episode rewards of agent2: 77.5
team_policy eval idv catch total num of agent2: 30
team_policy eval team catch total num: 31
team_policy eval average step individual rewards of agent3: 0.7628692458827192
team_policy eval average team episode rewards of agent3: 77.5
team_policy eval idv catch total num of agent3: 32
team_policy eval team catch total num: 31
team_policy eval average step individual rewards of agent4: 0.6040293480224058
team_policy eval average team episode rewards of agent4: 77.5
team_policy eval idv catch total num of agent4: 26
team_policy eval team catch total num: 31
idv_policy eval average step individual rewards of agent0: 0.8907034849056376
idv_policy eval average team episode rewards of agent0: 72.5
idv_policy eval idv catch total num of agent0: 37
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent1: 0.34961080511721454
idv_policy eval average team episode rewards of agent1: 72.5
idv_policy eval idv catch total num of agent1: 16
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent2: 0.5846829910596635
idv_policy eval average team episode rewards of agent2: 72.5
idv_policy eval idv catch total num of agent2: 25
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent3: 0.35847749034110604
idv_policy eval average team episode rewards of agent3: 72.5
idv_policy eval idv catch total num of agent3: 16
idv_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent4: 0.6572382884728555
idv_policy eval average team episode rewards of agent4: 72.5
idv_policy eval idv catch total num of agent4: 28
idv_policy eval team catch total num: 29

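The eval blocks above are plain text with a fixed shape: `<policy> eval <metric> of <agentN>: <value>`, with the `of agentN` part absent for team-wide counters. A minimal sketch of parsing such lines into nested dicts (helper names and the regex are illustrative assumptions, not part of the repo):

```python
import re

# Matches lines like:
#   "idv_policy eval average step individual rewards of agent0: 0.28"
#   "team_policy eval team catch total num: 31"
# The "of agentN" suffix is optional; team-wide metrics are filed under "all".
LINE_RE = re.compile(
    r"(?P<policy>team_policy|idv_policy) eval "
    r"(?P<metric>.+?)(?: of (?P<agent>agent\d+))?: (?P<value>-?[\d.]+)$"
)

def parse_eval_lines(lines):
    """Return {policy: {agent: {metric: float}}} for recognized eval lines."""
    out = {}
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue  # skip progress lines, blanks, wandb output, etc.
        agent = m.group("agent") or "all"
        out.setdefault(m.group("policy"), {}) \
           .setdefault(agent, {})[m.group("metric")] = float(m.group("value"))
    return out

sample = [
    "idv_policy eval average step individual rewards of agent0: 0.2813378698782207",
    "idv_policy eval team catch total num: 29",
]
parsed = parse_eval_lines(sample)
```

This treats the log itself as the data source, which can be handy for re-plotting old runs whose wandb history is unavailable.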
 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3176-3200/10000 episodes, total num timesteps 635400-640200/2000000 (one update per 200 timesteps), FPS 324.

team_policy eval average step individual rewards of agent0: 0.680496579014769
team_policy eval average team episode rewards of agent0: 107.5
team_policy eval idv catch total num of agent0: 29
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent1: 0.5203353636611002
team_policy eval average team episode rewards of agent1: 107.5
team_policy eval idv catch total num of agent1: 23
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent2: 0.324462615426172
team_policy eval average team episode rewards of agent2: 107.5
team_policy eval idv catch total num of agent2: 15
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent3: 1.118647359782117
team_policy eval average team episode rewards of agent3: 107.5
team_policy eval idv catch total num of agent3: 46
team_policy eval team catch total num: 43
team_policy eval average step individual rewards of agent4: 1.0393969828749776
team_policy eval average team episode rewards of agent4: 107.5
team_policy eval idv catch total num of agent4: 43
team_policy eval team catch total num: 43
idv_policy eval average step individual rewards of agent0: 0.9674313644688559
idv_policy eval average team episode rewards of agent0: 145.0
idv_policy eval idv catch total num of agent0: 40
idv_policy eval team catch total num: 58
idv_policy eval average step individual rewards of agent1: 1.3428234426229975
idv_policy eval average team episode rewards of agent1: 145.0
idv_policy eval idv catch total num of agent1: 55
idv_policy eval team catch total num: 58
idv_policy eval average step individual rewards of agent2: 1.0634873827316635
idv_policy eval average team episode rewards of agent2: 145.0
idv_policy eval idv catch total num of agent2: 44
idv_policy eval team catch total num: 58
idv_policy eval average step individual rewards of agent3: 1.035113189168991
idv_policy eval average team episode rewards of agent3: 145.0
idv_policy eval idv catch total num of agent3: 43
idv_policy eval team catch total num: 58
idv_policy eval average step individual rewards of agent4: 0.7369972641687417
idv_policy eval average team episode rewards of agent4: 145.0
idv_policy eval idv catch total num of agent4: 31
idv_policy eval team catch total num: 58
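Each progress line in this log advances the update counter by one and the timestep counter by 200 (presumably episode length times the number of rollout threads), and the FPS field appears to be cumulative timesteps divided by elapsed wall-clock time. A minimal sketch of producing such a line (function name, signature, and the 200-step batch size are assumptions, not taken from the actual trainer):

```python
def progress_line(update, num_updates, total_steps, steps_limit, elapsed_sec):
    """Format a trainer progress line like the ones in this log.

    FPS is assumed to be cumulative timesteps over elapsed wall-clock
    seconds, truncated to an integer.
    """
    fps = int(total_steps / max(elapsed_sec, 1e-9))
    return (
        " Scenario simple_tag_tr Algo rmappotrsyn "
        "Exp exp_train_continue_tag_base_CMT_s2r2_v1 "
        f"updates {update}/{num_updates} episodes, "
        f"total num timesteps {total_steps}/{steps_limit}, FPS {fps}."
    )

# 630400 steps in ~1945 s reproduces the FPS 324 seen above.
line = progress_line(3151, 10000, 630400, 2000000, 1945.0)
```

Note that because FPS is cumulative, it changes only slowly late in training, which is why it sits at 324 across hundreds of consecutive updates here.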

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3201-3225/10000 episodes, total num timesteps 640400-645200/2000000 (one update per 200 timesteps), FPS 324.

team_policy eval average step individual rewards of agent0: 0.6076333631043724
team_policy eval average team episode rewards of agent0: 72.5
team_policy eval idv catch total num of agent0: 26
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent1: 0.4876528098680153
team_policy eval average team episode rewards of agent1: 72.5
team_policy eval idv catch total num of agent1: 21
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent2: 0.577386871248369
team_policy eval average team episode rewards of agent2: 72.5
team_policy eval idv catch total num of agent2: 25
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent3: 0.345430263966007
team_policy eval average team episode rewards of agent3: 72.5
team_policy eval idv catch total num of agent3: 16
team_policy eval team catch total num: 29
team_policy eval average step individual rewards of agent4: 0.8368498852733509
team_policy eval average team episode rewards of agent4: 72.5
team_policy eval idv catch total num of agent4: 35
team_policy eval team catch total num: 29
idv_policy eval average step individual rewards of agent0: 0.6962737095848497
idv_policy eval average team episode rewards of agent0: 85.0
idv_policy eval idv catch total num of agent0: 29
idv_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent1: 0.5325412913573563
idv_policy eval average team episode rewards of agent1: 85.0
idv_policy eval idv catch total num of agent1: 23
idv_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent2: 0.6482257402976375
idv_policy eval average team episode rewards of agent2: 85.0
idv_policy eval idv catch total num of agent2: 28
idv_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent3: 0.7659208550497394
idv_policy eval average team episode rewards of agent3: 85.0
idv_policy eval idv catch total num of agent3: 32
idv_policy eval team catch total num: 34
idv_policy eval average step individual rewards of agent4: 0.6554658929735573
idv_policy eval average team episode rewards of agent4: 85.0
idv_policy eval idv catch total num of agent4: 28
idv_policy eval team catch total num: 34

 Scenario simple_tag_tr Algo rmappotrsyn Exp exp_train_continue_tag_base_CMT_s2r2_v1 updates 3226-3230/10000 episodes, total num timesteps 645400-646200/2000000 (one update per 200 timesteps), FPS 324.

wandb: 1.424 MB of 1.424 MB uploaded
wandb: 
wandb: Run history:
wandb:                                       Aa_idv_actor_loss ▇▁▁▁▂▁▂▂▂▃▃▃▃▄▃▄▄▄▄▄▄▄▅▅▅▆▆▆▆▆▆▆▇▆▇▇▇▇██
wandb:                                          Ab_policy_loss █▃▃▂▄▂▂▂▂▄▅▃▅▇▃▄▂▂▄▄▃▂▄▃▁▅▃▃▃▃▂▂▂▁▂▁▂▁▄▁
wandb:                                     Ac_idv_ppo_loss_abs ▇▆▃▁▃▂▄▄▄▅▇▅▆▇▆▇▇▇▇▇▇▇▇▇▆▇▇▇▇██▇▇██▇▆█▇▇
wandb:                                         Ad_idv_ppo_prop ▇▅▃▁▃▂▄▄▄▅▆▅▅▆▆▆▆▆▆▆▆▆▇▇▆▇▇▇▇▇█▇▇██▇▇███
wandb:                                                  Ae_eta ▃▆▅▃▄▃▄▃▄▇▂█▂▂▄▁▂▄▅▄▆▃▂▅▄▂▃▆▅▅▃▃▄▃▂▃▅▄▁▂
wandb:                                    Af_noclip_proportion ▂████████▇▆▆▇▅▇▆▇▆▆▇▁▇▆█▇▇▇▇▆▇██▇▇▇▇█▇▂▇
wandb:                                    Ag_update_proportion ▆▆▆▅▃▁▂▁▁▅▆▄▃▃▅▅▆▆▅▇▆▆█▆▇▄▆▅▆▇▇▇█▆▆▇█▇▄█
wandb:                                          Ah_update_loss ▂▁▃▂▅▄▆▇▇▇▆▆▇▇▄▅▂▄█▅▂▅▄▄▁▇▃▃▄▂▄▂▁▃▄▂▃▂▃▂
wandb:                                         Ai_idv_epsilon' ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb:                                            Aj_idv_sigma ▅▁▁▁▄▂▃▁▁▃▄▄▅▆▅█▂▅▄▄▃▄▆▆▃█▆▃▅▃▄▃▅▅█▃▃▃▅▃
wandb:              Ak_idv_clip(sigma, 1-epislon', 1+epislon') ▁▁▁▁▂▁▂▁▁▂▃▃▂▂▂▄▂▃▃▁▁▃▅▂▃▄▄▂▂▃▃▃▄▄█▃▂▂▃▄
wandb:                                Al_idv_noclip_proportion ▁▂▃▃▂▄▄▆▅▅▅▆▅▅▆▆▆▆▇▇▇▇▆▇▇▇▇▇▇▇█▇▇█▇███▇█
wandb:                       Am_idv_(sigma*A)update_proportion ▁▂▃▄▂▅▄▇▆▅▅▆▅▅▆▆▆▇▇▆▇▆▆▇▇▇▇▇▇▇█▇▇█▇█▇███
wandb:                             An_idv_(sigma*A)update_loss ▅█▄▄▂▇█▃▄▆▃▂▂▆▅▁▇▂▁▄▅▄▅▄▅▁▄▅▃▅▂▄▅▃▃▄▂▅▅▄
wandb:                                     Ao_idv_entropy_prop ▂▄▆█▆▇▅▅▅▄▃▄▄▃▄▃▃▃▃▃▃▃▂▂▃▂▂▂▂▂▁▂▂▁▁▂▂▁▁▁
wandb:                                         Ap_dist_entropy ▁███████████████████████████████████████
wandb:                                          Aq_idv_kl_prop ▁▁▁▁▁▁▁▁▁▁▂▂▂▄▂▃▂▃▃▃▃▃▄▄▃▅▅▄▄▄▃▅▅▄▅▄▅▅█▄
wandb:                                          Ar_idv_kl_coef ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb:                                          As_idv_kl_loss ▅▁▁▁▃▁▂▁▂▂▄▃▄█▄▅▃▄▄▅▅▄▆▅▃▆▅▄▄▄▄▅▅▃▄▃▄▄▇▃
wandb:                                    At_idv_cross_entropy ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:                                           Au_value_loss ▁▁▁▂▁▃▃▃▃▁▃▂█▇▃▄▂▃▂▃▂▄▃▃▄▃▃▃▃▂▂▂▂▂▄▁▂▂▃▂
wandb:                                           Av_advantages ▄▅▆▃▃▄█▃▆▅▃█▆▆▃▆▂▅▃▄▃▄▆▆▄▃▃▄▁▂▄▃▂▄▃▇▃▆▃▅
wandb:                                       Aw_idv_actor_norm ██▅▄▅▂▁▁▂▄▃▂▃▅▂▄▂▃▃▄▃▃▄▃▂▅▄▃▄▃▂▄▃▄▃▃▂▃▅▃
wandb:                                      Ax_idv_critic_norm ▅▅▃▆▂▇▆█▇▃▅▄█▃▃▄▂▂▃▃▂▃▃▂▂▂▃▃▃▁▂▁▁▂▂▁▂▂▃▂
wandb:                                     Ba_idv_org_min_prop ▂▂▃▅▁▃▄▃▃▃▆▅▄▄▇▆▇▇▆▇▆▆▇▆▇▄▆▅▅▇▇▇█▆▅██▆▄█
wandb:                                     Bb_idv_org_max_prop ██▆▄▆▂▁▂▂▅▃▂▃▃▂▃▂▂▂▄▃▄▄▄▄▄▃▃▄▃▃▄▃▃▄▃▃▄▄▃
wandb:                                     Bc_idv_org_org_prop ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:                                     Bd_idv_new_min_prop ▁▃▃▃▄▄▄█▇▅▃▅▄▃▄▅▃▄▆▃▄▅▄▅▆█▆▆▆▅▅▄▅▆█▅▄▆█▅
wandb:                                     Be_idv_new_max_prop ▁▂▂▃▂▅▄▆▅▅▅▆▅▅▆▆▇▇▇▇▇▆▆▇▇▇▇▇▇▇██▇█▇█████
wandb:                                      Ta_team_actor_loss ▆▁▁▁▂▂▂▂▂▂▃▃▃▄▃▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▆▆▇▆▇▇▇▇██
wandb:                                     Tb_team_policy_loss ▂▄▃▄▃▃▄▄▁▂▁▅▄▄▄▆▅▅▄▆▇▂▄▅▆▇▆█▆█▆▅▅▂▅▃▅▃█▄
wandb:                                    Tc_team_ppo_loss_abs ▂▁▁▁▁▁▂▁▂▂▃▂▄▅▄▅▅▆▆▇▆▇▇▇▇▇▇▇▇▇█▇▇▇█▇▇█▇█
wandb:                                        Td_team_ppo_prop ▃▁▁▁▂▁▂▁▂▂▃▃▅▅▅▆▆▆▆▇▆▇▇▇▇▇▇▇▇▇█▇▇███████
wandb:                                        Te_team_epsilon^ ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:                                          Tf_team_sigma^ ▄▆▆▆▄▆▄▆▇▅▄▄▄█▅▁▆▅█▇▇▅▄▅▅▄▃▇▆▇▄▆▆▄▁▆█▇▆▆
wandb:          Tg_team_clip(sigma^, 1-epislon^', 1+epislon^') ▂▇▇▇▄▇▅▆█▆▄▄▄▆▅▁▆▆▆▇▆▅▂▅▅▄▃▇▆▇▅▇▅▅▁▆██▆▅
wandb:                               Th_team_noclip_proportion ▄███▆█▇▇▇▆▄▆▃▁▅▃▅▅▄▄▄▄▂▃▆▃▃▄▄▄▅▄▃▅▄▅▄▅▂▅
wandb:                     Ti_team_(sigma^*A)update_proportion ▅███▇█▇█▇▆▅▆▃▁▅▄▆▅▄▅▅▄▃▃▇▄▄▅▄▄▅▅▂▅▄▅▄▅▂▅
wandb:                           Tj_team_(sigma^*A)update_loss ▆▇█▇█▇▇▇▄█▅▇▆▁▆▆▅▅▂▄▃▆▄▄▇▄▆▄▆▃▆▅▂▆█▆▆▅▃▆
wandb:                                    Tk_team_entropy_prop ▅███▇█▇█▇▇▅▆▄▃▄▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▁▂▁▁▁▁▁▁▁▁
wandb:                                    Tl_team_dist_entropy ▁███████████████████████████████████████
wandb:                                         Tm_team_kl_prop █▁▁▁▄▂▃▂▂▄▅▃▄▇▄▅▃▃▄▄▄▃▅▄▂▅▄▃▃▃▃▄▄▃▃▃▃▃▅▂
wandb:                                         Tn_team_kl_coef ████▇▇▇▇▇▆▆▆▆▆▆▅▅▅▅▅▄▄▄▄▄▄▃▃▃▃▃▂▂▂▂▂▂▁▁▁
wandb:                                         To_team_kl_loss ▅▁▁▁▃▂▂▁▂▃▄▃▄█▄▅▃▄▅▅▄▄▆▅▃▆▅▄▄▄▄▅▅▄▅▃▄▄▇▃
wandb:                                   Tp_team_cross_entropy ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:                                      Tq_team_value_loss ▁▁▁▁▁▁▁▂▂▁▂▂██▃▄▃▃▃▃▃▅▃▄▅▄▃▃▃▂▂▂▂▂▄▂▂▃▄▃
wandb:                                      Tr_team_advantages ▅▅▄▃▅▅▅▄▄▅▅▅▄▅▆▆▃▃▆▅▆▅▃▃▅▇▅█▇▄▅▃▁▄▃██▆▃▅
wandb:                                      Ts_team_actor_norm █▃▁▂▆▂▁▂▂▂▂▂▃█▃▅▃▅▄▅▆▄▇▅▄▆▆▅▄▅▃▅▅▄▅▄▃▄▅▄
wandb:                                     Tt_team_critic_norm ▃▂▁▃▁▃▄▄▄▂▄▄█▇▄▅▄▄▄▃▄▅▄▃▃▃▄▄▃▂▃▂▂▃▃▂▃▃▄▃
wandb:                     agent0/average_episode_team_rewards ▁▁▁▁▁▁▁▁▁▁▂▁▃▄▂▃▃▃▃▅▅▆▅▆█▅▆▅▆▅▅▆▅▆█▄▃▆▆▅
wandb:                  agent0/average_step_individual_rewards ▂▁▁▂▁▂▂▂▂▁▂▂▃▄▃▃▄▄▃▄▄█▄▄▅▅▅▄▅▅▅▄▄▆▇▄▄▅▄▄
wandb:     agent0/idv_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:  agent0/idv_policy_eval_average_step_individual_rewards ▁▂▁▂▂▁▂▂▂▂▄▄▄▆▅▄▅▆▆▅▆▅█▇▂▇▆▇▅▅▅█▅▄▅▇▄▄▄▅
wandb:              agent0/idv_policy_eval_idv_catch_total_num ▁▁▁▂▂▁▂▂▂▂▃▄▄▆▅▄▅▆▆▅▆▅█▇▂▇▆▇▅▅▅█▅▄▅▇▄▄▄▅
wandb:             agent0/idv_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:    agent0/team_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb: agent0/team_policy_eval_average_step_individual_rewards ▁▂▁▁▁▂▃▂▂▂▃▃▄▅▄▄▅▄▄▅▆▆▃▆▃▅▆▅▄▆▅█▄▅▄▄▆▄▆▄
wandb:             agent0/team_policy_eval_idv_catch_total_num ▁▁▁▁▁▂▂▂▂▂▃▃▄▅▄▄▅▄▄▅▆▆▃▆▃▅▆▅▄▆▅█▄▅▄▄▆▃▅▄
wandb:            agent0/team_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb:                     agent1/average_episode_team_rewards ▁▁▁▁▁▁▁▁▁▁▂▁▃▄▂▃▃▃▃▅▅▆▅▆█▅▆▅▆▅▅▆▅▆█▄▃▆▆▅
wandb:                  agent1/average_step_individual_rewards ▂▁▂▁▂▁▂▁▁▂▃▂▄▄▃▃▄▂▃▃▄▅▅▇▇▅▅▄▆▅▆▅▆▅█▃▅▅▅▄
wandb:     agent1/idv_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:  agent1/idv_policy_eval_average_step_individual_rewards ▁▁▁▁▁▂▂▂▂▂▄▄▃▇▄▄▄▄▅▄▄▅▆▄▄▆▆▆▆▃▅▇█▃▇▅▅▅▄▄
wandb:              agent1/idv_policy_eval_idv_catch_total_num ▁▁▁▁▁▂▂▂▂▂▃▄▂▇▄▄▄▄▅▄▄▅▆▄▄▅▆▆▆▃▅▇█▃▇▅▅▅▄▄
wandb:             agent1/idv_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:    agent1/team_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb: agent1/team_policy_eval_average_step_individual_rewards ▁▁▁▁▂▁▁▂▂▁▄▄▅▄▄▅▅▄▅▆▅▄▄▆▅▂█▆▅▄▆▅█▆▄▆▅▇▅▄
wandb:             agent1/team_policy_eval_idv_catch_total_num ▁▁▁▁▂▁▁▂▂▁▄▄▅▃▄▅▅▄▅▆▅▄▄▆▅▂█▆▅▄▆▅█▆▄▆▅▇▅▄
wandb:            agent1/team_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb:                     agent2/average_episode_team_rewards ▁▁▁▁▁▁▁▁▁▁▂▁▃▄▂▃▃▃▃▅▅▆▅▆█▅▆▅▆▅▅▆▅▆█▄▃▆▆▅
wandb:                  agent2/average_step_individual_rewards ▂▁▁▂▁▂▂▂▂▁▂▁▄▅▃▅▆▅▄█▆▅▃▆▄▄▇▆▅▅▄▅█▇▆▇▃▆▆▄
wandb:     agent2/idv_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:  agent2/idv_policy_eval_average_step_individual_rewards ▁▁▁▂▁▂▂▄▂▂▄▅▄▆▅▅▆▆▄▂▄▇▆▃▃▆▃▄▇█▆▅▇▄█▄▇▄▄▅
wandb:              agent2/idv_policy_eval_idv_catch_total_num ▁▁▁▂▁▂▂▄▂▂▃▄▄▆▅▅▆▅▄▂▄▇▆▃▃▆▃▄▇█▆▅▇▄█▄▇▄▄▅
wandb:             agent2/idv_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:    agent2/team_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb: agent2/team_policy_eval_average_step_individual_rewards ▁▂▂▁▂▂▂▂▃▃▄▂▄▄▅▅▆▅▆▇▆▆▅█▅▆▅█▅▅▅▅▆▇▃▆▅▅▄▅
wandb:             agent2/team_policy_eval_idv_catch_total_num ▁▂▁▁▂▂▂▂▃▃▄▂▄▃▅▅▆▅▆▇▆▆▄█▅▆▅█▅▅▅▅▆▇▃▆▅▅▄▄
wandb:            agent2/team_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb:                     agent3/average_episode_team_rewards ▁▁▁▁▁▁▁▁▁▁▂▁▃▄▂▃▃▃▃▅▅▆▅▆█▅▆▅▆▅▅▆▅▆█▄▃▆▆▅
wandb:                  agent3/average_step_individual_rewards ▁▂▂▁▂▁▂▂▂▂▃▃▃▄▃▃▃▄▄▅▆▅▆▃█▃▅▃▆▄▅▅▃▅▆▃▄▆▄▇
wandb:     agent3/idv_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:  agent3/idv_policy_eval_average_step_individual_rewards ▁▁▁▁▂▁▂▃▂▃▄▆▅▅▃▅▄▇▆▅▅▆▆▆▃▆▄▆▄▅▄▆▅▄▇▄█▆█▅
wandb:              agent3/idv_policy_eval_idv_catch_total_num ▁▁▁▁▂▁▂▃▂▃▄▅▅▅▃▅▄▇▆▅▅▆▆▆▃▆▄▆▄▅▄▆▅▄▇▄█▆█▅
wandb:             agent3/idv_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:    agent3/team_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb: agent3/team_policy_eval_average_step_individual_rewards ▂▁▁▁▂▁▁▂▂▂▄▂▃▄▄▃▆▇▃▆▇▅▇▅▄▅█▇▅▆▄▆▅▃▄▆▅▅▆▃
wandb:             agent3/team_policy_eval_idv_catch_total_num ▁▁▁▁▂▁▁▂▂▂▃▂▂▄▄▃▆▇▃▅▇▅▇▅▄▅█▇▅▆▄▆▅▃▄▆▅▅▆▃
wandb:            agent3/team_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb:                     agent4/average_episode_team_rewards ▁▁▁▁▁▁▁▁▁▁▂▁▃▄▂▃▃▃▃▅▅▆▅▆█▅▆▅▆▅▅▆▅▆█▄▃▆▆▅
wandb:                  agent4/average_step_individual_rewards ▂▂▂▁▂▁▂▂▂▂▂▄▃▂▅▅▃▄▃▄▄▅▆▇▅▇▅▆█▇▃▅▇▇▅▅▃▅▇▆
wandb:     agent4/idv_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:  agent4/idv_policy_eval_average_step_individual_rewards ▁▂▂▂▂▃▂▃▃▃▄▅▅▇▄▇▆▆▅▄▄▇▇█▅▆▃▃▆▅▇▇▆▄▇▅▇▅▆▅
wandb:              agent4/idv_policy_eval_idv_catch_total_num ▁▁▂▂▂▂▂▃▂▃▄▄▄▇▄▇▆▆▅▄▄▆▇█▅▆▂▃▆▄▇▆▆▄▇▅▇▅▆▅
wandb:             agent4/idv_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▃▄▅▄▆▄▅▅▆▅▄▄▇▇▆▃▆▄▅▆▅▆▇▇▄█▅▆▅▅▅
wandb:    agent4/team_policy_eval_average_episode_team_rewards ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb: agent4/team_policy_eval_average_step_individual_rewards ▁▂▁▁▁▂▂▂▂▂▄▂▄▆▆▆█▇▄▅▆▅▇█▃▆█▅▆▆▄▇▆▆▅▅▆▅▅▆
wandb:             agent4/team_policy_eval_idv_catch_total_num ▁▂▁▁▁▂▂▂▂▂▄▂▄▆▆▆█▇▄▅▆▄▇█▃▆█▅▆▆▄▇▆▆▅▅▆▅▅▆
wandb:            agent4/team_policy_eval_team_catch_total_num ▁▁▁▁▁▁▂▂▂▂▄▂▃▅▄▅▆▆▄▆▇▅▆▇▄▅█▇▅▆▅▇▆▅▄▅▅▅▅▄
wandb: 
wandb: Run summary:
wandb:                                       Aa_idv_actor_loss -0.30154
wandb:                                          Ab_policy_loss 0.00223
wandb:                                     Ac_idv_ppo_loss_abs 0.77516
wandb:                                         Ad_idv_ppo_prop 0.71288
wandb:                                                  Ae_eta 0.99998
wandb:                                    Af_noclip_proportion 0.9978
wandb:                                    Ag_update_proportion 0.4942
wandb:                                          Ah_update_loss 0.06461
wandb:                                         Ai_idv_epsilon' 0.1615
wandb:                                            Aj_idv_sigma 1.00119
wandb:              Ak_idv_clip(sigma, 1-epislon', 1+epislon') 1.00067
wandb:                                Al_idv_noclip_proportion 0.946
wandb:                       Am_idv_(sigma*A)update_proportion 0.4776
wandb:                             An_idv_(sigma*A)update_loss -0.01658
wandb:                                     Ao_idv_entropy_prop 0.28324
wandb:                                         Ap_dist_entropy 4.83778
wandb:                                          Aq_idv_kl_prop 0.00388
wandb:                                          Ar_idv_kl_coef 2.261
wandb:                                          As_idv_kl_loss 0.00186
wandb:                                    At_idv_cross_entropy 0.0
wandb:                                           Au_value_loss 0.39212
wandb:                                           Av_advantages -0.0
wandb:                                       Aw_idv_actor_norm 0.28861
wandb:                                      Ax_idv_critic_norm 0.2866
wandb:                                     Ba_idv_org_min_prop 0.3876
wandb:                                     Bb_idv_org_max_prop 0.1066
wandb:                                     Bc_idv_org_org_prop 0.0
wandb:                                     Bd_idv_new_min_prop 0.083
wandb:                                     Be_idv_new_max_prop 0.3946
wandb:                                      Ta_team_actor_loss -0.30196
wandb:                                     Tb_team_policy_loss -0.00153
wandb:                                    Tc_team_ppo_loss_abs 0.78345
wandb:                                        Td_team_ppo_prop 0.71288
wandb:                                        Te_team_epsilon^ 0.2
wandb:                                          Tf_team_sigma^ 1.00594
wandb:          Tg_team_clip(sigma^, 1-epislon^', 1+epislon^') 1.00262
wandb:                               Th_team_noclip_proportion 0.9683
wandb:                     Ti_team_(sigma^*A)update_proportion 0.9825
wandb:                           Tj_team_(sigma^*A)update_loss -0.01229
wandb:                                    Tk_team_entropy_prop 0.28025
wandb:                                    Tl_team_dist_entropy 4.83785
wandb:                                         Tm_team_kl_prop 0.00687
wandb:                                         Tn_team_kl_coef 4.062
wandb:                                         To_team_kl_loss 0.00186
wandb:                                   Tp_team_cross_entropy 0.0
wandb:                                      Tq_team_value_loss 0.39113
wandb:                                      Tr_team_advantages 0.0
wandb:                                      Ts_team_actor_norm 0.32107
wandb:                                     Tt_team_critic_norm 0.20677
wandb:                     agent0/average_episode_team_rewards 90.0
wandb:                  agent0/average_step_individual_rewards 0.45753
wandb:     agent0/idv_policy_eval_average_episode_team_rewards 85.0
wandb:  agent0/idv_policy_eval_average_step_individual_rewards 0.69627
wandb:              agent0/idv_policy_eval_idv_catch_total_num 29
wandb:             agent0/idv_policy_eval_team_catch_total_num 34
wandb:    agent0/team_policy_eval_average_episode_team_rewards 72.5
wandb: agent0/team_policy_eval_average_step_individual_rewards 0.60763
wandb:             agent0/team_policy_eval_idv_catch_total_num 26
wandb:            agent0/team_policy_eval_team_catch_total_num 29
wandb:                     agent1/average_episode_team_rewards 90.0
wandb:                  agent1/average_step_individual_rewards 0.7052
wandb:     agent1/idv_policy_eval_average_episode_team_rewards 85.0
wandb:  agent1/idv_policy_eval_average_step_individual_rewards 0.53254
wandb:              agent1/idv_policy_eval_idv_catch_total_num 23
wandb:             agent1/idv_policy_eval_team_catch_total_num 34
wandb:    agent1/team_policy_eval_average_episode_team_rewards 72.5
wandb: agent1/team_policy_eval_average_step_individual_rewards 0.48765
wandb:             agent1/team_policy_eval_idv_catch_total_num 21
wandb:            agent1/team_policy_eval_team_catch_total_num 29
wandb:                     agent2/average_episode_team_rewards 90.0
wandb:                  agent2/average_step_individual_rewards 0.52258
wandb:     agent2/idv_policy_eval_average_episode_team_rewards 85.0
wandb:  agent2/idv_policy_eval_average_step_individual_rewards 0.64823
wandb:              agent2/idv_policy_eval_idv_catch_total_num 28
wandb:             agent2/idv_policy_eval_team_catch_total_num 34
wandb:    agent2/team_policy_eval_average_episode_team_rewards 72.5
wandb: agent2/team_policy_eval_average_step_individual_rewards 0.57739
wandb:             agent2/team_policy_eval_idv_catch_total_num 25
wandb:            agent2/team_policy_eval_team_catch_total_num 29
wandb:                     agent3/average_episode_team_rewards 90.0
wandb:                  agent3/average_step_individual_rewards 0.50583
wandb:     agent3/idv_policy_eval_average_episode_team_rewards 85.0
wandb:  agent3/idv_policy_eval_average_step_individual_rewards 0.76592
wandb:              agent3/idv_policy_eval_idv_catch_total_num 32
wandb:             agent3/idv_policy_eval_team_catch_total_num 34
wandb:    agent3/team_policy_eval_average_episode_team_rewards 72.5
wandb: agent3/team_policy_eval_average_step_individual_rewards 0.34543
wandb:             agent3/team_policy_eval_idv_catch_total_num 16
wandb:            agent3/team_policy_eval_team_catch_total_num 29
wandb:                     agent4/average_episode_team_rewards 90.0
wandb:                  agent4/average_step_individual_rewards 0.68714
wandb:     agent4/idv_policy_eval_average_episode_team_rewards 85.0
wandb:  agent4/idv_policy_eval_average_step_individual_rewards 0.65547
wandb:              agent4/idv_policy_eval_idv_catch_total_num 28
wandb:             agent4/idv_policy_eval_team_catch_total_num 34
wandb:    agent4/team_policy_eval_average_episode_team_rewards 72.5
wandb: agent4/team_policy_eval_average_step_individual_rewards 0.83685
wandb:             agent4/team_policy_eval_idv_catch_total_num 35
wandb:            agent4/team_policy_eval_team_catch_total_num 29
wandb: 
wandb: 🚀 View run MPE_6 at: https://wandb.ai/804703098/Continue_Tag_Base_v1/runs/f0rehbk5
wandb: ⭐️ View project at: https://wandb.ai/804703098/Continue_Tag_Base_v1
wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 4 other file(s)
wandb: Find logs at: ./results/MPE/simple_tag_tr/rmappotrsyn/exp_train_continue_tag_base_CMT_s2r2_v1/wandb/run-20240508_193302-f0rehbk5/logs
Traceback (most recent call last):
  File "train/train_mpe_trsyn.py", line 244, in <module>
    main(sys.argv[1:])
  File "train/train_mpe_trsyn.py", line 229, in main
    runner.run()
  File "/home/user/zhangyang/PycharmProjects/Nips2024-ITPC-v2/Nips2024-ITPC-v2/onpolicy/runner/shared/mpe_runner_trsyn.py", line 64, in run
    obs, rewards, dones, infos = self.envs.step(actions_env)
  File "/home/user/zhangyang/PycharmProjects/Nips2024-ITPC-v2/Nips2024-ITPC-v2/onpolicy/envs/env_wrappers.py", line 106, in step
    self.step_async(actions)
  File "/home/user/zhangyang/PycharmProjects/Nips2024-ITPC-v2/Nips2024-ITPC-v2/onpolicy/envs/env_wrappers.py", line 261, in step_async
    remote.send(('step', action))
  File "/home/user/anaconda3/envs/zypy38/lib/python3.8/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/user/anaconda3/envs/zypy38/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/home/user/anaconda3/envs/zypy38/lib/python3.8/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
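
The traceback above ends in the parent process of a subprocess-based vectorized env wrapper: `remote.send(('step', action))` raises `BrokenPipeError` when the env worker on the other end of the pipe has already exited (typically because the worker itself crashed or was killed), so the root cause usually lies in the worker's own log, not at this `send`. A minimal sketch of that failure mode with a guarded send — the names `_worker`, `safe_send`, and `demo` are hypothetical illustrations, not functions from this repository:

```python
import multiprocessing as mp

def _worker(child_remote):
    # Simulate an env worker that dies without answering: close and exit.
    child_remote.close()

def safe_send(remote, msg):
    """Send a command to a worker; report a dead worker instead of crashing."""
    try:
        remote.send(msg)
        return True
    except BrokenPipeError:
        # The worker's end of the pipe is gone -- the subprocess has exited.
        return False

def demo():
    parent_remote, child_remote = mp.Pipe()
    p = mp.Process(target=_worker, args=(child_remote,))
    p.start()
    p.join()              # worker has exited; its pipe end is closed
    child_remote.close()  # drop the parent's duplicate handle to that end
    return safe_send(parent_remote, ("step", [0, 1]))

if __name__ == "__main__":
    print(demo())
```

In a real training loop such a guard would let the runner log which worker died and shut down cleanly instead of dying on the pipe write; the underlying worker crash still has to be diagnosed separately.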
