
This is the implementation of "Collecting The Puzzle Pieces: Disentangled Self-Driven Human Pose Transfer by Permuting Textures".

============ Dependencies ============

Python 3.8
PyTorch 1.11
CUDA 11.3
GCC 8.3

============ Prepare Dataset ============


a) Download the DeepFashion dataset and unzip the img_highres/ folder. Set the "root" variable in [create_deepfashion.py] to your path to the img_highres/ folder, then run [python create_deepfashion.py] to resize the images.
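For reference, the resizing step amounts to something like the sketch below. The output directory, the 176x256 target resolution, and the function name are illustrative assumptions, not taken from the actual [create_deepfashion.py]:

```python
# Hypothetical sketch of the resizing pass: walk img_highres/ and save
# resized copies. The 176x256 (width x height) target size is an assumption.
import os
from PIL import Image

def resize_images(root, out_dir, size=(176, 256)):  # (width, height), assumed
    os.makedirs(out_dir, exist_ok=True)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith((".jpg", ".png")):
                continue
            img = Image.open(os.path.join(dirpath, name)).convert("RGB")
            img = img.resize(size, Image.BICUBIC)
            img.save(os.path.join(out_dir, name))
```

The actual script may differ in naming scheme and interpolation filter; consult [create_deepfashion.py] for the authoritative behavior.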

b) Download the Market-1501 dataset. Put the unzipped bounding_box_train/ and bounding_box_test/ folders under data/market/.
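In other words, the expected layout can be produced roughly as follows (the mkdir lines only simulate an unzipped archive; replace them with your real download, whose top-level folder name may differ):

```shell
# Stand-in for the unzipped Market-1501 archive (illustrative only).
mkdir -p Market-1501-v15.09.15/bounding_box_train Market-1501-v15.09.15/bounding_box_test

# Move the two folders to where the training/test scripts expect them.
mkdir -p data/market
mv Market-1501-v15.09.15/bounding_box_train Market-1501-v15.09.15/bounding_box_test data/market/
```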

c) Use the DensePose model to generate the pose representations. The parsing maps can be generated with the offline CorrPM model from the paper "Correlating Edge, Pose with Parsing".
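Both inputs come from external models, so the exact file formats are not fixed by this repo. As a rough, assumption-laden sketch: a DensePose IUV map can be treated as an H x W x 3 array, and a CorrPM parsing map as an H x W integer label map that is one-hot encoded before being fed to the network. The 20-class count below assumes LIP-style parsing labels and is not taken from this repo:

```python
import numpy as np

NUM_PARSING_CLASSES = 20  # assumed LIP-style label set; adjust to your parser

def one_hot_parsing(parsing, num_classes=NUM_PARSING_CLASSES):
    """Convert an (H, W) integer label map to a (num_classes, H, W) one-hot tensor."""
    h, w = parsing.shape
    onehot = np.zeros((num_classes, h, w), dtype=np.float32)
    for c in range(num_classes):
        onehot[c] = (parsing == c)
    return onehot

# Synthetic stand-ins for real DensePose / CorrPM outputs:
iuv = np.random.rand(256, 176, 3).astype(np.float32)        # DensePose IUV map
parsing = np.random.randint(0, NUM_PARSING_CLASSES, (256, 176))
onehot = one_hot_parsing(parsing)
```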


============ Training ============

Run [python train_deepf.py --result_dir result/deepf/ --model_dir model/deepf/] to train PT^2 on DeepFashion.
Run [python train_market.py --result_dir result/market/ --model_dir model/market/] to train PT^2 on Market-1501.

============ Testing ============

[test.py] prints all evaluation scores and saves visualized images under the specified result directory.

Run [python test.py --dataset deepf --result_dir result/deepf/ --checkpoint_ema_path model/deepf/ema.ckpt] for the DeepFashion dataset.
Run [python test.py --dataset market --result_dir result/market/ --checkpoint_ema_path model/market/ema.ckpt] for the Market-1501 dataset.
