--UTILIZED COMPUTING HARDWARE 
Experiments are carried out on four NVIDIA RTX A4000 GPUs.

--DATASETS
For the citation and Amazon graphs, we use the dgl.data package; DGL is free software released under the Apache License 2.0.

Furthermore, the Pokec-n network was created and presented in:
[1] Dai, Enyan, and Suhang Wang. "Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information." Proceedings of the 14th ACM International Conference on Web Search and Data Mining. 2021.

--NECESSARY PACKAGES
All experiments are executed in a Python 3.8 environment.
PyTorch 1.12 is used to obtain the results. It is installed in the Anaconda environment (Python 3.8) with the command:
	pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
All other necessary packages can be installed with the command:
	pip install -r requirements.txt
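
As a quick sanity check (a minimal sketch; the pinned versions above are authoritative), the interpreter version can be verified before installing the wheels:

```shell
# Confirm the active interpreter is at least Python 3.8 before installing the pinned wheels.
python -c 'import sys; assert sys.version_info >= (3, 8), sys.version'
echo "Python version OK"
```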

-- Results in Table 1 --

The code that generates the results for the proposed schemes in this study builds on the code provided at 'https://github.com/Graph-COM/GraphMaker'.
The corresponding repository is released under an MIT license.
The results in Table 1 can be regenerated with the code in the directory 'FairWire_supplementary/GraphMaker'.
To train the diffusion models, use the following commands:

-for Cora:      			python train_org.py -d cora 
-for Citeseer:  			python train_org.py -d citeseer
-for Amazon Photo: 		python train_org.py -d amazon_photo
-for Amazon Computer:		python train_org.py -d amazon_computer
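
The four runs above can also be launched from a single loop; this is only a convenience wrapper around the documented commands, printed first as a dry run:

```shell
# Dry run: prints one train_org.py command per dataset.
# Remove the leading 'echo' to actually launch training.
for d in cora citeseer amazon_photo amazon_computer; do
    echo python train_org.py -d "$d"
done
```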

After training is complete, the saved models can be used to sample synthetic graphs. The link prediction results in Table 1 can be regenerated with the following commands (the reported metrics are for (G|G) and (G|\tilde{G})):

-for Cora:      				python sample_org.py --model_path cora_org_cpts/Sync_T3.pth
-for Citeseer:      			python sample_org.py --model_path citeseer_org_cpts/Sync_T3.pth
-for Amazon Photo:      			python sample_org.py --model_path amazon_photo_org_cpts/Sync_T3.pth
-for Amazon Computer:			python sample_org.py --model_path amazon_computer_org_cpts/Sync_T3.pth
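
Since the checkpoint paths above all follow the `<dataset>_org_cpts/Sync_T3.pth` pattern, sampling can likewise be scripted (a dry-run sketch, assuming that naming holds for all four checkpoints):

```shell
# Dry run: prints one sample_org.py command per saved checkpoint.
# Remove the leading 'echo' to actually sample.
for d in cora citeseer amazon_photo amazon_computer; do
    echo python sample_org.py --model_path "${d}_org_cpts/Sync_T3.pth"
done
```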

-- Results in Table 2 --
The results for $\mathcal{L}_{FairWire}$ in Table 2 can be regenerated with the code in the directory 'FairWire_supplementary/FairWire_LP'. Due to the code organization, the diffusion models must be trained and their checkpoints saved before running the commands below. The results can then be obtained with the following commands, run from 'FairWire_supplementary/FairWire_LP' (the reported metrics are for (G|G)):

-for Cora (set alpha in eval_utils.py to 0.05):
				python sample.py --model_path cora_org_cpts/Sync_T3.pth

-for Citeseer (set alpha in eval_utils.py to 0.1):
				python sample.py --model_path citeseer_org_cpts/Sync_T3.pth

-for Amazon Photo (set alpha in eval_utils.py to 0.01):
				python sample.py --model_path amazon_photo_org_cpts/Sync_T3.pth

-for Amazon Computer (set alpha in eval_utils.py to 0.5):
				python sample.py --model_path amazon_computer_org_cpts/Sync_T3.pth

Similarly, the results for the natural baseline $\mathcal{G}$ can be regenerated with the following commands, run from 'FairWire_supplementary/GraphMaker' (the reported metrics are for (G|G)):

-for Cora:      				python sample_lp.py --model_path cora_org_cpts/Sync_T3.pth
-for Citeseer:      			python sample_lp.py --model_path citeseer_org_cpts/Sync_T3.pth
-for Amazon Photo:      			python sample_lp.py --model_path amazon_photo_org_cpts/Sync_T3.pth
-for Amazon Computer:			python sample_lp.py --model_path amazon_computer_org_cpts/Sync_T3.pth

-- Results in Table 3 --
The results for FairWire in Table 3 can be regenerated with the code in the directory 'FairWire_supplementary/FairWire'.
To train the fair diffusion models, run the following commands from 'FairWire_supplementary/FairWire':

-for Cora:      			python train.py -d cora -aA 10 -aX 0
-for Citeseer:  			python train.py -d citeseer -aA 0.1 -aX 0
-for Amazon Photo: 		python train.py -d amazon_photo -aA 0.05 -aX 0
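
Since only the -aA weight differs per dataset, the three runs can be driven from dataset:weight pairs (a convenience sketch over the commands above, printed first as a dry run):

```shell
# Dry run: each entry pairs a dataset with its fairness weight -aA (-aX is 0 throughout).
# Remove the leading 'echo' to actually launch training.
for pair in cora:10 citeseer:0.1 amazon_photo:0.05; do
    d=${pair%%:*}
    aA=${pair##*:}
    echo python train.py -d "$d" -aA "$aA" -aX 0
done
```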


After training is complete, the saved models can be used to sample synthetic graphs. The link prediction results in Table 3 can be regenerated with the following commands (the reported metrics are for (G|\tilde{G})):

-for Cora:      				python sample.py --model_path cora_10.0_0.0_cpts/Sync_T3.pth
-for Citeseer:      			python sample.py --model_path citeseer_0.1_0.0_cpts/Sync_T3.pth
-for Amazon Photo:      			python sample.py --model_path amazon_photo_0.05_0.0_cpts/Sync_T3.pth
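
The checkpoint directories above encode the training weights as `<dataset>_<aA>_<aX>_cpts` (e.g. cora_10.0_0.0_cpts), so sampling can be scripted the same way (a dry-run sketch, assuming that naming convention):

```shell
# Dry run: prints one sample.py command per fair checkpoint;
# directory names follow <dataset>_<aA>_<aX>_cpts. Remove 'echo' to actually sample.
for pair in cora:10.0 citeseer:0.1 amazon_photo:0.05; do
    d=${pair%%:*}
    aA=${pair##*:}
    echo python sample.py --model_path "${d}_${aA}_0.0_cpts/Sync_T3.pth"
done
```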


-- Results in Table 4 --

The results for FairWire in Table 4 can be regenerated with the code in the directory 'FairWire_supplementary/FairWire'.
To train the fair diffusion models, run the following commands from 'FairWire_supplementary/FairWire':

-for German:      			python train.py -d german -aA 10 -aX 0
-for Pokec_n:  				python train.py -d pokec_n -aA 1.0 -aX 0


After training is complete, the saved models can be used to sample synthetic graphs. The link prediction results in Table 4 can be regenerated with the following commands (the reported metrics are for (G|\tilde{G})):

-for German:      				python sample.py --model_path german_10.0_0.0_cpts/Sync_T3.pth
-for Pokec_n:      				python sample.py --model_path pokec_n_1.0_0.0_cpts/Sync_T3.pth



-- Results for FairGen in Table 4 --

To obtain results for FairGen, we first create the synthetic graphs with this algorithm. For this step, the code at "https://github.com/Leo02016/FairGen" is used together with the German and Pokec-n datasets.
To use this code, first install the necessary packages (conda env create -f environment.yml). Then, with the required packages installed,
run the following commands from 'FairWire_supplementary/FairGen'.

-for German:					python main.py -d german -b
-for Pokec_n:					python main.py -d pokec_n -b
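
The two FairGen runs can also be combined into one loop (a dry-run convenience sketch around the commands above):

```shell
# Dry run: prints the FairGen synthesis command for each dataset.
# Remove the leading 'echo' to actually run FairGen.
for d in german pokec_n; do
    echo python main.py -d "$d" -b
done
```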

After these commands finish, the edge lists of the synthetic graphs are written to "./data/german/german_output_edgelist_0_2_metric.txt" and "./data/pokec_n/pokec_n_output_edgelist_0_2_metric.txt", respectively.
To evaluate these graphs for node classification, run the following commands from 'FairWire_supplementary/FairGen'.

-for German:      				python sample.py -d german
-for Pokec_n:      				python sample.py -d pokec_n

-- Results in Table 9 --
The link prediction results in Table 9 can be regenerated with the code in the directory 'FairWire_supplementary/FairWire_LP' (the reported metrics are for (G|G)):

-for lambda = 0.01 (set alpha in eval_utils.py to 0.01): 

     				python sample.py --model_path cora_org_cpts/Sync_T3.pth
				python sample.py --model_path citeseer_org_cpts/Sync_T3.pth
				python sample.py --model_path amazon_photo_org_cpts/Sync_T3.pth
				python sample.py --model_path amazon_computer_org_cpts/Sync_T3.pth
 
-for lambda = 0.05 (set alpha in eval_utils.py to 0.05): 

     				python sample.py --model_path cora_org_cpts/Sync_T3.pth
				python sample.py --model_path citeseer_org_cpts/Sync_T3.pth
				python sample.py --model_path amazon_photo_org_cpts/Sync_T3.pth
				python sample.py --model_path amazon_computer_org_cpts/Sync_T3.pth

-for lambda = 0.1 (set alpha in eval_utils.py to 0.1): 

     				python sample.py --model_path cora_org_cpts/Sync_T3.pth
				python sample.py --model_path citeseer_org_cpts/Sync_T3.pth
				python sample.py --model_path amazon_photo_org_cpts/Sync_T3.pth
				python sample.py --model_path amazon_computer_org_cpts/Sync_T3.pth


-- Results in Table 6 --

The results for graph generation in Table 6 can be regenerated with the code in the directory 'FairWire_supplementary/FairWire'. First, to train the fair diffusion models, run the following commands from 'FairWire_supplementary/FairWire':

-for lambda = 0.01
				python train.py -d citeseer -aA 0.01 -aX 0
				python train.py -d amazon_photo -aA 0.01 -aX 0
-for lambda = 0.05
				python train.py -d citeseer -aA 0.05 -aX 0
				python train.py -d amazon_photo -aA 0.05 -aX 0
-for lambda = 0.1
				python train.py -d citeseer -aA 0.1 -aX 0
				python train.py -d amazon_photo -aA 0.1 -aX 0
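
The full sweep is the cross product of the three lambda values and the two datasets; it can be scripted as (a dry-run sketch of the commands above):

```shell
# Dry run: sweep the fairness weight -aA over the lambda values for both datasets.
# Remove the leading 'echo' to actually launch the six training runs.
for aA in 0.01 0.05 0.1; do
    for d in citeseer amazon_photo; do
        echo python train.py -d "$d" -aA "$aA" -aX 0
    done
done
```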

After training is complete, the saved models can be used to sample synthetic graphs. The graph generation results in Table 6 can be regenerated with the following commands (the reported metrics are for (G|\tilde{G})):

-for lambda = 0.01: 
     				python sample.py --model_path citeseer_0.01_0.0_cpts/Sync_T3.pth
				python sample.py --model_path amazon_photo_0.01_0.0_cpts/Sync_T3.pth
 
-for lambda = 0.05: 
     				python sample.py --model_path citeseer_0.05_0.0_cpts/Sync_T3.pth
				python sample.py --model_path amazon_photo_0.05_0.0_cpts/Sync_T3.pth

-for lambda = 0.1: 
     				python sample.py --model_path citeseer_0.1_0.0_cpts/Sync_T3.pth
				python sample.py --model_path amazon_photo_0.1_0.0_cpts/Sync_T3.pth

-- Results in Table 10 --

The results in Table 10 can be regenerated with the code in the directory 'FairWire_supplementary/FairWire'. First, to train the fair diffusion models, run the following commands from 'FairWire_supplementary/FairWire':

-for lambda = 1.0
				python train.py -d cora -aA 1.0 -aX 0
-for lambda = 10.0
				python train.py -d cora -aA 10.0 -aX 0
-for lambda = 100.0
				python train.py -d cora -aA 100.0 -aX 0

After training is complete, the saved models can be used to sample synthetic graphs. The results in Table 10 can be regenerated with the following commands (the reported metrics are for (G|\tilde{G})):

-for lambda = 1.0: 
     				python sample.py --model_path cora_1.0_0.0_cpts/Sync_T3.pth
 
-for lambda = 10.0: 
     				python sample.py --model_path cora_10.0_0.0_cpts/Sync_T3.pth

-for lambda = 100.0: 
     				python sample.py --model_path cora_100.0_0.0_cpts/Sync_T3.pth
