This folder contains the supplementary materials for the paper "Data-Free FP8 Quantization of High-Quality Diffusion Models".

It includes: 

*A presentation comparing 25 images generated from Stable Diffusion prompts (3 step counts × 4 methods)*

The Stable Diffusion images were picked from the first 100 prompts of *Gustavosta/Stable-Diffusion-Prompts* (an open-source Hugging Face dataset).
The 100 prompts we used for the SSIM/PSNR evaluation in the paper are designed to imitate real usage.
For the file included here, we removed images containing political statements, conspiracy theories, or content that could be considered not safe for work, as well as comparisons where the differences between images were too small.

*A presentation comparing 20 images generated from MS-COCO prompts (3 step counts × 4 methods)*

These are the first 20 prompts obtained via the API: `from T2IBenchmark.datasets import get_coco_30k_captions`.
In this case, the prompts were less risky, and we did not cherry-pick samples.
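As a sketch of the selection, the snippet below mimics the call above with a self-contained stand-in; we assume `get_coco_30k_captions` returns a mapping from image id to caption (the real function lives in `T2IBenchmark.datasets`, and its exact return shape may differ):

```python
def get_coco_30k_captions():
    # Stand-in for the T2IBenchmark API (assumed shape: image id -> caption),
    # used here only so the selection logic can be shown self-contained.
    return {i: f"caption {i}" for i in range(30000)}

captions = get_coco_30k_captions()
# Take the first 20 captions as prompts, in dataset order (no cherry-picking).
prompts = list(captions.values())[:20]
print(len(prompts))  # 20
```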

*The source code for the research project*

The source code includes an env.yml environment file for easy installation with CUDA:

`conda env create -f env.yml`

`conda activate qdiff`

Otherwise, the repository's main dependencies are:

`diffusers==0.25.1`
`transformers==4.37.2`

Other packages can be installed on demand. 
Disclaimer: The code uses the qdiff library, but in this version the library was bundled into the source tree for simplicity.

Once installed, the main script for generating images is:
`python generate_images.py`

Example runs:

For an FP32 baseline run (M23E8) over Stable Diffusion prompts, generating 5 images with 40 steps:
`python generate_images.py -n 40 -A M23E8 --prompt sd -N 5 --fp32`


For an M2E5 run with R2N (`--noSR`) over Stable Diffusion prompts, generating 3 images with 80 steps:
`python generate_images.py -n 80 -A M2E5 --noSR --prompt sd -N 3`
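To illustrate the difference between R2N (round-to-nearest, `--noSR`) and SR (stochastic rounding), here is a minimal sketch on a uniform grid; the helper names and the grid spacing are our own illustration, not the repository's API, and a real FP8 grid is non-uniform:

```python
import random

random.seed(0)  # deterministic demo

def round_to_nearest(x, step):
    # R2N: deterministically snap x to the nearest grid point.
    return round(x / step) * step

def stochastic_round(x, step):
    # SR: round down or up with probability proportional to the distance
    # to each neighbour, making the result unbiased in expectation.
    lo = (x // step) * step
    frac = (x - lo) / step
    return lo + step if random.random() < frac else lo

STEP = 0.25  # illustrative uniform spacing
x = 0.30

print(round_to_nearest(x, STEP))  # 0.25
# Averaging many stochastic roundings recovers x in expectation.
avg = sum(stochastic_round(x, STEP) for _ in range(20000)) / 20000
print(round(avg, 2))  # close to 0.30
```

Unbiasedness in expectation is the usual motivation for stochastic rounding in low-precision arithmetic: individual roundings are noisier than R2N, but the error does not accumulate systematically.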

For an M3E4 run with R2N and flex-bias (`-f`) over MS-COCO prompts, generating 3 images with 80 steps:
`python generate_images.py -n 80 -A M3E4 -f --noSR --prompt coco -N 3`
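The general idea behind a flexible exponent bias is to shift a format's exponent range per tensor so its top aligns with the tensor's maximum magnitude. The sketch below is only an illustration of that idea under our own assumptions; the paper's exact flex-bias scheme may differ:

```python
import math

def flex_bias(values, n_exp_bits):
    # Pick an exponent bias so that the largest exponent code of an
    # n_exp_bits format lines up with max(|values|).
    # Illustrative only; not the repository's implementation.
    max_abs = max(abs(v) for v in values)
    max_exp_code = (1 << n_exp_bits) - 1  # top exponent code
    return max_exp_code - math.floor(math.log2(max_abs))

# A tensor whose magnitudes peak around 0.5 (exponent -1):
bias = flex_bias([0.1, -0.5, 0.03], n_exp_bits=4)
print(bias)  # 16: top code 15 minus floor(log2(0.5)) = 15 - (-1)
```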

For an M4E3 run with SR and flex-bias (`-f`), using the custom 'lion' prompt, generating 1 image at each of 20, 40, ..., 200 steps:

`python generate_images.py -n -2 -A M4E3 -f --prompt lion -N 1`

The custom prompt is defined in utils.prompts:

`"lion": "A majestic lion jumping from a big stone at night, with star-filled skies. Hyperdetailed, with Complex tropic, African background."`

It also has a negative prompt:

`"lion": "extra limbs"`
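As a hedged sketch, the two entries above might live in a prompt table like the one below; we assume plain dicts of positive and negative prompts keyed by name, but the exact structure of utils.prompts in the repository may differ:

```python
# Illustrative stand-in for utils.prompts (assumed structure:
# two dicts keyed by prompt name; the real file may differ).
prompts = {
    "lion": ("A majestic lion jumping from a big stone at night, "
             "with star-filled skies. Hyperdetailed, with Complex "
             "tropic, African background."),
}
negative_prompts = {
    "lion": "extra limbs",
}

key = "lion"
prompt = prompts[key]
# Fall back to an empty negative prompt for keys without one.
negative = negative_prompts.get(key, "")
print(negative)  # extra limbs
```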

You may add additional prompts directly to the utils.prompts file, e.g.:

`"<mykey>": "<myprompt>"`

You can then generate images with it. For example, for an M4E3 run with flex-bias, SR, and WSR (p=4), using your prompt with 5 different seeds and 200 steps:
`python generate_images.py -n 200 -A M4E3 -f -p 4 --prompt <mykey> -N 5`

The code also supports different schedulers, guidance scales, and resolutions.



