Highlights

- A diffusion model conditioned on a Gaussian mixture model is proposed.
- Latent distributions built from features are shown to outperform those built from classes.
- A new negative Gaussian mixture gradient is integrated into the diffusion model.
- The new gradient offers benefits similar to the Wasserstein distance.
- Combining the gradient with entropy outperforms binary cross entropy in training.