To validate our PCA-based strand representation, we compare it with several deep learning-based methods and a simpler PCA formulation. All models are trained on two datasets: USC-HairSalon and a more diverse private dataset. Both experiments show a similar trend: our PCA-based strand representation achieves a significantly lower position error while maintaining a comparatively low curvature error.
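The core idea of a PCA-based strand representation can be sketched as follows: each strand, a fixed-length sequence of 3D points, is flattened to a vector, and the collection of strands is compressed onto its top principal components. This is a minimal illustration only; the dimensions, random data, and variable names are assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical strand data: N strands, each with P 3D points,
# flattened into vectors of length 3P (numbers are illustrative).
N, P, K = 512, 100, 64
strands = rng.standard_normal((N, 3 * P))

# Fit PCA: center the data, then take the top-K principal directions.
mean = strands.mean(axis=0)
centered = strands - mean
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:K]                      # (K, 3P) principal directions

# Encode each strand as K PCA coefficients, then reconstruct.
coeffs = centered @ basis.T         # (N, K)
recon = coeffs @ basis + mean       # (N, 3P)

# Mean per-point position error of the reconstruction,
# analogous in spirit to the position error reported above.
pos_err = np.linalg.norm(
    (recon - strands).reshape(N, P, 3), axis=-1).mean()
```

A strand is thus reduced from 3P numbers to K coefficients, and reconstruction error can be traded off against K.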
Models trained on USC-HairSalon
Models trained on the private dataset
We first compare with the strand VAE in GroomGen, where both our implementation and the official checkpoint fail to faithfully reproduce the curly hairstyle.
We then compare with the hairstyle VAE in GroomGen. Our implementation produces results that are more natural but sometimes over-smoothed, while the official checkpoint produces less natural results with irregular curl patterns. These differences likely arise from differences in the training data.
Finally, we show that 100 points per strand are insufficient to recreate a similarly kinky hairstyle, and that GroomGen's architecture is unstable and cannot fully reconstruct it.
Here we show random hair models synthesized by sampling the latent space, compared with GroomGen (our implementation).
Here we show hairstyle interpolation results, compared with [Weng et al. 2013] and [Zhou et al. 2018].
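In a PCA coefficient space, interpolating between two hairstyles can be as simple as a linear blend of their coefficients followed by decoding. The sketch below is a hedged illustration under assumed dimensions and a random basis; it is not the paper's implementation:

```python
import numpy as np

def lerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Linearly interpolate between two coefficient vectors."""
    return (1.0 - t) * a + t * b

# Hypothetical PCA basis and mean for strands of P points, K components.
P, K = 100, 64
rng = np.random.default_rng(0)
basis = rng.standard_normal((K, 3 * P))   # illustrative, not a fitted basis
mean = rng.standard_normal(3 * P)

# Coefficients of a strand from each of two hairstyles; blend and decode.
coeff_a = rng.standard_normal(K)
coeff_b = rng.standard_normal(K)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    strand = (lerp(coeff_a, coeff_b, t) @ basis + mean).reshape(P, 3)
```

At t = 0 and t = 1 this reproduces the two endpoint strands exactly; intermediate t values give a smooth transition in coefficient space.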
Here we show single-view hair reconstruction and editing results from an image sequence, compared with [Yang et al. 2019] and HairStep.
Here we show more hair-conditioned image generation results.