Virtual Measurement Garment for Per-Garment Virtual Try-On

Published: 23 Jan 2024, Last Modified: 30 May 2024. Venue: GI 2024. License: CC BY 4.0
Letter of Changes: In this letter, we summarize the revisions we made to address the reviewers' comments. We have attached a highlighted version of the revised paper in the supplementary material to help locate the revised parts.
- We performed an additional "dataset requirement analysis" in Section 4.4.
- Line 212: We fixed the compilation error.
- Line 240: We corrected the inaccurate statement that "pix2pixHD is the state-of-the-art image-to-image translation method."
- We added a discussion of the main reason why our method only supports short-sleeve garments and of how to expand the supported garment types in the future.
Keywords: Virtual try-on, Deep image synthesis
TL;DR: We propose a novel per-garment virtual try-on method that does not require an additional measurement garment. Moreover, our method enables users to perform virtual try-on in arbitrary background environments.
Abstract: The popularity of virtual try-on methods has increased in recent years as they allow users to preview the appearance of garments on themselves without physically wearing them. However, existing image-based methods for general virtual try-on provide limited support for synthesizing realistic and consistent garment images under different poses, due to two main difficulties: 1) the datasets used to train these methods contain a vast collection of garments but lack fine details of each garment; 2) these methods synthesize results by warping a front-view image of the target garment in a rest pose, which yields poor quality and detail for other viewpoints and poses. To overcome these drawbacks, per-garment virtual try-on methods train garment-specific networks that can produce high-quality results with fine-grained details for a particular target garment. However, existing per-garment virtual try-on methods require the use of a physical measurement garment, which limits their applicability. In this paper, we propose a novel per-garment virtual try-on method that leverages a virtual measurement garment, which eliminates the need for the physical measurement garment, to guide the synthesis of high-quality and temporally consistent garment images under various poses. Furthermore, we introduce a gap-filling module that effectively fills the gap between the synthesized garment and body parts. We conduct qualitative and quantitative evaluations against a state-of-the-art image-based virtual try-on method, along with ablation studies, to demonstrate that our method achieves superior performance in terms of the realism and consistency of the generated garment images.
Supplementary Material: zip
Video: zip
Submission Number: 6