GT-MUST: Gated Try-on by Learning the Mannequin-Specific Transformation

2022 (modified: 16 Nov 2022) · ACM Multimedia 2022
Abstract: Given a mannequin (i.e., a reference person) and a target garment, the virtual try-on (VTON) task aims to dress the mannequin in the provided garment automatically, and has attracted increasing attention in recent years. Previous works usually conduct the garment deformation under the guidance of "shape". However, such "shape-only transformation" ignores local structures and results in unnatural distortions. To address this issue, we propose a Gated Try-on method by learning the ManneqUin-Specific Transformation (GT-MUST). Technically, we implement GT-MUST as a three-stage deep neural model. First, GT-MUST learns the "mannequin-specific transformation" with a "take-off" mechanism, which recovers the warped clothes on the mannequin to their original in-shop state. Then, the learned "mannequin-specific transformation" is inverted and used to generate the mannequin-specific warped state for a target garment. Finally, a special gate is employed to better combine the mannequin-specific warped garment with the mannequin. GT-MUST benefits from obtaining the mannequin-specific information by solving the "take-off" task, which is much easier than the common "try-on" task, since flat in-shop garments usually vary less in shape than those worn on the body. Experiments on the fashion dataset demonstrate that GT-MUST outperforms state-of-the-art virtual try-on methods. The code is available at https://github.com/wangning-001/GT-MUST.
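The three stages described above (learn the "take-off" map, invert it, then gate the composition) can be sketched in miniature. This is a hedged illustration only: the paper's learned networks are replaced here by a 1-D affine warp fit with least squares, and all function names (`learn_take_off`, `invert`, `apply_map`, `gated_blend`) are illustrative placeholders, not from the released implementation.

```python
# Toy sketch of the GT-MUST three-stage pipeline (assumption: each learned
# network is reduced to a scalar affine transform; names are hypothetical).

def learn_take_off(warped, inshop):
    """Stage 1 ("take-off"): fit the mannequin-specific map warped -> in-shop
    from paired scalar samples via ordinary least squares."""
    n = len(warped)
    mw = sum(warped) / n
    mi = sum(inshop) / n
    cov = sum((w - mw) * (i - mi) for w, i in zip(warped, inshop))
    var = sum((w - mw) ** 2 for w in warped)
    scale = cov / var
    shift = mi - scale * mw
    return scale, shift

def invert(transform):
    """Stage 2: invert the take-off map, yielding in-shop -> mannequin-warped,
    so a *new* in-shop garment can be warped onto the same mannequin."""
    scale, shift = transform
    return 1.0 / scale, -shift / scale

def apply_map(transform, xs):
    """Apply an affine map elementwise to garment samples."""
    scale, shift = transform
    return [scale * x + shift for x in xs]

def gated_blend(warped_garment, mannequin, gate):
    """Stage 3: a per-element gate in [0, 1] softly composes the warped
    garment with the mannequin appearance (stand-in for the learned gate)."""
    return [g * w + (1.0 - g) * m
            for g, w, m in zip(gate, warped_garment, mannequin)]
```

The key point the sketch captures is the abstract's argument: the take-off direction (warped → flat in-shop) is fit first because it is the easier regression target, and its inverse then transfers the mannequin-specific deformation to any new garment.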