HyperDisGAN: A Controllable Variety Generative Model Via Hyperplane Distances for Downstream Classifications

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: data insufficiency, hyperplane space, cross-domain and intra-domain generation, controllable variation degree, hinge loss, Pythagorean theorem, downstream classification
TL;DR: a novel data augmentation method that improves the performance of downstream classification architectures by reshaping the decision boundary in the hyperplane space
Abstract: Despite the potential benefits of data augmentation for mitigating data insufficiency, traditional augmentation methods rely primarily on prior intra-domain knowledge. On the other hand, advanced generative adversarial networks (GANs) generate cross-domain samples with limited variety, particularly on small-scale datasets. In light of these challenges, we propose that accurately controlling the variation degree of generated samples can reshape the decision boundary in the hyperplane space for downstream classification. To achieve this, we develop a novel hyperplane-distances GAN (HyperDisGAN) that effectively controls the locations of generated cross-domain and intra-domain samples. These locations are defined, respectively, by the vertical distances of cross-domain target samples to the optimal hyperplane and by the horizontal distances of intra-domain target samples to the source samples, both determined using the hinge loss and the Pythagorean theorem. Experimental results show that the proposed HyperDisGAN consistently yields significant improvements in accuracy (ACC) and area under the receiver operating characteristic curve (AUC) on two small-scale natural and two medical datasets, in the hyperplane spaces of eleven downstream classification architectures. Our code is available at the anonymous link: https://anonymous.4open.science/r/HyperDisGAN-ICLR2024.
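To make the two distance definitions in the abstract concrete, the following is a minimal sketch (not the authors' implementation; all function names are hypothetical) assuming a linear decision function f(x) = w·x + b in the hyperplane space: the vertical distance is the signed perpendicular distance to that hyperplane (the quantity a hinge loss acts on), and the horizontal distance between a source and a target sample is recovered from their Euclidean distance via the Pythagorean theorem.

```python
# Sketch only: illustrates vertical/horizontal hyperplane distances under the
# assumption of a linear decision function f(x) = w.x + b. Not the paper's code.
import numpy as np

def vertical_distance(x, w, b):
    """Signed perpendicular distance of feature vector x to the hyperplane w.x + b = 0."""
    return (np.dot(w, x) + b) / np.linalg.norm(w)

def horizontal_distance(x_src, x_tgt, w, b):
    """Distance between source and target measured parallel to the hyperplane,
    recovered from the Euclidean distance and the difference of vertical
    distances via the Pythagorean theorem."""
    d_euclid = np.linalg.norm(x_tgt - x_src)
    d_vert = vertical_distance(x_tgt, w, b) - vertical_distance(x_src, w, b)
    # Clamp to avoid a tiny negative value from floating-point error.
    return np.sqrt(max(d_euclid**2 - d_vert**2, 0.0))

# Toy usage with random features and a random hyperplane.
rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1
x_s, x_t = rng.normal(size=16), rng.normal(size=16)
print(vertical_distance(x_t, w, b), horizontal_distance(x_s, x_t, w, b))
```

In the paper's setting these distances would parameterize where generated cross-domain and intra-domain samples are placed relative to the downstream classifier's hyperplane; the sketch above only shows the geometry, not the GAN conditioning.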
Supplementary Material: zip
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3037