Abstract: Ensuring logical consistency in predictions is a crucial yet often overlooked aspect of face attribute classification. We explore the potential reasons for this oversight and introduce two pressing challenges to the field: 1) How can we ensure that a model, when trained with data checked for logical consistency, yields predictions that are logically consistent? 2) How can we achieve the same with training data that has not undergone logical consistency checks? Minimizing manual effort is also essential for enhancing automation. To address these challenges, we introduce two datasets, FH41K and CelebA-logic, and propose LogicNet, which combines adversarial learning and label poisoning to learn the logical relationships between attributes without the need for post-processing steps. The accuracy of LogicNet surpasses that of the next-best approach by 13.36%, 9.96%, and 1.01% on FH37K, FH41K, and CelebA-logic, respectively. In real-world case analysis, our approach reduces the average number of failed cases (predictions containing logically inconsistent attribute combinations) by more than 50% compared to other methods. Code link: https://github.com/HaiyuWu/LogicNet.
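The sketch below is a minimal, hypothetical illustration of how label poisoning could be paired with an adversarial consistency discriminator; it is not the LogicNet implementation. The exclusive attribute pairs, network sizes, and loss wiring shown in the comments are illustrative assumptions only.

```python
# Illustrative sketch only -- not the authors' LogicNet code.
# Idea: "poison" clean multi-label vectors into logically impossible ones,
# then train a discriminator to tell consistent from inconsistent label
# vectors, so the attribute classifier can be pushed toward consistent outputs.
import torch
import torch.nn as nn

# Hypothetical mutually exclusive attribute index pairs in the label vector,
# e.g., (bald, long_hair), (beard, clean_shaven).
EXCLUSIVE_PAIRS = [(0, 1), (2, 3)]

def poison_labels(labels: torch.Tensor) -> torch.Tensor:
    """Set both attributes of a randomly chosen exclusive pair to positive,
    producing a logically inconsistent label vector per sample."""
    poisoned = labels.clone()
    for i in range(labels.size(0)):
        a, b = EXCLUSIVE_PAIRS[torch.randint(len(EXCLUSIVE_PAIRS), (1,)).item()]
        poisoned[i, a] = 1.0
        poisoned[i, b] = 1.0
    return poisoned

class ConsistencyDiscriminator(nn.Module):
    """Scores whether a label vector looks logically consistent (logit output)."""
    def __init__(self, num_attrs: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_attrs, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return self.net(y)

# Illustrative training wiring (comments only, names are hypothetical):
#   clean, poisoned = labels, poison_labels(labels)
#   d_loss = bce(disc(clean), ones) + bce(disc(poisoned), zeros)      # discriminator step
#   g_loss = bce(logits, labels) + bce(disc(torch.sigmoid(logits)), ones)  # classifier step
```

The intended effect of such a setup is that the classifier learns attribute relationships during training, so no post-processing is needed to remove impossible combinations at inference time.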