Unveiling Robust Feature Spaces: Image vs. Embedding-Oriented Approaches for Plant Disease Identification

Published: 2023, APSIPA ASC 2023. Last Modified: 02 Apr 2024.
Abstract: Deep learning models such as Convolutional Neural Networks (CNNs) can learn robust features that enable effective plant disease classification. However, these models require large and diverse datasets that reflect the variations present in real-world scenarios in order to perform well not only on classes the model has already seen, but also on new classes that were not part of training. To address this challenge, Conditional Generative Adversarial Network (CGAN) models offer significant advantages: CGANs can generate diverse synthetic data, thereby expanding the training dataset and enhancing the generalization capability of deep learning models. While the conventional approach trains CGANs to generate synthetic plant disease images, our investigation goes a step further by exploring the effectiveness of training CGANs on embedded images instead. This research is motivated by the difficulty of training CGANs on high-dimensional color images, as opposed to their simpler, low-dimensional embeddings. We found that the CGAN trained more easily in the embedding space and yielded better plant disease classification for both seen and unseen class compositions. Our visual analysis also showed that a small CGAN model generates better synthetic embeddings than synthetic images, which leads to significantly better overall classification performance.
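The abstract's central argument is dimensional: a conditional generator that outputs a low-dimensional embedding is a far smaller model than one that must output a full-resolution color image. The sketch below illustrates this gap with a minimal conditional generator forward pass in plain NumPy. All dimensions (100-d noise, 10 classes, 512-d embeddings, 256×256 RGB images) and the one-hidden-layer MLP architecture are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical dimensions, chosen for illustration (not from the paper):
NOISE_DIM, NUM_CLASSES, EMBED_DIM = 100, 10, 512
IMAGE_DIM = 3 * 256 * 256  # a flattened 256x256 RGB image

rng = np.random.default_rng(0)

def make_mlp_generator(in_dim, hidden_dim, out_dim, rng):
    """Weights of a one-hidden-layer MLP generator (an assumed toy architecture)."""
    return {
        "W1": rng.standard_normal((in_dim, hidden_dim)) * 0.02,
        "b1": np.zeros(hidden_dim),
        "W2": rng.standard_normal((hidden_dim, out_dim)) * 0.02,
        "b2": np.zeros(out_dim),
    }

def generate(params, noise, labels):
    """Conditional forward pass: condition by concatenating noise with one-hot labels."""
    x = np.concatenate([noise, labels], axis=1)
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

def param_count(params):
    return sum(p.size for p in params.values())

# Same hidden width, two output spaces: embeddings vs. flattened images.
embed_gen = make_mlp_generator(NOISE_DIM + NUM_CLASSES, 256, EMBED_DIM, rng)
image_gen = make_mlp_generator(NOISE_DIM + NUM_CLASSES, 256, IMAGE_DIM, rng)

noise = rng.standard_normal((4, NOISE_DIM))
labels = np.eye(NUM_CLASSES)[[0, 1, 2, 3]]     # one-hot disease-class labels
fake_embeddings = generate(embed_gen, noise, labels)

print(fake_embeddings.shape)                   # (4, 512): one embedding per label
print(param_count(embed_gen), param_count(image_gen))
```

Even in this toy setting, the image-space generator's output layer alone dwarfs the entire embedding-space generator, which is consistent with the paper's observation that the CGAN is easier to train in the embedding space.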
