Abstract: Existing plant disease classification models achieve remarkable performance on in-laboratory images of diseased plants; however, their performance often degrades significantly on in-the-wild images. Moreover, we observe that in-the-wild plant images may exhibit similar appearances across different diseases (i.e., small inter-class discrepancy), while images of the same disease may look quite different (i.e., large intra-class variance). Motivated by this observation, we propose an in-the-wild multimodal plant disease recognition dataset that contains not only the largest number of disease classes but also text-based descriptions for each disease. In particular, the newly provided text descriptions supply rich information in the textual modality and facilitate in-the-wild disease classification under small inter-class discrepancy and large intra-class variance. Our proposed dataset can therefore be regarded as an ideal testbed for evaluating disease recognition methods in the real world. In addition, we present a strong yet versatile baseline that models text descriptions and visual data through multiple prototypes per class. By fusing the contributions of multimodal prototypes during classification, our baseline effectively addresses the small inter-class discrepancy and large intra-class variance issues. Remarkably, our baseline can recognize diseases not only in the standard supervised setting but also in few-shot and training-free scenarios. Extensive benchmarking results demonstrate that our proposed in-the-wild multimodal dataset poses many new challenges to plant disease recognition and that there remains large room for improvement in future work.
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: We propose a novel benchmark plant disease image dataset comprising multimodal content. The dataset includes in-the-wild images collected from diverse Internet sources. Beyond images, we also provide descriptive textual prompts for various diseases, allowing researchers to explore the challenges of disease recognition under real-world conditions.
Our dataset and baseline approach provide a standardized framework for evaluating disease recognition algorithms across visual and textual modalities.
In summary, our proposed benchmark multimodal in-the-wild disease recognition dataset and versatile baseline offer valuable resources for advancing research at the intersection of vision and language.
Supplementary Material: zip
Submission Number: 1189