Abstract: This study evaluates several CNN models, comparing their performance in recognizing a range of defects in apples and mangoes to support quality control in the production of these fruits. Four CNN architectures (InceptionV3, MobileNetV2, VGG16, and DenseNet121) were trained on a dataset of real and synthetic images of apples and mangoes, covering fruit in acceptable condition as well as fruit with defects: rot, bruises, scabs, and black spots. Training was performed with variations of the hyper-parameters, and accuracy was used as the evaluation metric. MobileNetV2 achieved the highest accuracy in training and testing, reaching 97.50% for apples and 92.50% for mangoes, making it the most suitable model for defect detection in these fruits. InceptionV3 and DenseNet121 also reached accuracy values above 90%, while VGG16 performed worst, not exceeding 80% accuracy for either fruit. The trained models, especially MobileNetV2, are capable o
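The abstract reports accuracy as the single evaluation metric. As a point of reference, the sketch below shows the standard computation (correct predictions over total predictions); the defect labels and sample predictions are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of the accuracy metric (fraction of correct predictions).
# Labels and predictions below are hypothetical examples, not the paper's data.
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    assert len(y_true) == len(y_pred), "prediction/label length mismatch"
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Illustrative defect classes matching those named in the abstract.
y_true = ["fresh", "rot", "bruise", "scab", "fresh", "rot", "bruise", "fresh"]
y_pred = ["fresh", "rot", "bruise", "rot",  "fresh", "rot", "scab",   "fresh"]
print(f"accuracy = {accuracy(y_true, y_pred):.4f}")  # 6 of 8 correct -> 0.7500
```

A reported value such as 97.50% corresponds to this ratio computed over the held-out test images.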
External IDs:dblp:conf/visapp/PachecoGCVV23