Keywords: few-class, efficiency, neural network, convolutional neural network, transformer, dataset difficulty measurement, scaling law
TL;DR: A benchmark tool for conducting large-scale experiments, offering novel insights into the importance of dataset difficulty and an in-depth analysis of scaling laws in the few-class arena.
Abstract: A wide variety of benchmark datasets with many classes (80-1000) have been created to assist the architectural evolution of Computer Vision models, and an increasing number of vision models are evaluated on these many-class datasets. However, real-world applications often involve substantially fewer classes of interest (2-10). This gap between many and few classes makes it difficult to predict performance in few-class applications using the available many-class datasets. To date, little has been offered to evaluate models in this Few-Class Regime. We propose Few-Class Arena (FCA), a unified benchmark focused on testing efficient image classification models for few classes. We conduct a systematic evaluation of the ResNet family trained on ImageNet subsets ranging from 2 to 1000 classes, and test a wide spectrum of Convolutional Neural Network and Transformer architectures over ten datasets using our newly proposed FCA tool. Furthermore, to aid an up-front assessment of dataset difficulty and a more efficient selection of models, we incorporate a difficulty measure defined as a function of class similarity. FCA offers a new tool for efficient machine learning in the Few-Class Regime, with goals ranging from proposing new efficient similarity measures and designing lightweight model architectures to discovering new scaling laws. FCA is user-friendly and can be easily extended to new models and datasets, facilitating future research. Our benchmark is available at https://github.com/fewclassarena/fca.
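The abstract describes a difficulty measure defined as a function of class similarity. As an illustration only, not the paper's actual metric, the sketch below scores a dataset's difficulty by the average pairwise cosine similarity of class-mean feature embeddings; the function name, the choice of centroids, and the aggregation are all assumptions.

```python
# Illustrative sketch only: a simple class-similarity-based difficulty score.
# This is NOT the FCA metric; names and the aggregation choice are assumptions.
import numpy as np

def class_similarity_difficulty(features: np.ndarray, labels: np.ndarray) -> float:
    """Estimate dataset difficulty from the similarity of class centroids.

    features: (N, D) array of per-image embeddings (e.g., from a pretrained backbone).
    labels:   (N,) array of integer class labels.
    Returns the mean off-diagonal cosine similarity between class centroids
    (in [-1, 1]); a higher score suggests classes are harder to separate.
    """
    classes = np.unique(labels)
    # Mean embedding (centroid) per class, L2-normalized.
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

    # Pairwise cosine similarities between class centroids.
    sim = centroids @ centroids.T
    # Average over off-diagonal entries (exclude self-similarity).
    off_diag = sim[~np.eye(len(classes), dtype=bool)]
    return float(off_diag.mean())

if __name__ == "__main__":
    # Toy example with random embeddings for a hypothetical 5-class subset.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(500, 128))
    labs = rng.integers(0, 5, size=500)
    print(class_similarity_difficulty(feats, labs))
```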
Supplementary Material: zip
Submission Number: 14