Abstract: Standard segmentation setups are unable to deliver models that can recognize concepts outside the training taxonomy. Open-vocabulary approaches promise to close this gap through language-image pretraining on billions of image-caption pairs. Unfortunately, we observe that this promise is not delivered due to several bottlenecks that have caused performance to plateau for almost two years. This paper proposes novel oracle components that identify and decouple these bottlenecks by taking advantage of ground-truth information. The presented validation experiments deliver important empirical findings that provide deeper insight into the failures of open-vocabulary models and suggest promising directions to unlock future research.