Abstract: Research in machine learning on domain adaptation has led to numerous methods for re-purposing high-performance pre-trained models for novel tasks, e.g., by fine-tuning a model with out-of-domain training data. When model weights are unavailable or otherwise fixed, there are fewer options for exploiting a model's predictive power. In this paper we investigate whether the predictions of ensembles of fixed, pre-trained, out-of-domain image classification models can be used to improve the performance of an in-domain classifier, or to replace it outright with comparable performance. Our approach involves computing the conditional probabilities from the confusion matrices of out-of-domain predictions for in-domain training samples, then combining this information with prior probabilities and classification confidence using probability-ordered logical abduction, Etcetera Abduction, to select the most likely label for an in-domain test sample. We evaluate this approach using four image classification models in highly disparate domains. Results indicate that this method may be well-suited to applications where insufficient training data is available to train an accurate model on a novel task.
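To make the label-selection step concrete, the following is a minimal sketch of one way the combination described above could work, assuming conditional independence of the out-of-domain models' predictions given the true label (a naive-Bayes-style approximation, not the paper's full Etcetera Abduction procedure). The function name `predict_label` and the add-one smoothing are illustrative choices, not drawn from the paper.

```python
import numpy as np

def predict_label(confusions, priors, observed):
    """Select the most likely in-domain label for a test sample,
    given each out-of-domain model's prediction for it.

    confusions : list of (K, C_m) count matrices, one per model, where
                 confusions[m][i, j] counts in-domain training samples
                 with true label i that model m labeled j
    priors     : length-K array of in-domain class prior probabilities
    observed   : list of predicted class indices, one per model
    """
    log_scores = np.log(priors)
    for conf, pred in zip(confusions, observed):
        # Estimate P(model predicts `pred` | true label) from the
        # confusion matrix, with add-one smoothing for unseen pairs.
        likelihood = (conf[:, pred] + 1) / (conf.sum(axis=1) + conf.shape[1])
        log_scores += np.log(likelihood)
    return int(np.argmax(log_scores))

# Hypothetical example: two out-of-domain models, three in-domain classes.
conf_a = np.array([[8, 1, 1], [0, 9, 1], [2, 2, 6]])
conf_b = np.array([[7, 3, 0], [1, 8, 1], [0, 1, 9]])
priors = np.array([0.5, 0.3, 0.2])
print(predict_label([conf_a, conf_b], priors, observed=[0, 0]))  # -> 0
```

Working in log space avoids numerical underflow when many models are combined; the abduction framework in the paper additionally orders competing explanations by probability rather than simply taking an argmax.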