TL;DR: The Fréchet distance between training and test distributions correlates with the change in performance for functions that are not invariant to the shift.
Abstract: Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets. However, when the training and test distributions differ, this distribution shift can have a significant effect. With a novel perspective on function transfer learning, we are able to lower-bound the change in performance when transferring from training to test set by the Wasserstein distance between the embedded training and test distributions. We find a trade-off affecting performance between how invariant a function is to a shift between the training and test distributions and how large this shift is. Empirically, across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance between the training and test data distributions. Complementing the popular belief that more data is always better, our results highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes.
Keywords: Generalization, Transfer learning, Fréchet distance, Optimal transport, Domain adaptation, Distribution shift, Invariance
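The bound in the abstract is stated via the Wasserstein distance between embedded training and test distributions; when both embedding sets are modeled as Gaussians, this reduces to the Fréchet distance named in the TL;DR and keywords. Below is a minimal, FID-style sketch of that quantity, not the authors' code: the embeddings are assumed to come from some fixed feature extractor, and all names, shapes, and the simulated data are illustrative only.

```python
# Sketch: Fréchet (squared 2-Wasserstein) distance between Gaussian fits
# of embedded training and test sets. Assumes embeddings of shape
# (n_samples, dim) produced by any fixed feature extractor.
import numpy as np
from scipy.linalg import sqrtm


def frechet_distance(train_emb: np.ndarray, test_emb: np.ndarray) -> float:
    """||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2}) for Gaussian fits."""
    mu1, mu2 = train_emb.mean(axis=0), test_emb.mean(axis=0)
    cov1 = np.cov(train_emb, rowvar=False)
    cov2 = np.cov(test_emb, rowvar=False)
    covmean = sqrtm(cov1 @ cov2)
    # sqrtm may return tiny imaginary components from numerical error.
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * covmean))


# Hypothetical usage: embed training and (shifted) test data, compute the
# distance, and compare it against the observed change in test performance.
rng = np.random.default_rng(0)
train_emb = rng.normal(0.0, 1.0, size=(1000, 64))
test_emb = rng.normal(0.5, 1.2, size=(1000, 64))  # simulated distribution shift
print(frechet_distance(train_emb, test_emb))
```

In practice, one would repeat this for several test sets at varying degrees of shift and check whether the distance tracks the drop in test performance, as the abstract claims for functions that are not invariant to the shift.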