An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters

Published: 02 Dec 2021, Last Modified: 08 Sept 2024. NeurIPS 2021 Workshop DistShift Poster.
Keywords: CNN, computer vision, distribution shifts
Abstract: We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of the dominantly used 3x3 convolution filter kernels. We collected and publicly provide a data set with over half a billion filters from hundreds of trained CNNs, covering a wide range of data sets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, such as data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models, and for novel methods for more robust transfer learning in this domain. Data available at https://github.com/paulgavrikov/CNN-Filter-DB/
TL;DR: First empirical results on distribution shifts in learned 3x3 convolution filters.
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/an-empirical-investigation-of-model-to-model/code)
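The abstract describes collecting 3x3 convolution filter kernels from trained CNNs for distribution analysis. A minimal sketch of such an extraction in PyTorch, using a toy network as a stand-in (the actual CNN-Filter-DB is built from hundreds of real pretrained checkpoints; the model and function names below are illustrative, not the authors' pipeline):

```python
import torch
import torch.nn as nn

# Toy CNN standing in for a trained model; CNN-Filter-DB extracts
# kernels from real pretrained checkpoints instead.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=1),  # non-3x3 kernels are skipped below
)

def collect_3x3_filters(model: nn.Module) -> torch.Tensor:
    """Gather every 3x3 convolution kernel as one row of an (N, 9) tensor."""
    filters = []
    for module in model.modules():
        if isinstance(module, nn.Conv2d) and module.kernel_size == (3, 3):
            # weight shape: (out_channels, in_channels, 3, 3)
            filters.append(module.weight.detach().reshape(-1, 9))
    return torch.cat(filters, dim=0)

kernels = collect_3x3_filters(model)
print(kernels.shape)  # torch.Size([560, 9]): 16*3 + 32*16 kernels
```

Flattening each kernel to a 9-dimensional vector makes it straightforward to compare filter distributions across models, e.g. along the meta-parameter axes (data type, task, architecture, layer depth) mentioned in the abstract.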