Generalize Neural Network Through Smooth Hypothesis Function

Published: 19 Mar 2024, Last Modified: 17 Jun 2024 · Tiny Papers @ ICLR 2024 Archive · CC BY 4.0
Keywords: Generalization, Hermite Interpolation, Gradient Penalty
TL;DR: We derive Jacobian Regularization from Hermite interpolation to improve neural network generalization.
Abstract: A neural network (NN) searches for a hypothesis function within its search space, and this hypothesis function can be viewed as a continuous interpolating function. However, overfitting commonly arises in interpolation methods that fit function values alone, such as Lagrange interpolation. We develop a geometric intuition for improving extrapolation and derive Jacobian Regularization (JR) from Hermite interpolation, which constrains derivatives in addition to function values. Jacobian Regularization resembles the gradient penalty (GP) used in the Wasserstein GAN, and an experiment verifies the feasibility of our method.
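The following is a minimal sketch, not taken from the paper, of how a Jacobian (input-gradient) penalty of the kind described in the abstract is commonly added to a training loss in PyTorch; the network architecture, toy data, and regularization weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative assumption: a small regression network whose loss is augmented
# with a Jacobian penalty that discourages large input-output sensitivity,
# in the spirit of the gradient penalty used in WGAN-GP.
model = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # hypothetical regularization weight

x = torch.randn(64, 2, requires_grad=True)   # toy inputs
y = (x[:, :1] ** 2 - x[:, 1:]).detach()      # toy targets

for _ in range(100):
    opt.zero_grad()
    pred = model(x)
    fit_loss = nn.functional.mse_loss(pred, y)

    # Jacobian penalty: squared norm of d(pred)/d(x), averaged over the batch.
    # create_graph=True lets the penalty itself be differentiated in backward().
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    jr = grads.pow(2).sum(dim=1).mean()

    loss = fit_loss + lam * jr
    loss.backward()
    opt.step()
```

In this sketch the penalty shrinks the norm of the network's input Jacobian, which smooths the learned hypothesis function; the weight `lam` trades off data fit against smoothness and would be tuned in practice.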
Submission Number: 5