Track: tiny paper (up to 4 pages)
Keywords: Contractive Autoencoders, Denoising Diffusion models, Generative modeling, Representation learning, Geometry.
Abstract: Contractive Auto-Encoders (CAEs) learn locally contractive representations by penalizing the Frobenius norm of the encoder Jacobian. This work provides a Poisson-based reformulation of the contractive penalty that yields a geometric decomposition of the regularizer. By introducing an auxiliary potential field $v_\phi$, defined as the solution of a Poisson equation whose source is the contractive term, and applying Green's first identity, the expected contractive penalty can be expressed as the sum of a boundary-flux contribution and an interior score--potential coupling term. The latter recovers known connections between regularized autoencoders and the score of the data distribution, while the boundary-flux term motivates an additional mechanism: probing the effective support of the data through out-of-distribution transformations. Inspired by diffusion models, we approximate the boundary term using a corruption operator, which induces both the evaluation points and a normal-like direction. We validate the proposed viewpoint on toy datasets and image data, visualizing $f(x)$, $\|\nabla_x f(x)\|$, the induced potential $v(x)$, and $\nabla v(x)$ under varying perturbation strengths, and we observe that the Poisson potential provides a global summary of contractivity that is sensitive to corruption-driven departures from the data manifold.
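A minimal sketch of the decomposition described in the abstract, assuming the potential $v_\phi$ solves a Poisson equation whose source is the pointwise contractive penalty on a domain $\Omega$ with outward normal $n$ (the domain, boundary conditions, and notation here are illustrative and may differ from the paper's exact construction):
$$
\mathbb{E}_{p(x)}\!\left[\|\nabla_x f(x)\|_F^2\right]
= \int_\Omega p(x)\,\Delta v_\phi(x)\,dx
= \underbrace{\oint_{\partial\Omega} p(x)\,\nabla v_\phi(x)\cdot n\,dS}_{\text{boundary flux}}
\;-\;
\underbrace{\int_\Omega p(x)\,\nabla_x \log p(x)\cdot \nabla v_\phi(x)\,dx}_{\text{score--potential coupling}},
$$
where the second equality is Green's first identity together with $\nabla p(x) = p(x)\,\nabla_x \log p(x)$, which makes explicit the boundary-flux term and the interior coupling to the data score referenced above.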
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 48