Keywords: discrete activation space, space folding measure, graph theory
Abstract: Understanding the internal geometry of neural network representations remains an open challenge in deep learning research. Recent work introduced a space folding measure that quantifies how convex regions in input space map to non-convex, folded structures in activation space, computed from straight-path induced walks over binarized activation patterns. This paper links that space folding measure to the classical Motzkin–Straus theorem, which relates the maximum of a quadratic form over the probability simplex to the clique number of a graph, via the graph Lagrangian of an interval graph constructed from such walks. This connection recasts a discrete, path-based geometric statistic as a continuous quadratic objective, suggesting that space folding can serve as a differentiable regularization term in gradient-based training, guiding networks toward more compact internal representations.
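The Motzkin–Straus identity underlying the abstract states that the graph Lagrangian L(G) = max over the simplex of the sum of x_i x_j over edges equals (1/2)(1 − 1/ω(G)), where ω(G) is the clique number. The sketch below is illustrative only (the graph and all function names are hypothetical, not from the paper): it verifies the identity numerically on a small graph.

```python
from itertools import combinations
import random

def lagrangian_value(edges, x):
    # Quadratic form: sum of x_i * x_j over the edges of the graph
    return sum(x[i] * x[j] for i, j in edges)

def clique_number(n, edges):
    # Brute-force clique number omega(G), fine for tiny graphs
    eset = set(map(frozenset, edges))
    for k in range(n, 0, -1):
        for nodes in combinations(range(n), k):
            if all(frozenset(p) in eset for p in combinations(nodes, 2)):
                return k
    return 0

# Hypothetical example graph: a triangle {0,1,2} plus a pendant vertex 3
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

omega = clique_number(n, edges)       # omega = 3
ms_bound = 0.5 * (1 - 1 / omega)      # Motzkin-Straus maximum: (1/2)(1 - 1/omega)

# Uniform weights on a maximum clique attain the bound exactly
x_clique = [1 / 3, 1 / 3, 1 / 3, 0.0]
assert abs(lagrangian_value(edges, x_clique) - ms_bound) < 1e-12

# Random points on the simplex never exceed the bound
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    x = [v / s for v in w]
    assert lagrangian_value(edges, x) <= ms_bound + 1e-12
```

Because the objective is a smooth quadratic in x, it admits gradients, which is the property the abstract exploits when proposing the measure as a differentiable regularizer.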
Submission Number: 95