Learning Morphological Representations of Image Transformations: Influence of Initialization and Layer Differentiability

Mihaela Dimitrova, Samy Blusseau, Santiago Velasco-Forero

Published: 01 Jan 2026, Last Modified: 07 Nov 2025, License: CC BY-SA 4.0
Abstract: As a combination of two successful paradigms in image processing, namely Mathematical Morphology and Deep Learning, Morphological Networks seem very promising for image analysis. However, their practical applications remain constrained by possible limitations of expressivity and by difficulties with gradient-descent-based optimization. In this paper, we focus on neural architectures inspired by the morphological representation theory, which guarantees their expressivity. We investigate their optimization difficulties and find that, rather than the non-differentiability of morphological layers, it is the sparsity of their gradients (or subgradients) that limits them most, and that this calls for an appropriate initialization. Furthermore, we propose a method to reduce the number of model parameters after training, which produces a minimal equivalent operator.
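To make the gradient-sparsity point concrete, here is a minimal numpy sketch (not the paper's implementation) of a max-plus "dilation" layer y_j = max_i (x_i + w[i, j]). Its subgradient with respect to the weights is one-hot per output: only the argmax entry receives signal, so most weights get zero gradient at each step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D morphological dilation layer: y_j = max_i (x_i + w[i, j]).
x = rng.normal(size=5)          # input, 5 features
w = rng.normal(size=(5, 3))     # weights, 3 outputs

s = x[:, None] + w              # shape (5, 3): all candidate sums
y = s.max(axis=0)               # dilation output, shape (3,)
winners = s.argmax(axis=0)      # winning input index per output

# Subgradient of sum(y) w.r.t. w: indicator of the argmax positions.
# Only one entry per column is nonzero -- the gradient is highly sparse.
grad_w = np.zeros_like(w)
grad_w[winners, np.arange(3)] = 1.0

sparsity = 1.0 - grad_w.mean()  # fraction of zero entries: (15 - 3) / 15 = 0.8
```

With 5 inputs and 3 outputs, 12 of the 15 weights receive no gradient at this point, which is the kind of sparsity the abstract argues makes initialization critical.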