Unsupervised Volumetric Displacement Fields Using Cost Function Unrolling

Published: 2021, Last Modified: 25 Jan 2026 · MIDOG/MOOD/Learn2Reg@MICCAI 2021 · CC BY-SA 4.0
Abstract: Steepest descent algorithms, which are commonly used in deep learning, use the gradient as the descent direction, either as-is or after a direction shift via preconditioning. In many scenarios, computing the gradient is numerically hard due to complex or non-differentiable cost functions, especially near singular points. In this work, we focus on differentiating the Total Variation (TV) regularizer commonly used in unsupervised displacement field cost functions. Specifically, we derive a differentiable proxy to the hard \(L^1\) smoothness constraint in an iterative scheme, which we refer to as Cost Unrolling. We show that our unrolled cost function yields more accurate gradients in regions where the gradients are hard to evaluate or even undefined, without increasing the complexity of the original model. We demonstrate the effectiveness of our method on synthetic tests, as well as on the task of unsupervised learning of displacement fields between corresponding 3D CT lung scans. We report improved results compared to standard TV in all tested scenarios, achieved without modifying the model architecture but simply by improving the gradients during training.
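To make the idea concrete, below is a minimal PyTorch sketch of one way to realize such a differentiable proxy: a few iterations of an ADMM-style splitting of the TV term are unrolled, so the non-smooth \(L^1\) part is handled by soft-thresholding and only a quadratic coupling term is differentiated. This is an illustrative assumption of the construction, not the authors' exact algorithm; the function names, iteration count, and penalty parameters (`rho`, `lam`) are all hypothetical.

```python
import torch

def spatial_gradients(u):
    """Forward differences of a displacement field u of shape (B, 3, D, H, W)."""
    dz = u[:, :, 1:, :, :] - u[:, :, :-1, :, :]
    dy = u[:, :, :, 1:, :] - u[:, :, :, :-1, :]
    dx = u[:, :, :, :, 1:] - u[:, :, :, :, :-1]
    return dz, dy, dx

def soft_threshold(x, tau):
    """Soft-thresholding (shrinkage): the proximal operator of the L1 norm."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - tau, min=0.0)

def unrolled_tv(u, num_iters=3, rho=1.0, lam=1.0):
    """Differentiable proxy for the anisotropic TV penalty lam * ||grad u||_1.

    A few ADMM iterations are unrolled: an auxiliary variable z absorbs the
    non-smooth L1 term via soft-thresholding, so the term autograd actually
    differentiates is quadratic in the displacement gradients, and its
    gradient stays well defined even where |grad u| = 0.
    """
    loss = 0.0
    for g in spatial_gradients(u):
        z = torch.zeros_like(g)  # auxiliary (split) variable
        b = torch.zeros_like(g)  # scaled dual variable
        for _ in range(num_iters):
            z = soft_threshold(g.detach() + b, lam / rho)  # prox step on z
            b = b + g.detach() - z                         # dual update
        # Quadratic coupling term: smooth in u, approximates the L1 penalty.
        loss = loss + 0.5 * rho * ((g - z + b) ** 2).sum()
    return loss

# Usage: gradients remain well behaved even for a perfectly smooth field,
# where the true L1 subgradient is undefined.
u = torch.zeros(1, 3, 8, 8, 8, requires_grad=True)
unrolled_tv(u).backward()
print(u.grad.abs().max())  # finite (zero here, with no spurious subgradient)
```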