Neural Conditional Transport Maps

TMLR Paper5962 Authors

22 Sept 2025 (modified: 10 Mar 2026) · Decision pending for TMLR · CC BY 4.0
Abstract: We present a neural framework for learning conditional optimal transport (OT) maps between probability distributions. Conditional OT maps are transformations that adapt based on auxiliary variables, such as labels, time indices, or other parameters; they are essential for applications ranging from generative modeling to uncertainty quantification of black-box models. However, existing methods for learning conditional OT maps face significant limitations: input convex neural networks (ICNNs) impose severe architectural constraints that limit expressivity, while simpler conditioning strategies, such as concatenation, fail to model fundamentally different transport behaviors across conditions. Our approach introduces a conditioning mechanism capable of simultaneously processing both categorical and continuous conditioning variables, using learnable embeddings and positional encoding. At the core of our method lies a hypernetwork that generates transport layer parameters from these inputs, creating adaptive mappings that outperform simpler conditioning methods. We showcase the framework's practical impact through applications to global sensitivity analysis, enabling efficient computation of OT-based sensitivity indices for complex black-box models. This work advances the state of the art in conditional optimal transport, enabling broader application of optimal transport principles to complex, high-dimensional domains such as generative modeling, black-box model explainability, and scientific computing.
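To make the conditioning mechanism described above concrete, the following is a minimal, illustrative sketch of the general pattern: a categorical label is looked up in a learnable embedding table, a continuous conditioning scalar is passed through a sinusoidal positional encoding, and a small hypernetwork maps the joint conditioning vector to the weights and bias of one transport layer. All names, dimensions, and the use of untrained numpy weights are assumptions for illustration; this is not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sinusoidal_encoding(t, dim=8):
    """Fixed sinusoidal encoding of a continuous conditioning scalar t."""
    freqs = 2.0 ** np.arange(dim // 2)          # geometric frequency ladder
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

class HyperTransportLayer:
    """Illustrative hypernetwork-conditioned transport layer.

    A categorical condition (via a learnable embedding table) and a
    continuous condition (via sinusoidal encoding) are concatenated,
    and a small MLP hypernetwork emits the (W, b) parameters of a
    condition-dependent affine transport step.
    """
    def __init__(self, n_classes, x_dim, embed_dim=16, enc_dim=8, hidden=32):
        # In a real model these would all be trained parameters.
        self.class_emb = rng.normal(0.0, 0.1, (n_classes, embed_dim))
        cond_dim = embed_dim + enc_dim
        self.H1 = rng.normal(0.0, 0.1, (cond_dim, hidden))
        self.H2 = rng.normal(0.0, 0.1, (hidden, x_dim * x_dim + x_dim))
        self.x_dim = x_dim

    def __call__(self, x, label, t):
        # Joint conditioning vector from categorical and continuous inputs.
        cond = np.concatenate([self.class_emb[label], sinusoidal_encoding(t)])
        h = np.tanh(cond @ self.H1)
        params = h @ self.H2                     # hypernetwork output
        W = params[: self.x_dim ** 2].reshape(self.x_dim, self.x_dim)
        b = params[self.x_dim ** 2:]
        return x @ W.T + b                       # condition-dependent map
```

The key design point, in contrast to plain concatenation, is that the conditioning variables parameterize the map itself rather than merely extending its input, so different conditions can induce qualitatively different transport behaviors.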
Submission Type: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Dear Action Editor,

Thank you for the constructive feedback and the recommendation of Accept with minor revision. The specific guidance on (i) more explicit acknowledgement of prior conditional methods and (ii) explicit comparisons to CondOT was very helpful in strengthening the manuscript. We have revised accordingly; below we summarize the changes with pointers to specific sections.

1) Clearer positioning and acknowledgement of prior conditional OT methods.
Section 2.3 (Conditional Transport Challenges): Expanded to explicitly discuss the contributions and design choices of prior conditional and amortized OT approaches, including CondOT and Wang et al. (2025), and to clarify differences in assumptions and outputs (map vs. plan/dual; discrete vs. continuous conditioning; ICNN-constrained vs. unconstrained parameterizations). We aimed to present a balanced view of where these methods succeed and where our setting requires different choices.
Section 1 (Introduction): Adjusted wording to make our scope explicit: our contribution centers on an effective conditional instantiation (conditioning mechanism + training recipe) and a systematic empirical study of conditioning design and training difficulty, building on the existing NOT framework rather than proposing a new OT objective.

2) Explicit CondOT comparisons.
Section 4.4 (Comparisons with CondOT): Added a dedicated baseline subsection describing how CondOT is applied in our setting and why mixed discrete/continuous conditioning requires adaptation. We report two variants: CondOT Default (original architecture and optimization with tokenized factorized conditioning) and CondOT Adapted (replacing discretized conditioning with mixed one-hot categorical and normalized continuous scalars). Both variants use matched parameter counts and identical training budgets to isolate the effects of architecture design and optimization choices.
Table 1 and Section 5.1.2 (Comparisons with previous work): CondOT results now appear alongside all ablation variants, with discussion of the outcomes. CondOT is excluded from the MNIST comparison since its original formulation does not support convolutional architectures; we note this limitation explicitly.

3) Other changes.
Abstract and Introduction: Revised phrasing to ensure that claims about performance and novelty are grounded in the experimental evidence and ablation results, with clearer emphasis on our practical findings regarding conditional OT training stability.
Code and data release: We have included a link to our open-source repository where code and data will be hosted.
Improved clarity of writing in Sections 1-2 and updated references.

These revisions directly address both requested changes and make the comparisons and positioning unambiguous. Thank you for your time and guidance throughout this process.
Code: https://github.com/crp94/tmlr-neural-conditional-ot
Assigned Action Editor: ~Emmanuel_Bengio1
Submission Number: 5962