Domain-Agnostic Neural Oil Painting via Normalization Affine Test-Time Adaptation

Qichao Dong, Lingyu Liu, Yaxiong Wang, Jason J.R. Liu, Zhedong Zheng

Published: 27 Oct 2025, Last Modified: 21 Jan 2026. License: CC BY-SA 4.0
Abstract: Neural oil painting synthesis sequentially predicts brushstroke color and position, forming an oil painting step by step; it could serve as a painting teacher for education and entertainment. Existing methods usually generalize poorly to real-world photo inputs due to the training-test distribution gap, which often manifests as stroke-induced artifacts (e.g., over-smoothed textures or inconsistent granularity). To mitigate this gap, we introduce a domain-agnostic neural painting (DANP) framework that aligns the model with the test domain. In particular, we efficiently update only the affine parameters of the normalization layers, keeping all other parameters frozen. To stabilize adaptation, our framework introduces: (1) an Asymmetric Dual-Branch with mirror augmentation for robust feature alignment via geometric transformations, and (2) a Dual-Branch Interaction Loss combining intra-branch reconstruction and inter-branch consistency; we also adopt an empirical optimization strategy to mitigate gradient oscillations in practice. Experiments on real-world images from diverse domains (e.g., faces, landscapes, and artworks) validate the effectiveness of DANP in resolution-invariant adaptation, decreasing reconstruction error by ~11.3% at 512px and ~20.3% at 1024px compared to the baseline model. Notably, our method is compatible with existing methods, e.g., Paint Transformer, and further improves perceptual quality by ~10.3%. Dataset and code will be publicly released at: https://domain-agnostic-neural-oil-painting.github.io/DANP.
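The core mechanism described in the abstract, updating only the affine (scale/shift) parameters of normalization layers at test time while freezing everything else, can be sketched in PyTorch as below. The backbone here is a hypothetical stand-in, not the actual DANP painter network, and the loss/augmentation details of the dual-branch scheme are omitted; this only illustrates the parameter-selection step common to normalization-based test-time adaptation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in backbone; the real DANP painter network differs.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

# Freeze all parameters first.
for p in model.parameters():
    p.requires_grad = False

# Re-enable only the affine parameters (weight = scale, bias = shift)
# of normalization layers; these are the only parameters adapted.
adapt_params = []
for m in model.modules():
    if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
        for p in (m.weight, m.bias):
            if p is not None:
                p.requires_grad = True
                adapt_params.append(p)

# The optimizer sees only the normalization affine parameters.
optimizer = torch.optim.Adam(adapt_params, lr=1e-4)
```

At adaptation time, one would run the test image (and its mirrored copy, per the dual-branch design) through the frozen network and step `optimizer` on the combined reconstruction/consistency loss.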