Bridging the Gap: Sketch-Aware Interpolation Network for High-Quality Animation Sketch Inbetweening

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM2024 Poster · CC BY 4.0
Abstract: The hand-drawn 2D animation workflow typically begins with the creation of sketch keyframes, after which inbetween frames are drawn manually to ensure smooth motion. Because this process is labor-intensive, automatic interpolation of animation sketches has become highly appealing. However, common frame interpolation methods are generally hindered by two key issues: 1) limited texture and colour details in sketches, and 2) exaggerated changes between two sketch keyframes. To overcome these issues, we propose a novel deep learning method, the Sketch-Aware Interpolation Network (SAIN). This approach incorporates multi-level guidance that formulates region-level correspondence, stroke-level correspondence, and pixel-level dynamics. A multi-stream U-Transformer is then devised to characterize sketch inbetweening patterns from these multi-level guides through the integration of self-/cross-attention mechanisms. Additionally, to facilitate future research on animation sketch inbetweening, we construct a large-scale dataset, STD-12K, comprising 30 sketch animation series in diverse artistic styles. Comprehensive experiments on this dataset show that SAIN surpasses state-of-the-art interpolation methods. Our code and dataset will be made publicly available.
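To make the architecture description above more concrete, the following is a minimal, hypothetical sketch of how a multi-stream block might fuse region-, stroke-, and pixel-level guidance features with self- and cross-attention. It is not the authors' implementation; all class names, parameters, and design choices (PyTorch, pre-norm residual attention, feature dimensions) are illustrative assumptions.

```python
# Illustrative sketch only: a minimal multi-stream fusion block combining a
# pixel-level dynamics stream with stroke- and region-level guidance streams
# via self/cross-attention. Names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class MultiStreamFusionBlock(nn.Module):
    """Fuses a main sketch-feature stream with two guidance streams."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn_stroke = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn_region = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, pixel_feat, stroke_feat, region_feat):
        # Self-attention over the pixel-level (dynamics) stream.
        x = pixel_feat
        x = x + self.self_attn(self.norm1(x), self.norm1(x), self.norm1(x))[0]
        # Cross-attention: pixel stream queries stroke-level correspondence features.
        x = x + self.cross_attn_stroke(self.norm2(x), stroke_feat, stroke_feat)[0]
        # Cross-attention: pixel stream queries region-level correspondence features.
        x = x + self.cross_attn_region(self.norm3(x), region_feat, region_feat)[0]
        # Position-wise feed-forward refinement.
        return x + self.ffn(x)


if __name__ == "__main__":
    B, N, D = 2, 1024, 256  # batch, tokens (flattened spatial positions), channels
    block = MultiStreamFusionBlock(dim=D)
    out = block(torch.randn(B, N, D), torch.randn(B, N, D), torch.randn(B, N, D))
    print(out.shape)  # torch.Size([2, 1024, 256])
```

In such a design, the guidance streams act as keys/values while the interpolated-frame features act as queries, so correspondence cues can steer where content is synthesized; the actual SAIN model may organize its streams and attention differently.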
Primary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: This study investigates a critical task within the animation production industry: sketch-based, hand-drawn animation interpolation. A novel deep learning framework Sketch-Aware Interpolation Network (SAIN) is proposed for this purpose. Moreover, we constructed a large-scale dataset: STD-12K, comprising 30 sketch animation series in diverse artistic styles to advance the field of sketch animations by enhancing both their capabilities and development. We believe that this work is well-suited for ACM MM 2024 and will greatly interest readers, particularly those engaged in the understanding and creation of animations.
Supplementary Material: zip
Submission Number: 2754