Abstract: Diffusion-based models for story visualization have shown promise in generating content-coherent images for storytelling tasks. However, effectively integrating new characters into existing narratives while maintaining character consistency remains an open problem, particularly with limited data. Two major limitations hinder progress: (1) the absence of a suitable benchmark, due to potential character leakage and inconsistent text labeling, and (2) the difficulty of distinguishing between new and old characters, which leads to ambiguous results. To address these challenges, we introduce the NewEpisode benchmark, comprising refined datasets designed to evaluate a generative model's adaptability in generating new stories with fresh characters using just a single example story. These datasets feature refined text prompts and eliminate character leakage. Additionally, to mitigate character confusion in the generated results, we propose EpicEvo, a method that customizes a diffusion-based visual story generation model with a single story featuring the new characters, seamlessly integrating them into established character dynamics. EpicEvo introduces a novel adversarial character alignment module that progressively aligns the generated images with exemplar images of new characters during the diffusion process, while applying knowledge distillation to prevent forgetting of existing characters and background details. Our evaluation quantitatively demonstrates that EpicEvo outperforms existing baselines on the NewEpisode benchmark, and qualitative studies confirm its superior customization of visual story generation in diffusion models.
In summary, EpicEvo provides an effective way to incorporate new characters using only one example story, unlocking new possibilities for applications such as serialized cartoons.
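To make the training objective described above concrete, the following is a minimal toy sketch of how a denoising loss, an adversarial character-alignment term, and a knowledge-distillation term could be combined into one objective. All tensor names, the non-saturating GAN form of the adversarial term, and the loss weights are our own illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy arrays standing in for model outputs (names are
# illustrative): noise predictions from the customized model, the
# ground-truth diffusion noise, the frozen pretrained teacher, and a
# discriminator score for character alignment.
eps_pred = rng.normal(size=(4, 8))         # customized model's noise prediction
eps_true = rng.normal(size=(4, 8))         # ground-truth diffusion noise
eps_teacher = rng.normal(size=(4, 8))      # frozen pretrained model's prediction
disc_fake = rng.uniform(0.1, 0.9, size=4)  # discriminator score on generated latents

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

# Standard diffusion denoising loss.
l_diffusion = mse(eps_pred, eps_true)

# Adversarial alignment term: the generator is rewarded when the
# character-alignment discriminator scores its outputs as real
# (non-saturating GAN loss, assumed here for illustration).
l_adv = float(np.mean(-np.log(disc_fake)))

# Knowledge-distillation term: stay close to the frozen pretrained
# model to avoid forgetting old characters and background details.
l_kd = mse(eps_pred, eps_teacher)

# Loss weights are placeholders, not values from the paper.
lambda_adv, lambda_kd = 0.1, 1.0
total = l_diffusion + lambda_adv * l_adv + lambda_kd * l_kd
print(total)
```

The key design point sketched here is that the distillation term anchors the customized model to its pretrained behavior, so the adversarial term can pull generations toward the new character's exemplars without erasing existing characters.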
Primary Subject Area: [Generation] Generative Multimedia
Secondary Subject Area: [Generation] Multimedia Foundation Models
Relevance To Conference: Our work on the EpicEvo method is highly relevant and timely for the ACM Multimedia (ACM MM) conference. ACM MM is a premier venue for research on multimedia, including novel approaches to multimedia content creation and analysis. EpicEvo directly addresses a critical challenge in multimedia storytelling: integrating new characters into existing narratives with limited training samples. This challenge aligns with the conference's focus on innovative multimedia techniques, especially in the context of story visualization and character coherence. Presenting EpicEvo at ACM MM will provide a platform to discuss its implications for future multimedia research, especially the development of more adaptable and narrative-coherent content generation models. Our work is therefore both pertinent and contributory to the ACM MM community's ongoing exploration of cutting-edge multimedia technologies.
Supplementary Material: zip
Submission Number: 3628