Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · CVPR 2024 · CC BY-SA 4.0
Abstract: Generative models have recently exhibited exceptional capabilities in text-to-image generation, but still struggle to generate image sequences coherently. In this work, we focus on a novel yet challenging task of generating a coherent image sequence based on a given storyline, denoted as open-ended visual storytelling. We make the following three contributions: (i) to fulfill the task of visual storytelling, we propose a learning-based auto-regressive image generation model, termed StoryGen, with a novel vision-language context module that enables generating the current frame by conditioning on the corresponding text prompt and the preceding image-caption pairs; (ii) to address the data shortage of visual storytelling, we collect paired image-text sequences sourced from online videos and open-source E-books, establishing a processing pipeline for constructing a large-scale dataset with diverse characters, storylines, and artistic styles, named StorySalon; (iii) quantitative experiments and human evaluations validate the superiority of our StoryGen, which we show can generalize to unseen characters without any optimization and generate image sequences with coherent content and consistent characters. Code, dataset, and models are available at https://haoningwu3639.github.io/StoryGen_Webpage/.

"Mirror mirror on the wall, who's the fairest of them all?" - Grimms' Fairy Tales
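The auto-regressive conditioning described in contribution (i) can be pictured with a minimal sketch: each new frame is generated from its own caption plus all preceding image-caption pairs, and the generated frame is then fed back as context for the next step. The StoryGenModel class and its generate_frame interface below are hypothetical placeholders for illustration only, not the released StoryGen API.

```python
# Sketch of the auto-regressive visual-storytelling loop: frame t is conditioned
# on prompt t and on all preceding (image, caption) pairs.
# StoryGenModel / generate_frame are hypothetical stand-ins, not the real model.
from dataclasses import dataclass, field
from typing import Any, List, Tuple


@dataclass
class StoryContext:
    """Accumulates preceding image-caption pairs as vision-language context."""
    pairs: List[Tuple[Any, str]] = field(default_factory=list)

    def add(self, image: Any, caption: str) -> None:
        self.pairs.append((image, caption))


class StoryGenModel:
    """Hypothetical stand-in for a conditional latent-diffusion story generator."""

    def generate_frame(self, prompt: str, context: StoryContext) -> Any:
        # A real model would run conditional latent diffusion here, attending to
        # the context pairs through a vision-language context module.
        return f"<frame for '{prompt}' conditioned on {len(context.pairs)} prior pairs>"


def tell_story(model: StoryGenModel, storyline: List[str]) -> List[Any]:
    """Generate an image sequence frame by frame, auto-regressively."""
    context = StoryContext()
    frames: List[Any] = []
    for caption in storyline:
        frame = model.generate_frame(caption, context)
        frames.append(frame)
        context.add(frame, caption)  # feed this frame back as context for the next step
    return frames


if __name__ == "__main__":
    prompts = [
        "A queen gazes into a magic mirror.",
        "The mirror answers that Snow White is the fairest.",
        "The queen's face darkens with envy.",
    ]
    for frame in tell_story(StoryGenModel(), prompts):
        print(frame)
```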