DECap: Towards Generalized Explicit Caption Editing via Diffusion Mechanism

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Explicit Caption Editing, Image Captioning, Diffusion Model
Abstract: Explicit Caption Editing (ECE) --- refining reference image captions through a sequence of explicit edit operations (e.g., KEEP, DELETE words) --- has attracted significant attention due to its explainable and human-like nature. However, after training with carefully designed reference and ground-truth caption pairs, state-of-the-art ECE models exhibit limited generalization ability beyond the original training data distribution, i.e., they are tailored to refine content details only in in-domain samples but fail to correct errors in out-of-domain samples. To this end, we propose a new Diffusion-based Explicit Caption editing method: DECap. Specifically, we reformulate the ECE task as a denoising process under the diffusion mechanism, and introduce innovative edit-based noising and denoising processes. The noising process eliminates the need for meticulous paired data selection by directly introducing word-level noise (i.e., random words) during model training, learning a diverse distribution over input reference captions. The denoising process involves explicit prediction of edit operations and corresponding content words, refining reference captions through iterative step-wise editing. To further improve inference speed for caption editing, DECap discards the prevalent multi-stage design and directly generates edit operations and content words simultaneously. Extensive experiments demonstrate the strong generalization ability of DECap in various caption editing scenarios. More interestingly, it also shows great potential for improving both the quality and controllability of caption generation.
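To make the abstract's edit-based noising/denoising idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: the operation set (KEEP, DELETE, REPLACE), function names, and the noise ratio are all assumptions based only on the description above.

```python
import random

# Assumed edit-operation set; the paper may use a different or larger set.
EDIT_OPS = ["KEEP", "DELETE", "REPLACE"]

def noise_caption(words, vocab, noise_ratio=0.3, rng=random):
    """Edit-based noising sketch: corrupt a caption by swapping in
    random vocabulary words (word-level noise) with some probability."""
    noised = []
    for w in words:
        if rng.random() < noise_ratio:
            noised.append(rng.choice(vocab))  # inject a random word as noise
        else:
            noised.append(w)
    return noised

def apply_edits(words, ops, contents):
    """One denoising step sketch: apply per-token edit operations and
    their content words in parallel (single-stage, no separate content pass)."""
    out = []
    for w, op, c in zip(words, ops, contents):
        if op == "KEEP":
            out.append(w)
        elif op == "REPLACE":
            out.append(c)
        # DELETE: drop the word entirely
    return out

# Toy example: a model would predict `ops`/`contents`; here they are hand-set.
caption = ["a", "dog", "runs", "on", "grass"]
ops = ["KEEP", "REPLACE", "KEEP", "KEEP", "KEEP"]
contents = [None, "cat", None, None, None]
print(apply_edits(caption, ops, contents))  # ['a', 'cat', 'runs', 'on', 'grass']
```

In the actual method, the denoising step would be iterated, with a trained model predicting the operations and content words at each step; this sketch only shows the data flow of a single edit step.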
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6918