EmoFeedback²: Reinforcement of Continuous Emotional Image Generation via LVLM-based Reward and Textual Feedback

16 Sept 2025 (modified: 26 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Emotion Understanding, Continuous Emotion Image Generation, Reinforcement Fine-Tuning, Self-Promotion
TL;DR: We propose EmoFeedback², a novel framework that leverages a fine-tuned vision-language model to provide reward and textual feedback, enabling high-quality continuous emotional image generation.
Abstract: Continuous emotional image content generation (C-EICG) is emerging rapidly due to its ability to produce images aligned with both user descriptions and continuous emotional values. However, existing approaches lack emotional feedback from the generated images, which limits their control over emotional continuity. Moreover, their simple alignment between emotions and naively generated texts cannot adaptively adjust emotional prompts according to the image content, leading to insufficient emotional fidelity. To address these concerns, we propose a novel generation-understanding-feedback reinforcement paradigm (EmoFeedback²) for C-EICG, which exploits the reasoning capability of a fine-tuned large vision-language model (LVLM) to provide reward and textual feedback for generating high-quality images with continuous emotions. Specifically, we introduce an emotion-aware reward feedback strategy, in which the LVLM evaluates the emotional values of generated images and computes a reward against the target emotions, guiding the reinforcement fine-tuning of the generative model and enhancing the emotional continuity of the images. Furthermore, we design a self-promotion textual feedback framework, in which the LVLM iteratively analyzes the emotional content of generated images and adaptively produces refinement suggestions for the next-round prompt, improving emotional fidelity with fine-grained content. Extensive experiments demonstrate that our approach generates high-quality images with the desired emotions, outperforming state-of-the-art methods on our custom dataset. The code and dataset will be released soon.
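
The submission does not include code; the following is a minimal sketch of how the described generation-understanding-feedback loop could be wired together. It assumes a hypothetical `generator` exposing a sampling call and a REINFORCE-style update, and a fine-tuned LVLM exposing `predict_emotion_value` and `suggest_refinement`; all names and signatures are illustrative, not the authors' API.

```python
def emotion_reward(image, target_value, lvlm):
    """Reward: negative distance between the LVLM-predicted continuous
    emotion value of an image and the target emotion value.
    (`predict_emotion_value` is an assumed interface, not a real API.)"""
    predicted = lvlm.predict_emotion_value(image)
    return -abs(predicted - target_value)


def generate_with_feedback(prompt, target_value, generator, lvlm, rounds=3):
    """One generation-understanding-feedback cycle per round:
    generate an image, score it against the target emotion, update the
    generator with the reward, and let the LVLM rewrite the prompt."""
    image = None
    for _ in range(rounds):
        image = generator.sample(prompt)                     # generation
        reward = emotion_reward(image, target_value, lvlm)   # understanding
        generator.reinforce_step(image, reward)              # reward feedback (RL update)
        suggestion = lvlm.suggest_refinement(image, target_value)
        prompt = f"{prompt}. {suggestion}"                   # textual feedback
    return image
```

Under these assumptions, the reward feedback drives the reinforcement fine-tuning of the generative model, while the textual feedback implements the self-promotion loop that adapts the next-round prompt to the emotional content of the latest image.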
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 7588