MVPbev: Multi-view Perspective Image Generation from BEV with Test-time Controllability and Generalizability

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM 2024 Poster · CC BY 4.0
Abstract: This work addresses multi-view perspective RGB image generation from text prompts given Bird's-Eye-View (BEV) semantics. Unlike prior methods that neglect layout consistency, cannot handle detailed text prompts, or are incapable of generalizing to unseen viewpoints, MVPbev simultaneously generates cross-view consistent images of different perspective views with a two-stage design, allowing object-level control and novel view generation at test time. Specifically, MVPbev first projects the given BEV semantics to perspective views using camera parameters, enabling the model to generalize to unseen viewpoints. We then introduce a multi-view attention module in which special initialization and denoising processes explicitly enforce local consistency among overlapping views w.r.t. cross-view homography. Last but not least, MVPbev allows test-time instance-level controllability by refining a pre-trained text-to-image diffusion model. Our extensive experiments on NuScenes demonstrate that our method generates high-resolution photorealistic images from text descriptions with thousands of training samples, surpassing state-of-the-art methods under various evaluation metrics. We further demonstrate the advantages of our method in terms of generalizability and controllability with the help of novel evaluation metrics and comprehensive human analysis. Our code and model will be made available.
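The first stage described in the abstract (projecting BEV semantics to each perspective view, and the plane-induced homography tying overlapping views together) is classical multi-view geometry. Below is a minimal NumPy sketch of both operations under stated assumptions: a pinhole model with intrinsics K and extrinsics (R, t) taking ego coordinates to camera coordinates, semantics restricted to the ground plane z = 0, and a nearest-neighbour lookup. The function names and conventions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ground_to_image_homography(K, R, t):
    """Homography mapping homogeneous ground-plane points (x, y, 1), i.e.
    ego-frame points with z = 0, to homogeneous pixels. Assumes extrinsics
    map ego to camera coordinates: p_cam = R @ p_ego + t."""
    # For z = 0 the third column of R drops out of K @ [R | t].
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

def project_bev_semantics(bev, K, R, t, out_hw, origin_xy, res):
    """Warp a BEV semantic class map onto one perspective view via inverse
    perspective mapping (nearest-neighbour lookup on the ground plane).
    bev: (H, W) integer class map; origin_xy: ego-frame metres of cell
    (0, 0); res: metres per cell; out_hw: (height, width) of the output."""
    H_out, W_out = out_hw
    G_inv = np.linalg.inv(ground_to_image_homography(K, R, t))
    u, v = np.meshgrid(np.arange(W_out), np.arange(H_out))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    gnd = G_inv @ pix                        # back-project pixels to z = 0
    w = gnd[2]
    safe = np.abs(w) > 1e-6
    x = gnd[0] / np.where(safe, w, 1.0)
    y = gnd[1] / np.where(safe, w, 1.0)
    # Keep only ground points that actually lie in front of the camera.
    depth = (R @ np.vstack([x, y, np.zeros_like(x)]) + t.reshape(3, 1))[2]
    col = np.floor((x - origin_xy[0]) / res).astype(np.int64)
    row = np.floor((y - origin_xy[1]) / res).astype(np.int64)
    ok = (safe & (depth > 0) & (row >= 0) & (row < bev.shape[0])
               & (col >= 0) & (col < bev.shape[1]))
    out = np.zeros(H_out * W_out, dtype=bev.dtype)   # 0 = void / sky
    out[ok] = bev[row[ok], col[ok]]
    return out.reshape(H_out, W_out)

def cross_view_homography(K1, R1, t1, K2, R2, t2):
    """Plane-induced homography taking pixels of view 1 to pixels of view 2
    for ground-plane content; the kind of mapping under which overlapping
    views can be checked and enforced for local consistency."""
    G1 = ground_to_image_homography(K1, R1, t1)
    G2 = ground_to_image_homography(K2, R2, t2)
    return G2 @ np.linalg.inv(G1)
```

In the paper, the analogous cross-view homography is what the multi-view attention module's initialization and denoising are designed to respect in overlapping regions; the sketch only demonstrates the underlying geometric relation.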
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: By combining BEV semantics and text descriptions, this work bridges the gap between visual and textual information and enables the creation of richer multimedia content by fusing visual representations with descriptive language. Specifically, this research facilitates multi-view perspective RGB generation from BEV and text prompts, providing effective and diverse controllability (e.g., scene-level and instance-level text prompts). The novel methods proposed in this work enhance multimodal scene understanding by leveraging both BEV and textual information, yielding favorable results.
Supplementary Material: zip
Submission Number: 1813