Abstract: Reconstructing photorealistic and dynamic portrait avatars
from images is essential to many applications including
advertising, visual effects, and virtual reality. Depending
on the application, avatar reconstruction involves different
capture setups and constraints—for example, visual effects
studios use camera arrays to capture hundreds of reference
images, while content creators may seek to animate a single portrait image downloaded from the internet. As such,
there is a large and heterogeneous ecosystem of methods
for avatar reconstruction. Techniques based on multi-view
stereo or neural rendering achieve the highest-quality results, but require hundreds of reference images. Recent generative models produce convincing avatars from a single
reference image, but their visual fidelity still lags behind that of multi-view techniques. Here, we present CAP4D: an approach
that uses a morphable multi-view diffusion model to reconstruct photoreal 4D (dynamic 3D) portrait avatars from any
number of reference images (i.e., one to 100) and animate
and render them in real time. Our approach demonstrates
state-of-the-art performance for single-, few-, and multi-image 4D portrait avatar reconstruction, and takes steps to
bridge the gap in visual fidelity between single-image and
multi-view reconstruction techniques.