Abstract: This paper proposes a do-it-all neural model of human
hands, named LISA. The model can capture accurate hand
shape and appearance, generalize to arbitrary hand subjects, provide dense surface correspondences, be reconstructed from images in the wild, and can be easily animated. We train LISA by minimizing the shape and appearance losses on a large set of multi-view RGB image sequences annotated with coarse 3D poses of the hand skeleton. For a 3D point in the local hand coordinates, our model
predicts the color and the signed distance with respect to
each hand bone independently, and then combines the per-bone predictions using the predicted skinning weights. The
shape, color, and pose representations are disentangled by
design, enabling fine control over the selected hand parameters. We experimentally demonstrate that LISA can accurately reconstruct a dynamic hand from monocular or
multi-view sequences, achieving noticeably higher-quality hand shape reconstructions than baseline approaches. Project page: https://www.iri.upc.edu/people/ecorona/lisa/.
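To make the per-bone composition described in the abstract concrete, below is a minimal sketch, not the authors' implementation: each bone is assumed to have its own network that predicts a signed distance and a color for a point expressed in that bone's local frame, and a predicted skinning-weight field blends the per-bone outputs. The placeholder networks, the softmax-style skinning weights, and names such as `NUM_BONES` and `per_bone_prediction` are all illustrative assumptions.

```python
# Minimal sketch of blending per-bone SDF/color predictions with
# skinning weights. Placeholder functions stand in for the learned
# networks; only the composition logic reflects the abstract.

import numpy as np

NUM_BONES = 16  # hypothetical number of hand bones


def per_bone_prediction(x_local, bone_id):
    """Stand-in for bone `bone_id`'s network: (sdf, rgb) at x_local."""
    # Placeholder geometry: a small sphere around each bone origin.
    sdf = np.linalg.norm(x_local) - 0.02
    rgb = np.full(3, 0.5)
    return sdf, rgb


def skinning_weights(x, bone_transforms):
    """Stand-in for the predicted skinning-weight field: a softmax over
    negative distances to the bone origins (weights sum to 1)."""
    d = np.array([np.linalg.norm(x - T[:3, 3]) for T in bone_transforms])
    w = np.exp(-d / 0.01)
    return w / w.sum()


def query(x, bone_transforms):
    """Blend per-bone SDF and color predictions using skinning weights."""
    w = skinning_weights(x, bone_transforms)
    sdf, rgb = 0.0, np.zeros(3)
    for b, T in enumerate(bone_transforms):
        # Transform the query point into bone b's local coordinates.
        x_local = (np.linalg.inv(T) @ np.append(x, 1.0))[:3]
        d_b, c_b = per_bone_prediction(x_local, b)
        sdf += w[b] * d_b
        rgb += w[b] * c_b
    return sdf, rgb


# Usage: query a point under identity bone transforms (rest pose).
transforms = [np.eye(4) for _ in range(NUM_BONES)]
print(query(np.array([0.01, 0.0, 0.0]), transforms))
```

Because the blend is a weighted sum over bones, changing the bone transforms re-poses the shape while the per-bone networks stay fixed, which is one way to read the abstract's claim that pose is disentangled from shape and color by design.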