Inverse Kinematics Embedded Network for Robust Patient Anatomy Avatar Reconstruction From Multimodal Data

Published: 01 Jan 2024, Last Modified: 28 Oct 2024 · IEEE Robotics and Automation Letters 2024 · CC BY-SA 4.0
Abstract: Patient modelling has a wide range of applications in medicine and healthcare, such as clinical teaching, surgical navigation and automated robotic scanning. Because patients are typically covered or occluded in medical scenes, directly regressing human meshes from single RGB images is challenging. To this end, we design a deep learning-based patient anatomy reconstruction network from RGB-D images with three key modules: 1) an attention-based multimodal fusion module, 2) an analytical inverse kinematics module and 3) an anatomical layer module. In our pipeline, the color and depth modalities are fused by the multimodal attention module to obtain a cover-insensitive feature map. The 3D keypoints, regressed from the fused features, are then converted to patient model parameters through the embedded analytical inverse kinematics module. To capture more detailed patient structures, we also present a parametric anatomy avatar that extends the Skinned Multi-Person Linear model (SMPL) with internal bone and artery models. The final meshes are driven by the predicted parameters via the anatomical layer module, generating digital twins of patients. Experimental results on the Simultaneously-Collected Multimodal Lying Pose dataset demonstrate that our approach surpasses state-of-the-art human mesh recovery methods and is robust to occlusions.
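To make the three-stage pipeline concrete, below is a minimal, hypothetical PyTorch sketch of how the modules could be chained: attention-based RGB-D fusion, 3D keypoint regression, analytical inverse kinematics, and an anatomical layer. All module names, feature dimensions, the joint count, and the placeholder IK and skinning math are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# Names, dimensions, and the placeholder IK/skinning math are assumptions.
import torch
import torch.nn as nn


class MultimodalAttentionFusion(nn.Module):
    """Module 1: cross-modal attention between color and depth features."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_feat, depth_feat):
        # rgb_feat, depth_feat: (B, N, dim) flattened backbone features.
        # Depth tokens query the color stream so occluded (covered) regions
        # can borrow appearance cues, yielding a cover-insensitive feature.
        fused, _ = self.attn(depth_feat, rgb_feat, rgb_feat)
        return self.norm(fused + depth_feat)


class KeypointHead(nn.Module):
    """Regresses 3D keypoints from the fused feature map."""

    def __init__(self, dim=256, num_joints=24):
        super().__init__()
        self.num_joints = num_joints
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_joints * 3)
        )

    def forward(self, fused):
        pooled = fused.mean(dim=1)  # global average over spatial tokens
        return self.mlp(pooled).view(-1, self.num_joints, 3)


def analytical_ik(keypoints):
    """Module 2 (placeholder): closed-form conversion of 3D keypoints to
    SMPL-style axis-angle pose parameters. A real implementation would solve
    each joint rotation from bone directions; zeros keep the sketch runnable."""
    batch, joints, _ = keypoints.shape
    return torch.zeros(batch, joints * 3)


class AnatomicalLayer(nn.Module):
    """Module 3 (placeholder): drives skin, bone, and artery templates with
    the predicted pose, mimicking the extended-SMPL anatomy avatar."""

    def __init__(self, num_verts=6890, pose_dim=72):
        super().__init__()
        self.template = nn.Parameter(torch.zeros(num_verts, 3))  # rest mesh
        self.blend = nn.Linear(pose_dim, num_verts * 3)  # toy pose-to-offset map

    def forward(self, pose):
        offsets = self.blend(pose).view(-1, self.template.shape[0], 3)
        return self.template + offsets  # posed anatomy mesh, (B, V, 3)


if __name__ == "__main__":
    B, N, dim = 2, 196, 256
    rgb_feat, depth_feat = torch.randn(B, N, dim), torch.randn(B, N, dim)
    fused = MultimodalAttentionFusion(dim)(rgb_feat, depth_feat)
    keypoints = KeypointHead(dim)(fused)   # (B, 24, 3)
    pose = analytical_ik(keypoints)        # (B, 72)
    mesh = AnatomicalLayer()(pose)         # (B, 6890, 3)
    print(fused.shape, keypoints.shape, pose.shape, mesh.shape)
```

The attention direction here (depth queries attending to color) is one plausible reading of the "cover-insensitive" fusion; the paper's actual attention design, IK derivation, and anatomy skinning may differ.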