Abstract: Advancing technology to monitor our bodies and behavior during sleep and rest is essential for healthcare. However, a key challenge arises from our tendency to rest under blankets. We present a multimodal approach to uncover subjects and view bodies at rest without blankets obscuring the view. To this end, we introduce a channel-based fusion scheme that effectively fuses different modalities, leveraging the knowledge captured by multimodal sensors, both visual and non-visual. The channel-based fusion scheme also makes the model flexible at inference: anywhere from one to all of the input modalities can be supplied at test time. Nonetheless, with or without multimodal data, detecting humans at rest in bed remains challenging due to the extreme occlusion caused by a blanket. To mitigate the negative effects of blanket occlusion, we use an attention-based reconstruction module that explicitly reduces the uncertainty of occluded parts by generating uncovered modalities, which in turn refine the current estimate in a cyclic fashion. Extensive experiments validate the proposed model's superiority over existing methods.
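The channel-based fusion described above can be sketched minimally as stacking each modality along the channel axis of a shared input tensor, so a single network consumes any subset of modalities. This is a hypothetical illustration, not the paper's implementation: the function name `channel_fuse` and the choice to zero-fill missing modalities are assumptions made here to show how one-to-many inputs at test time could be accommodated.

```python
import numpy as np

def channel_fuse(modalities, channels_per_modality):
    """Stack available modalities along the channel axis.

    Missing modalities (passed as None) are zero-filled so the fused
    tensor keeps a fixed channel layout no matter which inputs are
    present -- one plausible way to allow one-to-many modalities at
    inference. Each modality array has shape (channels, H, W).
    """
    # Infer spatial size (H, W) from any modality that is present.
    h, w = next(m.shape[1:] for m in modalities if m is not None)
    fused = []
    for m, c in zip(modalities, channels_per_modality):
        if m is None:
            m = np.zeros((c, h, w))  # placeholder for an absent sensor
        fused.append(m)
    return np.concatenate(fused, axis=0)

# Example: RGB (3 channels) present, depth (1 channel) missing at test time.
rgb = np.random.rand(3, 8, 8)
fused = channel_fuse([rgb, None], [3, 1])
print(fused.shape)  # (4, 8, 8)
```

At training time, randomly dropping modalities in this way would encourage the model to tolerate whichever subset of sensors is available at inference.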