SIHeDA-Net: Sensor to Image Heterogeneous Domain Adaptation Network

Published: 09 May 2022, Last Modified: 12 May 2023
Venue: MIDL 2022 Short Papers
Keywords: Domain Adaptation, Latent Space Transfer, American Sign Language
Abstract: The main advantage of wearable devices is that they can be tracked without external infrastructure. However, unlike vision (cameras), there is a dearth of large-scale training data for developing robust ML models for wearable devices. SIHeDA-Net (Sensor-Image Heterogeneous Domain Adaptation) uses training data from public images of American Sign Language (ASL) so that inference can be performed on sensor data, even in the presence of errors, by bridging the domain gap through latent space transfer.
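The sketch below illustrates the general idea of latent space transfer between heterogeneous domains as described in the abstract; it is a minimal, hypothetical example, not the authors' SIHeDA-Net architecture (the actual code is in the linked repository). The encoder designs, latent dimension, sensor channel count, and the MSE alignment loss are all assumptions made for illustration: an image encoder and classifier are trained on abundant ASL image data, while a sensor encoder is pulled toward the same latent space so the image-trained classifier can be reused on noisy sensor readings.

```python
# Hypothetical sketch of sensor-to-image latent space transfer (not the paper's code).
import torch
import torch.nn as nn

LATENT_DIM = 64
NUM_CLASSES = 26  # placeholder, e.g. ASL alphabet

class ImageEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class SensorEncoder(nn.Module):
    def __init__(self, num_channels=10):  # assumed number of wearable sensor channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_channels, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

image_enc, sensor_enc = ImageEncoder(), SensorEncoder()
classifier = nn.Linear(LATENT_DIM, NUM_CLASSES)

cls_loss_fn = nn.CrossEntropyLoss()
align_loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(
    list(image_enc.parameters()) + list(sensor_enc.parameters())
    + list(classifier.parameters()), lr=1e-3)

# Dummy paired batch: labeled ASL images plus sensor readings of the same gestures.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (8,))
sensor_readings = torch.randn(8, 10)

z_img = image_enc(images)
z_sen = sensor_enc(sensor_readings)

# Classification is supervised only by image data; the alignment term transfers
# that supervision to the sensor domain through the shared latent space.
loss = cls_loss_fn(classifier(z_img), labels) + align_loss_fn(z_sen, z_img.detach())
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference time, noisy sensor data is classified via the shared classifier.
with torch.no_grad():
    preds = classifier(sensor_enc(sensor_readings)).argmax(dim=1)
```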
Registration: I acknowledge that acceptance of this work at MIDL requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: novel methodological ideas without extensive validation
Primary Subject Area: Transfer Learning and Domain Adaptation
Secondary Subject Area: Learning with Noisy Labels and Limited Data
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
TL;DR: Exploring heterogeneous domain transfer for gesture recognition. Our model, SIHeDA-Net, uses images of American Sign Language (ASL) to enable inference on noisy sensor data by bridging the domain gap through latent space transfer.
Code And Data: https://github.com/spider-tronix/SIHeDA-Net