Abstract: Haptic feedback is becoming a crucial element for enhancing immersion in various media applications. Enriching this feedback requires high-quality haptic content, an appropriate playback device, and efficient codecs for transmission. This paper introduces a novel vibrotactile codec that employs an autoencoder architecture built on Convolutional Neural Networks (CNNs). It leverages a tailored perceptual model with a band structure derived from the audio domain, optimizing the perceived quality of the encoded signals during training. Additionally, we develop and assess multiple perceptual training losses to further enhance the performance of our codec.
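To make the idea of a band-structured perceptual training loss concrete, the following is a minimal NumPy sketch of a spectral loss computed over logarithmically spaced frequency bands, loosely mirroring audio-style band decompositions. The band layout, band count, and uniform weighting are illustrative assumptions, not the perceptual model actually used in the paper.

```python
import numpy as np

def band_edges(n_bins, n_bands=8):
    # Log-spaced band edges over the spectrum bins (illustrative assumption,
    # inspired by audio-domain band structures; not the paper's actual bands).
    edges = np.unique(np.round(np.logspace(0, np.log10(n_bins), n_bands + 1)).astype(int))
    edges[0] = 0
    edges[-1] = n_bins
    return edges

def perceptual_band_loss(x, x_hat, n_bands=8):
    """Mean of per-band spectral MSEs between a signal and its reconstruction.

    Averaging per band (rather than over all bins at once) gives low-frequency
    bands, which contain few bins, the same influence as wide high bands.
    """
    X = np.abs(np.fft.rfft(x))
    X_hat = np.abs(np.fft.rfft(x_hat))
    edges = band_edges(len(X), n_bands)
    band_errors = [np.mean((X[lo:hi] - X_hat[lo:hi]) ** 2)
                   for lo, hi in zip(edges[:-1], edges[1:])]
    return float(np.mean(band_errors))
```

In a training loop, such a loss would be applied to the autoencoder's input and output (in an autodiff framework rather than NumPy); the sketch only shows the band-wise structure of the objective.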
External IDs: doi:10.1007/978-3-031-70061-3_22