Abstract: With the continuing advancement of ubiquitous computing and various sensor technologies, a rapidly growing population of multimodal sensors is being deployed at the edge, which poses significant challenges in fusing their data. In this poster we propose MultimodalHD, a novel Hyperdimensional Computing (HD)-based design for learning from multimodal data on edge devices. We use HD to encode raw sensory data into high-dimensional, low-precision hypervectors, which are then fed to an attentive fusion module that learns richer representations via inter-modality attention. Our experiments on multimodal time-series datasets show that MultimodalHD is highly efficient: it achieves 17x and 14x speedups in training time per epoch on the HAR and MHEALTH datasets, respectively, compared with state-of-the-art RNNs, while maintaining comparable accuracy.
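To make the described pipeline concrete, below is a minimal sketch (not the authors' implementation) of the two stages the abstract mentions: (1) HD-encoding each modality's raw features into a high-dimensional, low-precision (bipolar) hypervector, and (2) fusing the per-modality hypervectors with an inter-modality attention layer. The hypervector dimensionality, the random-projection encoder, the modality count, and the `AttentiveFusion` module are illustrative assumptions, not details taken from the poster.

```python
# Sketch only: assumed dimensions and a generic random-projection HD encoder
# plus multi-head attention stand in for MultimodalHD's actual components.
import torch

D = 2048          # hypervector dimensionality (assumed)
FEAT = 64         # per-modality raw feature size (assumed)
MODALITIES = 3    # e.g. accelerometer, gyroscope, magnetometer (assumed)

# Random-projection HD encoder: project raw features to D dims, then binarize
# to a bipolar {-1, +1} hypervector (low precision).
proj = [torch.randn(FEAT, D) for _ in range(MODALITIES)]

def hd_encode(x, m):
    """Encode raw features x of modality m into a bipolar hypervector."""
    return torch.sign(x @ proj[m])

class AttentiveFusion(torch.nn.Module):
    """Fuse per-modality hypervectors via inter-modality self-attention."""
    def __init__(self, d, heads=4):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, hvs):                      # hvs: (batch, modalities, D)
        fused, _ = self.attn(hvs, hvs, hvs)      # attend across modalities
        return fused.mean(dim=1)                 # pooled joint representation

# Toy usage: a batch of 8 samples with 3 modalities.
raw = torch.randn(8, MODALITIES, FEAT)
hvs = torch.stack([hd_encode(raw[:, m], m) for m in range(MODALITIES)], dim=1)
joint = AttentiveFusion(D)(hvs)                  # (8, D) fused representation
print(joint.shape)
```

In this sketch the binarized hypervectors keep the encoding cheap and low-precision, while only the small attention module carries trainable parameters, which is consistent with the efficiency argument the abstract makes.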