Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals

ICLR 2024 Workshop TS4H, Submission 10

Published: 08 Mar 2024 · Last Modified: 31 Mar 2024 · TS4H Poster · CC BY 4.0
Keywords: biosignals, pretraining, multimodality, transformer
TL;DR: To achieve effective pretraining in the presence of potential distributional shifts, we propose a frequency-aware masked autoencoder (bioFAME) that learns to parameterize the representation of biosignals in the frequency space.
Abstract: Leveraging multimodal information from biosignals is vital for building a comprehensive representation of people's physical and mental states. However, multimodal biosignals often exhibit substantial distributional shifts between pretraining and inference datasets, stemming from changes in task specification or variations in modality composition. To achieve effective pretraining in the presence of such distributional shifts, we propose a frequency-aware masked autoencoder (bioFAME) that learns to parameterize the representation of biosignals in the frequency space. bioFAME incorporates a frequency-aware transformer, which leverages a fixed-size Fourier-based operator for global token mixing, independent of the length and sampling rate of the inputs. To preserve the frequency components within each input channel, we further employ a frequency-maintain pretraining strategy that performs masked autoencoding in the latent space. The resulting architecture effectively utilizes multimodal information during pretraining, and can be seamlessly adapted to diverse tasks and modalities at test time, regardless of input size and order. We evaluated our approach on a diverse set of transfer experiments on unimodal time series, achieving an average improvement of $5.5\%$ in classification accuracy over the previous state-of-the-art.
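The fixed-size Fourier-based operator described in the abstract can be illustrated with a minimal sketch: mix tokens globally by transforming the token sequence to the frequency domain, applying a learnable filter of fixed size to the low-frequency bins, and transforming back. This is an assumption-laden illustration of the general technique, not the authors' implementation; the function name `fourier_token_mixing` and the parameter `freq_filter` are hypothetical.

```python
import numpy as np

def fourier_token_mixing(x, freq_filter):
    """Global token mixing via a fixed-size frequency-domain filter (sketch).

    x: (num_tokens, dim) real-valued token embeddings.
    freq_filter: (num_bins,) complex weights applied to the lowest
        num_bins frequency components; its size is fixed regardless of
        the input's length or sampling rate.
    """
    X = np.fft.rfft(x, axis=0)                     # (num_tokens//2 + 1, dim)
    k = min(len(freq_filter), X.shape[0])
    X[:k] *= freq_filter[:k, None]                 # modulate low-frequency bins
    return np.fft.irfft(X, n=x.shape[0], axis=0)   # back to token space

# The same fixed-size filter applies to sequences of different lengths,
# which is the property that makes the operator input-size agnostic.
rng = np.random.default_rng(0)
filt = rng.standard_normal(8) + 1j * rng.standard_normal(8)
short = fourier_token_mixing(rng.standard_normal((16, 4)), filt)
long_ = fourier_token_mixing(rng.standard_normal((64, 4)), filt)
```

Because the filter acts on a fixed number of frequency bins rather than on token positions, the output always matches the input's shape, independent of sequence length.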
Submission Number: 10