Robust Channel Representation for Wireless: A Multi-Task Masked Contrastive Approach

Published: 24 Sept 2025 · Last Modified: 18 Nov 2025 · AI4NextG @ NeurIPS 2025 (Oral) · CC BY 4.0
Keywords: self-supervised learning, wireless channel representation, representation learning, ML for Wireless
TL;DR: ContraWiMAE learns robust wireless channel representations by contrasting differently masked views of the same channel, using wireless complexity as natural augmentation.
Abstract: Wireless environments present unique challenges for machine learning (ML) due to partial observability, multi-domain characteristics, and dynamic channel conditions. To address these challenges, we propose ContraWiMAE (Wireless Masked Contrastive Autoencoder), a multi-task learning framework that learns robust representations from incomplete wireless channel observations while avoiding expensive augmentation engineering. Our approach combines masked autoencoder learning with a novel masked contrastive objective that contrasts differently masked versions of the same channel, leveraging the inherent complexity of the wireless medium as natural augmentation. We employ a curriculum learning strategy that systematically develops representations which preserve structural properties while enhancing discriminative capability. The framework enables learning task-specific invariances, which we demonstrate through noise invariance and improved linear separability, evaluated on channel estimation and cross-frequency beam selection in unseen environments. Our experiments show superior representation stability under severe noise, with performance close to supervised training.
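The core idea of the masked contrastive objective can be illustrated with a minimal, hypothetical sketch: two independently masked views of the same channel form a positive pair, and an InfoNCE-style loss pulls their embeddings together. Everything below (the random-projection "encoder", mask ratio, temperature, and synthetic channel data) is an illustrative assumption, not the paper's actual architecture, which pairs this objective with masked-autoencoder reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(x, ratio=0.6):
    """Zero out a random subset of entries (a stand-in for patch masking)."""
    keep = rng.random(x.shape) > ratio
    return x * keep

def encode(x, W):
    """Toy encoder: flatten and linearly project, then L2-normalize.
    (The actual model would use a learned transformer encoder.)"""
    z = x.reshape(x.shape[0], -1) @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Batch of 4 synthetic "channels" (8x8 real-valued grids for illustration).
channels = rng.standard_normal((4, 8, 8))
W = rng.standard_normal((64, 16))  # hypothetical projection weights

# Two differently masked views of the same channels act as positive pairs;
# the wireless channel's own structure serves as the "augmentation".
z1 = encode(random_mask(channels), W)
z2 = encode(random_mask(channels), W)

# InfoNCE-style contrastive loss: each channel's two views should match,
# while other channels in the batch serve as negatives.
tau = 0.1
logits = z1 @ z2.T / tau                         # cosine similarities / temperature
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
idx = np.arange(len(channels))                   # positives lie on the diagonal
contrastive_loss = -log_probs[idx, idx].mean()
print(f"contrastive loss: {contrastive_loss:.3f}")
```

In the full framework this term would be combined with the masked-reconstruction loss as a weighted multi-task objective; the sketch only isolates the contrastive component.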
Submission Number: 65