SSNet: Flexible and robust channel extrapolation for fluid antenna systems enabled by a self-supervised learning framework
Abstract: Fluid antenna systems (FAS) represent a pivotal advancement for 6G communication by enhancing spectral efficiency and robustness. However, obtaining accurate channel state information (CSI) in FAS is challenging due to its complex physical structure. Traditional methods, such as pilot-based interpolation and compressive sensing, are not only computationally intensive but also lack adaptability. Current extrapolation techniques relying on rigid parametric models do not accommodate the dynamic environment of FAS, while data-driven deep learning approaches demand extensive training and are vulnerable to noise and hardware imperfections. To address these challenges, this paper introduces a novel self-supervised learning network (SSNet) designed for efficient and adaptive channel extrapolation in FAS. We formulate channel extrapolation in FAS as an image reconstruction task, in which a limited number of unmasked pixels (representing the known CSI of the selected ports) are used to extrapolate the masked pixels (the CSI of the unselected ports). SSNet capitalizes on the intrinsic structure of FAS channels, learning generalized representations from raw CSI data and thus reducing dependency on large labeled datasets. For enhanced feature extraction and noise resilience, we propose a mixture-of-experts (MoE) module in which multiple feedforward neural networks (FFNs) operate in parallel; their outputs are combined by a weighted sum whose per-FFN weights are computed by a softmax gating function. Extensive simulations validate the superiority of the proposed model. Results indicate that SSNet significantly outperforms benchmark models, such as AGMAE and long short-term memory (LSTM) networks, while using a much smaller labeled dataset. A key observation is that the proposed model is trained more effectively with a small unmasked ratio of known CSI. Specifically, SSNet trained with the CSI of 10% of the total ports outperforms versions trained with the CSI of 25% and 50% of the total ports. This is because, with fewer known CSI values available during training, the model is forced to learn more effective channel correlations for extrapolation, at the expense of higher training complexity. Ablation experiments reveal substantial performance gains from the integration of the MoE module. Furthermore, zero-shot learning experiments show a moderate performance degradation of about 3-5 dB, underscoring the model's robust generalization ability. Finally, inference-speed experiments illustrate that the proposed model outperforms the benchmark models dramatically at the expense of slightly longer execution times of 1.13 ms, 2.9 ms, and 3.12 ms on NVIDIA RTX 4090, 4060, and 3060 graphics processing units (GPUs), respectively.
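To make the gating mechanism described in the abstract concrete, the following is a minimal sketch of a softmax-gated MoE block: several parallel FFN experts process the same input, and a gating function assigns each expert a softmax weight before the outputs are summed. The number of experts, layer sizes, input shape, and all names are illustrative assumptions, not the paper's actual SSNet configuration.

```python
# Minimal sketch (assumed configuration) of a mixture-of-experts block:
# parallel FFN experts whose outputs are combined by a softmax-gated weighted sum.
import torch
import torch.nn as nn


class MoEBlock(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 256):
        super().__init__()
        # Parallel expert FFNs (assumed two-layer MLPs).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )
        # Gating function: produces one weight per expert, normalized by softmax.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), e.g. embedded CSI "pixels" of the unmasked ports.
        weights = torch.softmax(self.gate(x), dim=-1)                   # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, dim, E)
        # Weighted sum of the expert outputs.
        return (expert_out * weights.unsqueeze(-2)).sum(dim=-1)         # (B, T, dim)


if __name__ == "__main__":
    # Toy usage: a random token sequence stands in for embedded port CSI.
    x = torch.randn(2, 64, 128)
    print(MoEBlock(dim=128)(x).shape)  # torch.Size([2, 64, 128])
```

In this sketch the gate is a dense (soft) mixture, i.e. every expert contributes to every token; a sparse top-k gate would be an alternative design choice not implied by the abstract.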
DOI: 10.1109/jsac.2025.3619472