TCNet: A Unified Framework for CSI Feedback Compression Leveraging Language Model as Lossless Compressor
Keywords: CSI feedback; Transformer-CNN hybrid architecture; language model
TL;DR: To leverage the complementary strengths of Transformers and CNNs, we propose TCNet, a hybrid framework combining CNNs with a Swin Transformer to achieve accurate CSI feedback reconstruction at reduced complexity.
Abstract: Transformer-based architectures have demonstrated strong capability in capturing global dependencies for CSI feedback, yet their high computational overhead limits practical deployment. In contrast, CNNs are more efficient and excel at extracting local features, but struggle with long-range modeling. To leverage the complementary strengths of both, we propose TCNet, a hybrid framework combining CNNs and a Swin Transformer to achieve accurate reconstruction with reduced complexity.
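A minimal sketch of the hybrid idea described above, assuming a PyTorch implementation: convolutions extract local CSI structure, and a Swin-style windowed self-attention block adds global context at low cost. All module names, tensor sizes, and the codeword dimension below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn


class WindowAttention(nn.Module):
    """Self-attention restricted to non-overlapping spatial windows."""

    def __init__(self, dim, window, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        ws = self.window
        # Partition the feature map into (H/ws * W/ws) windows of ws*ws tokens.
        t = x.view(b, c, h // ws, ws, w // ws, ws)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        t, _ = self.attn(t, t, t)
        # Reverse the partition back to the feature-map layout.
        t = t.view(b, h // ws, w // ws, ws, ws, c)
        return t.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)


class HybridEncoder(nn.Module):
    """CNN front end + windowed attention + linear projection to a codeword."""

    def __init__(self, in_ch=2, dim=32, hw=32, window=8, code_dim=128):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
        )
        self.global_ctx = WindowAttention(dim, window)
        self.project = nn.Linear(dim * hw * hw, code_dim)

    def forward(self, csi):                    # csi: (B, 2, 32, 32) real/imag
        feat = self.local(csi)                 # local feature extraction (CNN)
        feat = feat + self.global_ctx(feat)    # residual global refinement
        return self.project(feat.flatten(1))   # compressed codeword


# Example: a 2x32x32 angular-delay CSI matrix compressed to 128 floats.
encoder = HybridEncoder()
code = encoder(torch.randn(4, 2, 32, 32))
print(code.shape)  # torch.Size([4, 128])
```

A decoder would mirror this structure; the residual connection around the attention block is one common way to let the CNN path dominate while attention supplies long-range context.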
Beyond lossy compression, we further introduce a language-model-based lossless coding scheme that significantly improves bit-level efficiency. Unlike conventional fixed-length or entropy-based encoding methods, our approach employs a lightweight language model as a universal probability estimator for variable-length arithmetic coding. To ensure compatibility with communication data, we design an alignment mechanism that maps CSI representations into a token structure suitable for language modeling. This alignment enables our method to generalize to other compression tasks in wireless communications. Experimental results on the COST2100 dataset demonstrate that our framework achieves the best NMSE–bit-rate trade-off, highlighting the potential of integrating language modeling with compression tasks in wireless communications.
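A minimal sketch of the lossless stage under stated assumptions (the tokenizer, model, and all names here are hypothetical, not the paper's implementation): a quantized CSI codeword is aligned to a token sequence, a small autoregressive model predicts P(next token | context), and an arithmetic coder can then spend about -log2 p bits per token. For brevity this computes that ideal code length rather than running a full arithmetic coder.

```python
import math
import torch
import torch.nn as nn

VOCAB = 16  # assumed 4-bit uniform quantizer -> 16 token ids


def align_to_tokens(codeword, levels=VOCAB):
    """Map a real-valued codeword in [-1, 1] to integer token ids."""
    q = torch.clamp((codeword + 1) / 2, 0, 1) * (levels - 1)
    return q.round().long()


class TinyLM(nn.Module):
    """A small GRU language model used purely as a probability estimator."""

    def __init__(self, vocab=VOCAB, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):                 # tokens: (B, L)
        h, _ = self.rnn(self.emb(tokens))
        return self.head(h)                    # next-token logits


def ideal_bits(model, tokens, bos=0):
    """Bits an arithmetic coder would need under the model's probabilities."""
    # Shift right and prepend a start token (id 0 reused, a simplification).
    inp = torch.cat([torch.full_like(tokens[:, :1], bos), tokens[:, :-1]], 1)
    logp = torch.log_softmax(model(inp), dim=-1)
    nll = -logp.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
    return nll.sum(dim=1) / math.log(2)        # nats -> bits per sequence


tokens = align_to_tokens(torch.rand(4, 128) * 2 - 1)
lm = TinyLM()
print(ideal_bits(lm, tokens))  # vs. 128 * 4 = 512 bits for fixed-length coding
```

An untrained model yields roughly log2(16) = 4 bits per token, matching fixed-length coding; the gains claimed in the abstract would come from training the model so that its predictions concentrate probability on the observed symbols.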
Submission Number: 28