From Algebraic Structure to Neural Parameters: A Cyclic Codes Perspective on Transformer-Based Decoders
Keywords: Algebraic Structure, Cyclic Codes, Parameter Interpretability, Error Correction Pattern, Relationship
TL;DR: From the perspective of cyclic codes, we bridge the algebraic structure of the codes and the decoder parameters, and propose a plug-and-play method that improves decoder performance while reducing the parameter count.
Abstract: The advent of Transformer architectures has significantly enhanced the performance and flexibility of neural decoders. Meanwhile, cyclic codes continue to play a crucial role in practical communication systems. In this paper, we bridge these two domains by proposing a novel decoding approach that integrates the algebraic structure of cyclic codes into Transformer-based decoders. Leveraging the inherent cyclic properties, we introduce interpretable error correction patterns and inter-node relationship hypotheses that link the structural characteristics of the codes to the model parameters. Building on these insights, we design a plug-and-play, flexibly deployable decoding method tailored for cyclic codes. Experimental results show that our method reduces the bit error rate (BER) by an order of magnitude on average, while also reducing the total number of parameters by approximately 97%. Additional comparative experiments validate our proposed conjectures and highlight a promising pathway for bridging classical coding theory and modern Transformer-based decoding architectures.
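The "inherent cyclic properties" the abstract leverages refer to the defining invariance of cyclic codes: every cyclic shift of a codeword is again a codeword. As a minimal illustrative sketch (not taken from the paper), the following checks this property for the (7,4) binary cyclic Hamming code built from the generator polynomial g(x) = 1 + x + x^3:

```python
from itertools import product

N, K = 7, 4
G = [1, 1, 0, 1]  # g(x) = 1 + x + x^3, lowest-degree coefficient first

def encode(msg):
    """Multiply the message polynomial by g(x) over GF(2)."""
    out = [0] * N
    for i, m in enumerate(msg):
        if m:
            for j, g in enumerate(G):
                out[i + j] ^= g
    return tuple(out)

# All 2^4 = 16 codewords of the (7,4) cyclic code.
codewords = {encode(msg) for msg in product([0, 1], repeat=K)}

# Defining property of a cyclic code: every cyclic shift of a
# codeword is itself a codeword.
for cw in codewords:
    for s in range(N):
        assert cw[s:] + cw[:s] in codewords

print(len(codewords))  # 16 distinct codewords
```

This shift invariance is the structural symmetry that the paper's error correction patterns and inter-node relationship hypotheses tie to the decoder's parameters.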
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 3007