Learning to Denoise and Decode: A Novel Residual Neural Network Decoder for Polar Codes

Published: 01 Jan 2020, Last Modified: 11 Apr 2025 · IEEE Trans. Veh. Technol. 2020 · CC BY-SA 4.0
Abstract: Polar codes are the first capacity-achieving codes with low encoding and decoding complexity. The sequential nature of traditional polar decoding algorithms such as successive cancellation (SC) results in high decoding latency, which is unsuitable for services that require high reliability and low latency. Deep-learning-based decoding, referred to as the neural network decoder (NND), is highly competitive because of its non-iterative, fully parallel operation; however, the bit-error-rate (BER) performance of the NND is still not satisfactory. In this paper, we first propose a residual learning denoiser (RLD) for polar codes. The RLD remarkably improves the signal-to-noise ratio (SNR) and reduces the symbol-error-rate (SER) of received symbols. To decode polar codes more efficiently, we then propose a residual neural network decoder (RNND). Unlike the traditional pure NND (PNND), which decodes received symbols directly with a neural network, the proposed RNND concatenates an RLD for denoising with an NND for decoding. We provide a novel multi-task learning (MTL) strategy to jointly optimize the denoiser and the decoder, and find that the denoising gain of the jointly trained RLD (Joint-RLD) is more significant than that of the independently trained RLD (Independent-RLD). Numerical results show that the proposed RNND outperforms its counterpart PNND in BER performance. In addition, the best-performing RNND(MLP-MLP) approaches the BER performance of traditional SC decoding while reducing computation time by more than a factor of one hundred. Finally, the scalability of the RNND to longer polar codes as well as to LDPC codes further demonstrates the superiority of the proposed scheme.
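The abstract describes a two-stage pipeline: a residual learning denoiser that estimates the channel noise and subtracts it from the received symbols, followed by a neural network decoder that maps the denoised symbols to information bits. The following is a minimal NumPy sketch of that forward pass. All layer sizes, activations, the code parameters (N, K), and the random weights are illustrative assumptions, not the paper's configuration; in the paper both stages would be trained jointly under the MTL strategy.

```python
import numpy as np

# Hedged sketch of the RNND forward pass described in the abstract:
# RLD (residual denoiser) followed by an MLP-based NND. Sizes and
# activations are assumptions for illustration only.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp(x, layers, out_act=None):
    """Fully connected forward pass: ReLU on hidden layers,
    optional activation on the output layer."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = relu(x)
        elif out_act is not None:
            x = out_act(x)
    return x

def init(sizes):
    """Random weights for a stack of dense layers (untrained)."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

N, K = 16, 8                   # assumed code length / info bits
denoiser = init([N, 64, N])    # RLD: noisy symbols -> noise estimate
decoder = init([N, 128, K])    # NND: denoised symbols -> info bits

y = rng.normal(size=(4, N))    # a batch of received noisy symbols
noise_hat = mlp(y, denoiser)           # RLD predicts the noise ...
x_hat = y - noise_hat                  # ... residual subtraction denoises
u_hat = mlp(x_hat, decoder, sigmoid)   # NND emits soft bit estimates

print(u_hat.shape)  # -> (4, 8)
```

The residual connection (`y - noise_hat`) is what distinguishes the RLD from a plain autoencoder-style denoiser: the network only has to learn the noise, not the clean signal itself. Under the paper's MTL strategy, a weighted sum of a denoising loss on `x_hat` and a decoding loss on `u_hat` would be backpropagated through both stages jointly.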
