Paper ID: D3-S2-T2.3
Paper Title:
Reed-Muller Subcodes: Machine Learning-Aided Design of Efficient Soft Recursive Decoding |
Authors:
Mohammad Vahid Jamali, University of Michigan, United States; Xiyang Liu, University of Washington, United States; Ashok Vardhan Makkuva, University of Illinois at Urbana-Champaign, United States; Hessam Mahdavifar, University of Michigan, United States; Sewoong Oh, University of Washington, United States; Pramod Viswanath, University of Illinois at Urbana-Champaign, United States |
Session:
D3-S2-T2: Reed-Muller Codes |
Chaired Session: |
Wednesday, 14 July, 22:20 - 22:40 |
Engagement Session: |
Wednesday, 14 July, 22:40 - 23:00 |
Abstract:
Reed-Muller (RM) codes are conjectured to achieve the capacity of any binary-input memoryless symmetric (BMS) channel, and are observed to perform comparably to random codes in terms of scaling laws. On the negative side, RM codes lack efficient decoders whose performance is close to that of a maximum likelihood decoder for general parameters. Also, they admit only certain discrete sets of rates. In this paper, we focus on subcodes of RM codes with flexible rates, whose code dimension can take any value from 1 to n, where n is the blocklength. We first extend the recursive projection-aggregation (RPA) algorithm, recently proposed by Ye and Abbe for decoding RM codes, to these subcodes. To lower the complexity of the resulting decoding algorithm, referred to as subRPA, we investigate different ways of pruning the projections. We then derive a soft-decision version of our algorithm, called soft-subRPA, which is shown to improve upon the performance of subRPA. Furthermore, it enables training a machine learning (ML) model to search for good sets of projections that minimize the decoding error rate. Training our ML model achieves performance very close to that of full-projection decoding with a significantly reduced number of projections. For instance, our simulation results on a (64, 14) RM subcode show almost identical performance for full-projection decoding and for pruned-projection decoding with 15 projections selected by our trained ML model. This is equivalent to lowering the complexity by a factor of more than 4 without sacrificing decoding performance.
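As background on the projection step that subRPA prunes: in RPA-style decoding, the channel LLRs of a length-2^m code are projected onto the cosets of a one-dimensional subspace {0, b} of F_2^m, combining the two LLRs in each coset {z, z XOR b} with the standard "soft XOR" rule. The sketch below is illustrative only, written against the projection rule from Ye and Abbe's RPA paper; the function name project_llrs and the integer encoding of F_2^m are our own choices, not the authors' code.

```python
import numpy as np

def project_llrs(llr: np.ndarray, b: int) -> np.ndarray:
    """Project channel LLRs onto the cosets of the 1-D subspace {0, b}.

    llr: length-2^m array of LLRs, indexed by z in F_2^m encoded as integers.
    b:   nonzero integer in [1, 2^m) generating the subspace.

    Each coset {z, z ^ b} contributes one projected LLR, the soft XOR
    2 * atanh(tanh(l1 / 2) * tanh(l2 / 2)) of its two constituent LLRs.
    Returns an array of length 2^(m-1), one entry per coset.
    """
    n = llr.shape[0]
    seen = np.zeros(n, dtype=bool)
    out = []
    for z in range(n):
        if not seen[z]:
            seen[z] = seen[z ^ b] = True
            l1, l2 = llr[z], llr[z ^ b]
            out.append(2.0 * np.arctanh(np.tanh(l1 / 2.0) * np.tanh(l2 / 2.0)))
    return np.array(out)

# Example: a length-8 (m = 3) code has 2^3 - 1 = 7 one-dimensional
# projections; subRPA keeps only a subset of them to cut complexity.
llr = np.random.default_rng(0).normal(size=8)
for b in range(1, 8):
    print(b, project_llrs(llr, b))
```

Each such projection reduces the decoding problem to a code of half the length, which is decoded recursively and then aggregated back; pruning the set of b values (here, by an ML-trained selection) is what yields the reported complexity savings.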
|