LDPC Codes (September 21, 2010) 1 / 51
Outline 2 / 51
Capacity-achieving code families: irregular LDPC codes (λ(x), ρ(x)), polar codes, spatially coupled regular LDPC codes. 3 / 51
[Polyanskiy and Poor '08]
log M*(n, P_B) = nC(ε) − √(nV(ε)) Q^{−1}(P_B) + O(log n),
where M*(n, P_B) is the maximum code size achievable at block length n with block error probability P_B, and V(ε) is the channel dispersion:
BSC(ε): V = ε(1 − ε) log^2((1 − ε)/ε),
BEC(ε): V = ε(1 − ε). 4 / 51
Figure: Best achievable coding rate R* = (1/n) log M* vs. block length n [bit], for the BEC and BSC with C = 1/2 and P_B = 10^{−3}. 5 / 51
Equivalently, as n grows,
P_B = Q( √n (C(ε) − R) / √V(ε) ) (1 + o(1)),   where R := (1/n) log M(n, P_B). 6 / 51
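The normal approximation above is easy to evaluate numerically. A minimal sketch for the BEC(ε), using Q(x) = erfc(x/√2)/2; the rate 0.45 and the block lengths are illustrative choices, not values from the talk.

```python
from math import erfc, sqrt

def q_func(x: float) -> float:
    # Gaussian tail probability Q(x) = P(N(0,1) > x)
    return 0.5 * erfc(x / sqrt(2.0))

def normal_approx_pb(n: int, rate: float, eps: float) -> float:
    # P_B ~ Q( sqrt(n) * (C(eps) - R) / sqrt(V(eps)) ) over the BEC(eps),
    # with capacity C = 1 - eps and dispersion V = eps * (1 - eps)
    c = 1.0 - eps
    v = eps * (1.0 - eps)
    return q_func(sqrt(n) * (c - rate) / sqrt(v))

# block error probability of a rate-0.45 code over BEC(0.5) shrinks with n
for n in (10**3, 10**4, 10**5):
    print(n, normal_approx_pb(n, 0.45, 0.5))
```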
Figure: Rates achievable with block error probability 10^{−3} over the BEC(ε) with capacity C(ε) = 1 − ε = 1/2, vs. block length n [bit]: finite polar code, infinite and finite (2,4)-regular non-binary LDPC codes over GF(2^8), infinite and finite (3,6)-regular binary LDPC codes, and the finite-length optimal code. 7 / 51
Non-binary LDPC codes over GF(2^m):
C = { x ∈ GF(2^m)^N | Hx = 0 },
where H ∈ GF(2^m)^{M×N} is sparse, and each check node c imposes Σ_{v∈∂c} h_vc x_v = 0 over GF(2^m). 8 / 51
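A single parity check of this form can be sketched concretely. The slide does not fix the polynomial defining GF(2^8), so the code below assumes x^8 + x^4 + x^3 + x + 1 (0x11B); the coefficients h and symbols x are arbitrary toy values.

```python
POLY = 0x11B  # assumed field polynomial x^8 + x^4 + x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    # multiplication in GF(2^8): carry-less product reduced modulo POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def gf_inv(a: int) -> int:
    # brute-force inverse in the 256-element field
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def check_satisfied(h: list, x: list) -> bool:
    # check node: the GF(2^m) sum (bitwise XOR) of h_v * x_v must vanish
    s = 0
    for hv, xv in zip(h, x):
        s ^= gf_mul(hv, xv)
    return s == 0

# build a valid assignment for one check by solving for the last symbol
h = [0x53, 0xCA, 0x07]
x = [0xAA, 0x3B, 0]
partial = gf_mul(h[0], x[0]) ^ gf_mul(h[1], x[1])
x[2] = gf_mul(gf_inv(h[2]), partial)
print(check_satisfied(h, x))  # True
```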
Binary vs. non-binary LDPC codes — history:
Binary: Gallager ['62], Tanner ['83], MacKay ['95], Luby et al. ['98], Richardson & Urbanke ['99] (density evolution), Richardson & Urbanke ['02] (multi-edge type), Di et al. ['02] (stopping sets), Tian et al. ['03] (ACE algorithm), Thorpe ['03] (protographs).
Non-binary: Gallager ['62], Davey & MacKay ['96]: (2,d_c)-regular LDPC codes over GF(2^m) with m ≤ 8. 9 / 51
(2,d_c)-regular non-binary LDPC codes over GF(2^m): decoding performance improves as m grows, up to around m = 8. 10 / 51
A non-binary LDPC code is
{ (x_v)_v | Σ_{v∈∂c} h_vc x_v = 0 over GF(2^m) for every check node c },
with non-zero coefficients {h_vc} ⊂ GF(2^m) attached to the edges (v, c). (Example: a code over GF(2^6); we use GF(2^8).) 11 / 51
Sum-product decoding: messages are vectors of length 2^m, μ^(l)_{v→c} ∈ R^{2^m} and μ^(l)_{c→v} ∈ R^{2^m}.
Initialization: μ^(0)_{v→c}(x) = Pr(x | y_v) for x ∈ GF(2^m).
Check to variable: μ^(l+1)_{c→v}(x) = Σ_{x: x_v = x} I[ Σ_{v'∈∂c} h_{v'c} x_{v'} = 0 ] Π_{v'∈∂c\{v}} μ^(l)_{v'→c}(x_{v'}).
Variable to check: μ^(l+1)_{v→c}(x) = μ^(0)_{v→c}(x) Π_{c'∈∂v\{c}} μ^(l+1)_{c'→v}(x).
Tentative decision: μ̂^(l+1)_v(x) ∝ Pr(x | y_v) Π_{c∈∂v} μ^(l+1)_{c→v}(x). 12 / 51
Sum-product decoding complexity: the check-node convolution can be computed with an FFT (Walsh-Hadamard transform over GF(2)^m). Compared with binary LDPC codes, messages carry 2^m values (symbol LLRs) instead of a single bit LLR. Low-complexity variants: EMS decoding [Declercq et al.], [Kasai et al.]. 13 / 51
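The FFT-based check-node update relies on the fact that XOR-convolution diagonalizes under the Walsh-Hadamard transform. A self-contained sketch for 8-ary (m = 3) messages, with the coefficient permutations omitted; the random test vectors are our own.

```python
import random

def wht(v):
    # fast Walsh-Hadamard transform; length of v must be a power of two
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def xor_convolve_fft(p, q):
    # (p * q)(x) = sum_y p(y) q(x XOR y), computed in O(m 2^m)
    n = len(p)
    prod = [a * b for a, b in zip(wht(p), wht(q))]
    return [x / n for x in wht(prod)]  # WHT is its own inverse up to 1/n

def xor_convolve_naive(p, q):
    # direct O(2^{2m}) check-node convolution, for comparison
    out = [0.0] * len(p)
    for x, px in enumerate(p):
        for y, qy in enumerate(q):
            out[x ^ y] += px * qy
    return out

rng = random.Random(0)
p = [rng.random() for _ in range(8)]
q = [rng.random() for _ in range(8)]
fast, slow = xor_convolve_fft(p, q), xor_convolve_naive(p, q)
print(max(abs(a - b) for a, b in zip(fast, slow)) < 1e-9)  # True
```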
Analysis tools for non-binary LDPC codes: Gaussian approximation and EXIT charts [Bennatan et al.]; density evolution over the BEC with (m+1)-dimensional messages [Rathi et al.]; EXIT analysis [Rathi et al.]; [Andriyanova et al.]; ML decoding performance of (2,d_c)-regular non-binary LDPC codes [Nozaki et al.]. 14 / 51
15 / 51
LDPC 16 / 51
Designing low-rate LDPC codes:
Capacity-approaching irregular LDPC codes are available for large n, but designing short and moderate-length low-rate LDPC codes is difficult.
"One encounters a difficulty when designing very low rate LDPC codes in the standard irregular framework: it is relatively difficult to achieve high thresholds." — from "Multi-Edge Type LDPC Codes," 2003.
Structured low-rate LDPC codes have good thresholds, but they tend to have nodes of high degree. Optimized structured low-rate LDPC codes need large code lengths to exploit their potential decoding performance. 17 / 51
Low-rate LDPC codes tend to have many check nodes. In fact, an LDPC code with K information nodes and rate R has M = N(1 − R) = K(1 − R)/R check nodes.
Example: M = 9K for R = 0.1, while M = K/9 for R = 0.9.
Check nodes require complex computations. We solve these issues of designing and decoding low-rate LDPC codes with a very simple scheme. 18 / 51
We use a (2,3)-regular non-binary LDPC code over GF(2^8) as a mother code. Denote the codeword by (x_1, …, x_N) ∈ GF(2^8)^N; R = 1/3.
Conventional rate-lowering schemes:
Klinc et al. use shortening to reduce the rate: (x_1, 0, x_3, 0, …, x_N), R < 1/3.
Repetition also reduces the rate: (x_1, …, x_N, x_1, …, x_N), R = 1/6.
But repetition is the worst choice for coding: it simply uses the channel N more times without improving the Eb/N0 vs. BER curve. 19 / 51
Instead of plain repetition, we propose multiplicative repetition:
(x_1, …, x_N, h_1 x_1, …, h_N x_N), R = 1/6,
where h_1, …, h_N are chosen uniformly at random from GF(2^8)\{0}.
Multiplicative repetition twice gives
(x_1, …, x_N, h_1 x_1, …, h_N x_N, h'_1 x_1, …, h'_N x_N), R = 1/9.
Multiplicative repetition three times gives
(x_1, …, x_N, h_1 x_1, …, h_N x_N, h'_1 x_1, …, h'_N x_N, h''_1 x_1, …, h''_N x_N), R = 1/12. 20 / 51
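The multiplicative-repetition step can be sketched in a few lines. The GF(2^8) polynomial is not fixed on the slide, so x^8 + x^4 + x^3 + x + 1 is an assumption, and the toy mother-codeword length N = 6 is ours.

```python
import random

POLY = 0x11B  # assumed field polynomial x^8 + x^4 + x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    # multiplication in GF(2^8): carry-less product reduced modulo POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def mul_repeat(x, times, rng):
    # append `times` multiplicative copies (h_1 x_1, ..., h_N x_N), each with
    # fresh random non-zero coefficients; R drops from 1/3 to 1/(3(times+1))
    out = list(x)
    for _ in range(times):
        out.extend(gf_mul(rng.randrange(1, 256), xi) for xi in x)
    return out

rng = random.Random(0)
x = [rng.randrange(256) for _ in range(6)]  # toy mother codeword
y = mul_repeat(x, 2, rng)                   # two repetitions: R = 1/9
print(len(y) == 3 * len(x), y[:6] == x)     # True True
```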
Figure: Block error rate vs. Eb/N0 [dB] over the BIAWGN channel (K = 1024 bits): punctured C_1 (R = 1/2), C_1 (R = 1/3), C_2 (R = 1/6), MET codes (R = 1/2 and R = 1/6), and an ARA code (R = 1/6). 21 / 51
Figure: Block error rate vs. Eb/N0 [dB] over the BIAWGN channel (K = 192 bits): punctured C_1 (R = 1/2), C_1 (R = 1/3), C_2 (R = 1/6), and a hybrid code (R = 1/6). 22 / 51
Tanner graph of the mother code: a (2,3)-regular non-binary LDPC code over GF(2^8), R = 1/3. 23 / 51
R = 1/6. The decoder uses only the Tanner graph of the mother code. 24 / 51
R = 1/9. The decoder uses only the Tanner graph of the mother code. 25 / 51
Figure: Normalized gap to capacity (1 − ε* − R)/R vs. rate R over the BEC: proposed codes (T = 1, …, 10) vs. a shortened code. 26 / 51
27 / 51
Conventional block codes: the transmitter encodes k bits into n (> k/C) bits; n is fixed.
Fountain codes: the transmitter encodes k bits into a limitless stream of coded bits. The receiver collects an arbitrary n (> k/C) channel outputs to recover the k information bits; n is not fixed. 28 / 51
Raptor encoding:
1. Encode k information bits s_1, …, s_k into n coded bits x_1, …, x_n with a high-rate LDPC code.
2. Repeat the following indefinitely:
2.1 Choose d ∈ {1, 2, …} with probability Ω_d.
2.2 Choose d coded bits x_{i_1}, …, x_{i_d} uniformly at random from x_1, …, x_n.
2.3 Transmit x_{i_1} + ⋯ + x_{i_d}.
Ω_d is the output degree distribution. 29 / 51
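Step 2 above can be sketched directly. The degree distribution Ω below is a toy example, not an optimized Raptor distribution, and returning the chosen indices stands in for whatever header or shared seed the receiver uses to learn them.

```python
import random

def lt_output_symbol(x, omega, rng):
    # one fountain output symbol: steps 2.1-2.3 of the Raptor scheme
    degrees = sorted(omega)
    d = rng.choices(degrees, weights=[omega[k] for k in degrees])[0]  # 2.1
    idx = rng.sample(range(len(x)), d)                                # 2.2
    bit = 0
    for i in idx:                                                     # 2.3
        bit ^= x[i]
    return bit, idx

rng = random.Random(1)
x = [rng.randrange(2) for _ in range(100)]  # toy precoded bits
omega = {1: 0.1, 2: 0.5, 3: 0.4}            # toy degree distribution
bit, idx = lt_output_symbol(x, omega, rng)
```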
Drawbacks of Raptor codes:
1. The output distribution Ω_d needs to be optimized for each k.
2. The output distribution Ω_d tends to have a large maximum degree max{d : Ω_d ≠ 0}, which leads to large decoding complexity.
3. Optimized Raptor codes need a large information length k to exploit their potential decoding performance.
4. Over channels with small capacity C, the decoding complexity becomes large. 30 / 51
Proposed fountain coding with a non-binary LDPC mother code:
1. Map k information bits s_1, …, s_k into K = k/m non-binary symbols s_1, …, s_K ∈ {0, 1}^m.
2. Encode the K non-binary symbols s_1, …, s_K into N coded non-binary symbols x_1, …, x_N with a 2^m-ary (2,3)-regular non-binary LDPC code C_1.
3. Repeat the following indefinitely for i = 1, 2, ….
3.1 Choose a non-singular binary m × m matrix F_i, v_i ∈ {1, …, N}, and w_i ∈ {1, …, m} uniformly at random.
3.2 Transmit the w_i-th bit of F_i x_{v_i}. 31 / 51
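Step 3 can be sketched as follows, with symbols stored as m-bit integers and matrix rows as bitmasks. The rejection-sampling of F_i and the GF(2) rank routine are our own straightforward choices, not details from the talk.

```python
import random

def gf2_rank(rows):
    # Gaussian elimination over GF(2); rows are integer bitmasks
    rows = list(rows)
    rank = 0
    while rows:
        r0 = rows.pop()
        if r0 == 0:
            continue
        rank += 1
        msb = 1 << (r0.bit_length() - 1)  # pivot on the leading bit of r0
        rows = [r ^ r0 if r & msb else r for r in rows]
    return rank

def random_nonsingular(m, rng):
    # rejection-sample a non-singular binary m x m matrix F_i
    while True:
        f = [rng.randrange(1 << m) for _ in range(m)]
        if gf2_rank(f) == m:
            return f

def transmit_bit(x_symbols, m, rng):
    # steps 3.1-3.2: emit the w_i-th bit of F_i x_{v_i}
    f = random_nonsingular(m, rng)
    v = rng.randrange(len(x_symbols))
    w = rng.randrange(m)
    return bin(f[w] & x_symbols[v]).count("1") & 1  # row w of F_i dot x_v

rng = random.Random(0)
symbols = [rng.randrange(256) for _ in range(20)]  # toy codeword, m = 8
bits = [transmit_bit(symbols, 8, rng) for _ in range(10)]
```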
An example of a Tanner graph used for encoding/decoding. White dots represent the transmitted bits corresponding to collected channel outputs. The receiver collected n = 22 channel outputs. 32 / 51
Overhead thresholds ε* for the channel with C = 1, where the overhead is ε := Cn/k − 1.

m  | d_c=3  d_c=4  d_c=5  d_c=6  d_c=7
1  | 1.0799 1.1972 1.3104 1.4140 1.5082
2  | 0.5747 0.6637 0.7492 0.8275 0.8987
7  | 0.0888 0.1180 0.1495 0.1803 0.2091
8  | 0.0809 0.1013 0.1262 0.1517 0.1763
9  | 0.0792 0.0913 0.1104 0.1313 0.1522
10 | 0.0813 0.0858 0.0996 0.1166 0.1343
11 | 0.0856 0.0831 0.0921 0.1057 0.1207
12 | 0.0913 0.0822 0.0871 0.0976 0.1102
13 | 0.0976 0.0826 0.0837 0.0915 0.1019
16 | 0.1179 0.0877 0.0795 0.0806 0.0859
17 | 0.1245 0.0901 0.0792 0.0786 0.0825
18 | 0.1309 0.0925 0.0793 0.0770 0.0797
33 / 51
No optimization is needed. The decoding complexity does not depend on the channel capacity. The codes exhibit good decoding performance at short information lengths, but exhibit deep error floors. 34 / 51
Figure: Block error rate vs. overhead ε over the AWGN channel with k = 1024 bits and m = 8: proposed codes at C = 1.0, 0.5, 0.1 and Raptor codes at C = 1.0, 0.5. 35 / 51
Not capacity-achieving even for channels with C = 1, whereas Raptor codes achieve the capacity. 36 / 51
Figure: Histograms of the overheads ε (from 0.00 to 0.25) at which the proposed fountain code over {0, 1}^8 successfully recovers k = 192, 2^9, 2^10, 2^11, 2^13, 2^14, and 2^15 information bits over the channel with C = 1.0. 37 / 51
38 / 51
CSS (Calderbank, Shor, Steane) codes
CSS codes are quantum error-correcting codes. A CSS code is defined by a pair (C, D) of classical binary linear codes satisfying the following:
C⊥ ⊆ D (or equivalently D⊥ ⊆ C);
both C and D are good classical error-correcting codes, decodable from their syndromes — and, hopefully, efficiently decodable.
The first condition is equivalent to H_C H_D^T = 0, where H_C and H_D are the parity-check matrices of C and D, respectively. 39 / 51
Non-binary low-density parity-check (LDPC) codes
Non-binary LDPC codes are linear codes over GF(2^m) defined by a sparse parity-check matrix over GF(2^m):
C := { x ∈ GF(2^m)^N | Hx = 0 },
where H ∈ GF(2^m)^{M×N} is a sparse matrix. We focus on uniform column and row weights J and L. J = 2 is empirically known to perform best. Non-binary LDPC codes are efficiently decodable with belief propagation. 40 / 51
Companion matrices [MacWilliams and Sloane]
Let GF(2^m) be defined by a primitive polynomial x^m + Σ_{i=0}^{m−1} a_i x^i, and let α be a primitive element with α^m + Σ_{i=0}^{m−1} a_i α^i = 0. The following map A is an isomorphism from GF(2^m) into GF(2)^{m×m}:
α^i ↦ A(α^i) := A(α)^i ∈ GF(2)^{m×m},
A(α) :=
[0 0 0 ⋯ 0 a_0
 1 0 0 ⋯ 0 a_1
 0 1 0 ⋯ 0 a_2
 ⋮        ⋱ ⋮
 0 0 0 ⋯ 1 a_{m−1}]
What's more, we have A(α^i) v(α^j) = v(α^{i+j}), where v(β) := (b_0, …, b_{m−1})^T ∈ GF(2)^m is the coefficient vector of β = Σ_{i=0}^{m−1} b_i α^i ∈ GF(2^m). 41 / 51
Example of companion matrices
Ex. Take the primitive polynomial x^3 + x + 1 for GF(8). The companion matrix of each element of GF(8) is given as follows.
A(0) =
[0 0 0
 0 0 0
 0 0 0]
A(α^0) =
[1 0 0
 0 1 0
 0 0 1]
A(α) =
[0 0 1
 1 0 1
 0 1 0]
A(α^2) =
[0 1 0
 0 1 1
 1 0 1]
A(α^3) =
[1 0 1
 1 1 1
 0 1 1]
A(α^4) =
[0 1 1
 1 1 0
 1 1 1]
A(α^5) =
[1 1 1
 1 0 0
 1 1 0]
A(α^6) =
[1 1 0
 0 0 1
 1 0 0]
42 / 51
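These identities can be checked mechanically. A sketch for the GF(8) example: A below is A(α) read off the slide, and we verify that α has multiplicative order 7 and that A(α) is a root of x^3 + x + 1 over GF(2).

```python
def mat_mul(a, b):
    # matrix product over GF(2)
    m = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(m)) % 2 for j in range(m)]
            for i in range(m)]

def mat_pow(a, e):
    # e-fold product; e = 0 gives the identity
    r = [[int(i == j) for j in range(len(a))] for i in range(len(a))]
    for _ in range(e):
        r = mat_mul(r, a)
    return r

A = [[0, 0, 1],
     [1, 0, 1],
     [0, 1, 0]]  # A(alpha) for x^3 + x + 1 (a0 = 1, a1 = 1, a2 = 0)
I3 = mat_pow(A, 0)

print(mat_pow(A, 7) == I3)  # alpha has multiplicative order 7 -> True
# A(alpha) satisfies the primitive polynomial: A^3 + A + I = 0 over GF(2)
A3 = mat_pow(A, 3)
print(all((A3[i][j] + A[i][j] + I3[i][j]) % 2 == 0
          for i in range(3) for j in range(3)))  # True
```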
How to make orthogonal binary matrices from GF(2^m) ones
Lemma. Let H_Γ = (γ_{i,j}) and H_Δ = (δ_{i,j}) be M × N matrices over GF(2^m), and define the mM × mN binary matrices H_C and H_D block-wise by
H_C := ( A(γ_{i,j}) ),   H_D := ( A(δ_{i,j})^T ),   i = 0, …, M−1, j = 0, …, N−1.
In this setting, if H_Γ H_Δ^T = 0, then H_C H_D^T = 0.
Proof: (H_C H_D^T)_{i,j} = Σ_{k=0}^{N−1} A(γ_{i,k}) A(δ_{j,k}) = Σ_{k=0}^{N−1} A(γ_{i,k} δ_{j,k}) = A(Σ_{k=0}^{N−1} γ_{i,k} δ_{j,k}) = A((H_Γ H_Δ^T)_{i,j}) = A(0) = 0. 43 / 51
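The lemma can be demonstrated on a toy pair over GF(8), reusing A(α) for x^3 + x + 1: the 1 × 2 choice H_Γ = (α, α^3), H_Δ = (α^3, α) is ours, and satisfies H_Γ H_Δ^T = α·α^3 + α^3·α = 0, so the expanded binary matrices must satisfy H_C H_D^T = 0.

```python
def mat_mul(a, b):
    m = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(m)) % 2 for j in range(m)]
            for i in range(m)]

def mat_pow(a, e):
    r = [[int(i == j) for j in range(len(a))] for i in range(len(a))]
    for _ in range(e):
        r = mat_mul(r, a)
    return r

def transpose(a):
    return [list(col) for col in zip(*a)]

A = [[0, 0, 1], [1, 0, 1], [0, 1, 0]]  # A(alpha), primitive poly x^3 + x + 1

# field elements given as exponents: H_Gamma = (a^1, a^3), H_Delta = (a^3, a^1)
gamma, delta = [1, 3], [3, 1]
hc = [mat_pow(A, e) for e in gamma]             # blocks A(gamma_k) of H_C
hd = [transpose(mat_pow(A, e)) for e in delta]  # blocks A(delta_k)^T of H_D

# (H_C H_D^T) = sum_k A(gamma_k) (A(delta_k)^T)^T = sum_k A(gamma_k) A(delta_k)
prods = [mat_mul(c, transpose(d)) for c, d in zip(hc, hd)]
result = [[(prods[0][i][j] + prods[1][i][j]) % 2 for j in range(3)]
          for i in range(3)]
print(result == [[0] * 3 for _ in range(3)])  # True
```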
How to construct non-binary CSS-LDPC codes
Goal: construct two sparse matrices H_Γ, H_Δ over GF(2^m) such that H_Γ H_Δ^T = 0.
1. By the Hagiwara-Imai method, construct binary sparse matrices ĤC, ĤD of column weight J such that ĤC ĤD^T = 0.
2. Solve the equation H_Γ H_Δ^T = 0, where H_Γ and H_Δ are GF(2^m)-matrix variables with non-zero entries at the same locations as ĤC and ĤD, respectively.
3. The equation H_Γ H_Δ^T = 0 is a system of quadratic equations over a Galois field, which in general is known to be NP-complete to solve. Fortunately, it reduces to linear equations when J = 2, under a certain assumption. 44 / 51
Quasi-cyclic low-density parity-check codes
A (J, L, P) QC-LDPC code is a binary linear code defined by the following parity-check matrix ĤC:
ĤC =
[I(c_{0,0})   I(c_{0,1})   ⋯ I(c_{0,L−1})
 I(c_{1,0})   I(c_{1,1})   ⋯ I(c_{1,L−1})
 ⋮                        ⋱ ⋮
 I(c_{J−1,0}) I(c_{J−1,1}) ⋯ I(c_{J−1,L−1})],
where I(1) ∈ {0, 1}^{P×P} is the cyclic permutation matrix with ones on the superdiagonal and in the lower-left corner, and I(c_{j,l}) := I(1)^{c_{j,l}}. 45 / 51
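The circulant blocks are simple to construct directly; a sketch using the convention that row i of I(c) has its one in column i + c mod P, which makes I(c) = I(1)^c and lets shifts compose additively.

```python
def circulant(c, p):
    # I(c) = I(1)^c: row i has a single 1 in column (i + c) mod p
    return [[int((i + c) % p == j) for j in range(p)] for i in range(p)]

def mat_mul_gf2(a, b):
    # matrix product over GF(2)
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

# shifts compose additively: I(a) I(b) = I(a + b mod P)
P = 5
print(mat_mul_gf2(circulant(2, P), circulant(4, P)) == circulant(1, P))  # True
```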
Hagiwara-Imai method
Outputs binary QC matrices ĤC and ĤD such that ĤC ĤD^T = 0:
ĤC = ( I(c_{j,l}) ),   ĤD = ( I(d_{j,l}) ),   j = 0, …, J−1, l = 0, …, L−1.
Hagiwara and Imai '07: if #{0 ≤ l ≤ L−1 : c_{j,l} − d_{k,l} ≡ p (mod P)} is even for all j, k = 0, …, J−1 and p = 0, …, P−1, then ĤC ĤD^T = 0. 46 / 51
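A toy instance of the evenness condition, with J = 1, L = 2, P = 5 chosen by us: shifting both exponents of a row by the same amount makes every difference occur an even number of times, and the resulting QC matrices are then orthogonal over GF(2).

```python
def circulant(c, p):
    # P x P circulant permutation matrix I(c): row i has its 1 at (i + c) mod p
    return [[int((i + c) % p == j) for j in range(p)] for i in range(p)]

def condition_holds(c, d, P):
    # Hagiwara-Imai: each difference p must occur an even number of times
    # for every pair of block-rows (c_j, d_k)
    for cj in c:
        for dk in d:
            diffs = [(a - b) % P for a, b in zip(cj, dk)]
            if any(diffs.count(p) % 2 for p in range(P)):
                return False
    return True

P = 5
c = [[0, 3]]                                   # exponents of H_C (J = 1, L = 2)
d = [[(v + 2) % P for v in row] for row in c]  # H_D: both exponents shifted by 2

# H_C H_D^T = sum_l I(c_l) I(d_l)^T = sum_l I(c_l - d_l) = 2 I(-2) = 0 mod 2
hc_hd_t = [[0] * P for _ in range(P)]
for cl, dl in zip(c[0], d[0]):
    blk = circulant((cl - dl) % P, P)
    hc_hd_t = [[(x + y) % 2 for x, y in zip(r1, r2)]
               for r1, r2 in zip(hc_hd_t, blk)]

print(condition_holds(c, d, P), hc_hd_t == [[0] * P for _ in range(P)])  # True True
```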
Conjecture on the Hagiwara-Imai method
For J = 2, let H_C, H_D be parity-check matrices of (J, L, P) QC-LDPC codes C and D constructed by the Hagiwara-Imai method. Let n_0, …, n_{L−1} be the indices of the non-zero entries in the m-th row of H_D. In this setting, we conjecture that in the Tanner graph of H_C, the L variable nodes corresponding to the columns n_0, …, n_{L−1} and the L adjacent check nodes form a cycle of length 2L.
In other words: let m_{j,0}, m_{j,1} be the row indices of the two non-zero entries in column n_j of H_C (j = 0, …, L−1). Then there exists a permutation σ of {0, …, L−1} such that m_{σ(0),0} = m_{σ(1),1}, m_{σ(1),0} = m_{σ(2),1}, …, m_{σ(L−1),0} = m_{σ(0),1}. 47 / 51
How to solve H_Γ H_Δ^T = 0
The m-th row of H_Δ must be orthogonal to H_Γ. By the conjectured cycle structure, the relevant part of H_Γ is an L × L system in which each row has exactly two non-zero entries, and a non-zero solution δ_{m,n_j} ≠ 0 (j = 0, …, L−1) exists if and only if
Π_{j=0}^{L−1} γ_{m_{σ(j),0}, n_j} γ_{m_{σ(j),1}, n_j}^{−1} = 1.
Writing log(α^x) := x for α^x ∈ GF(2^m)\{0}, the equation above reduces to the following integer equation, linear in the logarithms:
Σ_{j=0}^{L−1} log(γ_{m_{σ(j),0}, n_j}) − Σ_{j=0}^{L−1} log(γ_{m_{σ(j),1}, n_j}) ≡ 0 (mod 2^m − 1). 48 / 51
Figure: Block error rate vs. flip probability f_m over the depolarizing channel: R_Q = 5/7 (n = 14,224, q = 2^8; n = 35,406, q = 2^9), R_Q = 1/2 (n = 8,768, q = 2^8; n = 10,728, q = 2^9; n = 20,560, q = 2^10), and R_Q = 1/3 (n = 4,656, q = 2^8; n = 10,746, q = 2^9; n = 11,940, q = 2^10). 49 / 51
Figure: Quantum rate R_Q vs. flip probability f_m over the depolarizing channel: the Shannon limit, the S2 BDD limit, a bicycle code (n = 3,786), a Cayley-graph code (n = 4,096), a turbo code (n = 4,000), and the proposed codes (n = 14,224; 35,406; 8,768; 10,728; 15,760; 4,656; 10,746; 11,940 with q = 2^8, 2^9, 2^10). 50 / 51
Conclusion 51 / 51