Belief propagation


Implementing the Belief Propagation Algorithm in MATLAB

Björn S. Rüffer, Christopher M. Kellett

Technical Report, version as of November 13, 2008

We provide some example Matlab code as a supplement to the paper. This technical report is not intended as a standalone introduction to the belief propagation algorithm, but instead only aims to provide some technical material which did not fit into the paper.

1 Introduction

For an excellent introduction and mathematical treatise of modern iterative decoding theory, we refer to [4].

Worth mentioning is also the survey paper [2] on factor graphs and the sum-product algorithm, which contains belief propagation. This report provides Matlab code to implement the belief propagation algorithm as a dynamical system. More conventional implementations of the belief propagation concepts (that is, from a coding perspective) exist and some are publicly available [3]. The Matlab code examples detailed in this report can be found, along with the most up-to-date version of this report itself, at [5].

Our presentation also differs in another aspect from the standard ones: unlike the information theory convention, where messages and codewords are represented by row vectors, we throughout use column vectors, as this is standard in dynamical systems. Of course this does not lead to differences other than representational ones. This report is organized as follows: in Section 2 we give a simple example of how one can generate a very basic random parity-check matrix and compute a corresponding generator matrix.

Section 3 details the channel transmission and Section 4 provides code to implement the belief propagation algorithm as a dynamical system. The output trajectories obtained using this Matlab code can then be plotted using the routine in Section 5. In Section 6 we provide a more advanced method for generating parity-check matrices with prescribed degree distribution.

(School of Electrical Engineering and Computer Science, The University of Newcastle, Australia, Bjoern.Rueffer@newcastle.edu.au, Chris.Kellett@newcastle.edu.au)

2 Parity-check and generator matrices

A parity-check matrix is any matrix H ∈ F_2^{m×n}.

Throughout we assume that n = m + k, with n, m, k > 0, and that H has full rank, i.e., rank m. If H does not have full rank, then rows can be removed until it does, thereby increasing k and decreasing m accordingly. To generate a parity-check matrix for a repeat-n code in canonical form, one could use the following Matlab statement:

n=10; m=n-1; H=sparse(m,n); for i=1:m, H(i,[i i+1])=1; end

A simple random parity-check matrix can be generated using the code in Listing 1, and code for generating more involved parity-check matrices is given in Section 6.
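For readers following along in another language, the same repeat-n construction can be sketched in Python/NumPy (a sketch under the assumption that the canonical checks are x_i + x_{i+1} = 0; the name repeat_code_H is chosen here, not from the report):

```python
import numpy as np

def repeat_code_H(n):
    """Parity-check matrix of the length-n repetition code:
    m = n - 1 checks of the form x_i + x_{i+1} = 0 (mod 2)."""
    m = n - 1
    H = np.zeros((m, n), dtype=int)
    for i in range(m):
        H[i, i] = 1
        H[i, i + 1] = 1
    return H

H = repeat_code_H(10)
# the codewords of the repetition code are all-zeros and all-ones
c = np.ones(10, dtype=int)
print(H.shape, (H @ c) % 2)  # (9, 10) and the all-zero syndrome
```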

Using only Gauss elimination and possibly by swapping columns, H can be brought into the form

QHP = [I_m  A],   (1)

where the invertible matrix Q ∈ F_2^{m×m} encodes the steps of the Gauss elimination (swapping and adding rows of H), P ∈ F_2^{n×n} is a permutation matrix to encode the swapping of columns in H, A ∈ F_2^{m×k}, and I_m is the m × m identity matrix. A generator matrix for H is a matrix G ∈ F_2^{n×k} such that HG = 0. According to (1) we can take G to be

G = P [ A ; I_k ],   (2)

since QHG = [I_m  A] P^{-1} P [ A ; I_k ] = A + A = 2A = 0 (in F_2). Now G maps message vectors m ∈ F_2^k to codewords c = Gm ∈ F_2^n, i.e., to elements of the null-space C = C_H = {x ∈ F_2^n : Hx = 0} of H, which is also termed the set of codewords or just the code. The MATLAB code examples in Listings 1 and 2 can be used to generate a very basic parity-check matrix and to obtain a generator matrix from a given parity-check matrix.
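Equation (2) can be verified numerically for the special case Q = P = I, i.e., when H is already in the form [I_m  A]; a small Python sketch (the concrete A below is chosen here purely for illustration):

```python
import numpy as np

# a parity-check matrix already in the form H = [I_m | A], with m = 3, k = 2
A = np.array([[1, 0],
              [1, 1],
              [0, 1]])
m, k = A.shape
H = np.hstack([np.eye(m, dtype=int), A])

# by (2), with Q = I and P = I, a generator matrix is G = [A; I_k]
G = np.vstack([A, np.eye(k, dtype=int)])

print((H @ G) % 2)  # the zero matrix: HG = A + A = 2A = 0 over F_2
```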

Listing 1: A crude way to obtain a simple parity-check matrix, by just specifying the dimensions m and n and a density of non-zero elements in H of at least d ∈ (0, 1).

function H = generate_H(m,n,d)
% H = GENERATE_H(m,n,d)
%
% generate an m by n parity-check matrix, where the density of
% non-zero entries is influenced by the parameter d (between zero
% and one)
H = sparse(m,n);
while ~(all(sum(H,1)>=2) && all(sum(H,2)>=2)),
    H = H + (sprand(m,n,d) > 0);
    H = mod(H,2);
end
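A Python/NumPy analogue of Listing 1 might look as follows (a sketch; the stopping rule "every row and every column has weight at least 2" is taken from the listing, the seeded generator and the rest are assumptions):

```python
import numpy as np

def generate_H(m, n, d, rng=np.random.default_rng(0)):
    """Random m-by-n binary parity-check matrix whose density of ones
    is influenced by d in (0,1); keep XOR-ing in random ones until
    every row and every column has weight at least 2."""
    H = np.zeros((m, n), dtype=int)
    while not (H.sum(axis=0).min() >= 2 and H.sum(axis=1).min() >= 2):
        H = (H + (rng.random((m, n)) < d)) % 2
    return H

H = generate_H(4, 8, 0.3)
print(H.sum(axis=0).min() >= 2, H.sum(axis=1).min() >= 2)  # True True
```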

Listing 2: A code example to construct a generator matrix for a given parity-check matrix. Some consistency checks (e.g., to see if n > m) are omitted.

function [G] = generatormatrix(H)
% [G] = GENERATORMATRIX(H)
%
% compute a generator matrix G given a sparse parity-check matrix H
Hp = H;
[m,n] = size(Hp);
% suppose n > m
...
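The same construction can be sketched end-to-end in Python/NumPy, following (1) and (2) (an illustrative re-implementation, not the authors' code; the pivoting details and names are assumptions):

```python
import numpy as np

def generator_matrix(H):
    """Given a full-rank binary m-by-n parity-check matrix H, return G
    with H @ G = 0 (mod 2), following QHP = [I_m A], G = P [A; I_k]."""
    Hp = H.copy() % 2
    m, n = Hp.shape
    perm = list(range(n))                    # records the column swaps (P)
    for i in range(m):
        # find a pivot in the submatrix Hp[i:, i:]
        rows, cols = np.nonzero(Hp[i:, i:])
        if rows.size == 0:
            raise ValueError("H does not have full rank")
        r, c = i + rows[0], i + cols[0]
        Hp[[i, r], :] = Hp[[r, i], :]        # row swap (part of Q)
        Hp[:, [i, c]] = Hp[:, [c, i]]        # column swap (part of P)
        perm[i], perm[c] = perm[c], perm[i]
        # clear the other ones in column i by row additions mod 2
        for rr in np.flatnonzero(Hp[:, i]):
            if rr != i:
                Hp[rr, :] = (Hp[rr, :] + Hp[i, :]) % 2
    k = n - m
    v = np.vstack([Hp[:, m:], np.eye(k, dtype=int)])   # [A; I_k]
    G = np.zeros((n, k), dtype=int)
    G[perm, :] = v                           # apply the permutation P
    return G

H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1]])
G = generator_matrix(H)
print((H @ G) % 2)   # the zero matrix, so the columns of G are codewords
```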

In Matlab the transmission step is now very easy; we just use the assignment y = tildex + sigma*randn(n,1) to compute the vector of received channel symbols y ∈ R^n.

3.1 Computing LLRs

The log-likelihood ratios for each bit are given by

u_i = log( p_{Y_i|X_i}(y_i | 0) / p_{Y_i|X_i}(y_i | 1) ),   (4)

where p_{Y_i|X_i} is the density of the conditional probability (x_i, y_i) ↦ P(Y_i ≤ y_i | X_i = x_i). To compute u_i, we actually have to know σ, or at least make a guess about σ. The guessing step, which we will omit here, is called estimation. For simplicity we assume that the receiver knows σ.
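The transmission step in Python/NumPy might look like this (a sketch; the concrete codeword c, the value of sigma, and the seed are illustrative assumptions, with the BPSK convention 0 ↦ +1, 1 ↦ −1 used by the report):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.8
c = np.array([0, 1, 1, 0, 1])       # a codeword, bits in {0, 1}
x_tilde = 1 - 2 * c                 # BPSK: 0 -> +1, 1 -> -1
y = x_tilde + sigma * rng.standard_normal(c.size)
print(y)                            # the received channel symbols
```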

Substituting the density formulas for the N(1, σ²) and N(−1, σ²) distributions into (4) we obtain

u_i = 4 y_i / (2σ²).   (5)

So at this stage we can encode a k-bit message to an n-bit codeword, transmit it through a noisy channel (using BPSK) and compute a-priori log-likelihood ratios.

4 The Belief Propagation (BP) algorithm

A detailed description of BP, and motivation why this algorithm should do what it does, can be found in [4].
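As a quick numerical sanity check, the a-priori LLR formula (5) agrees with the log-density ratio in (4); a Python sketch (the normalising constants of the two Gaussian densities cancel, so they are omitted):

```python
import numpy as np

def llr_from_densities(y, sigma):
    """u = log( p(y | bit 0) / p(y | bit 1) ) for BPSK over AWGN,
    i.e. densities of N(+1, sigma^2) and N(-1, sigma^2)."""
    log_p0 = -(y - 1.0) ** 2 / (2 * sigma ** 2)
    log_p1 = -(y + 1.0) ** 2 / (2 * sigma ** 2)
    return log_p0 - log_p1           # normalising constants cancel

y = np.array([-1.3, -0.2, 0.4, 2.1])
sigma = 0.7
u = llr_from_densities(y, sigma)
print(np.allclose(u, 4 * y / (2 * sigma ** 2)))  # True
```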

The implementation presented here is based on the article [6], which also contains a rather condensed introduction to iterative decoding using belief propagation. Listing 3 sets up the matrices B and P as well as the structure needed for the operator S, which is given in Listing 4. Listing 5 implements the BP algorithm as a dynamical system. The function iterate_BP(T,u) takes a number of iterations T to perform and a vector of input LLRs u ∈ R^n as arguments. The output is a real n × (T + 1) matrix containing the trajectories of the output LLRs.
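For comparison, plain sum-product decoding on the Tanner graph of H, which is what the dynamical system mimics, can be sketched in Python (this is a generic tanh-rule implementation, not the authors' Matlab code; bp_decode, the example inputs, and the clipping constant are choices made here):

```python
import numpy as np

def bp_decode(H, u, T):
    """Sum-product decoding on the Tanner graph of H (tanh rule).
    u: a-priori LLRs, T: number of iterations. Returns an
    n-by-(T+1) array of output LLRs; column 0 is u itself."""
    m, n = H.shape
    M = np.tile(u, (m, 1)) * H           # variable-to-check messages
    out = np.zeros((n, T + 1))
    out[:, 0] = u
    for t in range(1, T + 1):
        # check-to-variable messages: 2*atanh of the product of
        # tanh(msg/2) over all *other* edges of the check
        E = np.zeros((m, n))
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                prod = np.prod(np.tanh(M[i, idx[idx != j]] / 2))
                E[i, j] = 2 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # output LLRs and next round of variable-to-check messages
        for j in range(n):
            idx = np.flatnonzero(H[:, j])
            out[j, t] = u[j] + E[idx, j].sum()
            for i in idx:
                M[i, j] = out[j, t] - E[i, j]
    return out

# a length-3 repetition code: checks x1 + x2 = 0 and x2 + x3 = 0
H = np.array([[1, 1, 0],
              [0, 1, 1]])
u = np.array([2.0, -0.5, 1.0])          # the middle bit looks flipped
out = bp_decode(H, u, 3)
print(out[:, 1])                         # [1.5 2.5 0.5]
```

After a single iteration all three output LLRs are positive, i.e., the decoder already favours the all-zeros codeword despite the negative a-priori LLR of the middle bit.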

Listing 3: Initialization for the main program: compute the matrices and the structure for the operator S, which is here also encoded via a matrix.

function H2DS(H)
% H2DS(H)
%
% generate the matrices for the dynamical system associated to H,
% which are stored as global variables, as well as m, n and q, the
% dimensions of H and the number of edges
global B P S m n q
[m,n] = size(H);
q = nnz(H);   % the amount of nonzero elements
% allocate sparse matrices of the needed size
P = spalloc(q,q,(sum(H,1)-1)*sum(H,1)');
S = spalloc(q,q,(sum(H,2)-1)'*sum(H,2));
% find the matrix P
k = 0;
for j=1:n,
    I = find(H(:,j));
    for x=1:length(I),
        for y=x+1:length(I),
            P(k+x,k+y) = 1; P(k+y,k+x) = 1;
        end
    end
    k = k + length(I);
end
% find S (structure for the nonlinearity S)
k = 0;
for i=1:m,
    J = find(H(i,:));
    for x=1:length(J),
        for y=x+1:length(J),
            S(k+x,k+y) = 1; S(k+y,k+x) = 1;
        end
    end
    k = k + length(J);
end
% compute the matrix B
...

Listing 5: A function to calculate output trajectories for the dynamical system mimicking BP, where u ∈ R^n is the vector of a-priori LLRs and T ∈ N is the number of iterations to make.

function y = iterate_BP(T,u)
% y = ITERATE_BP(T,u)
%
% This function implements the BP dynamical system; the output
% trajectory is returned for final time T and input u. The initial
% state is always zero.
...

5 Plotting output trajectories

To visualize the output of the function iterate_BP() from the previous section, we provide here a routine to generate a plot in the flavor of Figure 2 in [6], see Listing 6.

Listing 6: Plotting output trajectories in color or monochrome.

function plotBPoutput(y, mono, filename)
% PLOTBPOUTPUT(y, mono, filename) -- plot a given output trajectory.
% If mono is supplied (regardless of its value), plot in monochrome,
% otherwise in color. If in addition a filename is given, save the
% plot to file (in EPS format).

% Saving to a file implies that the plot will be monochrome
global n
T = size(y,2) - 1;
clf;
if nargin > 1
    plot(0:T, y, 'ko')   % monochrome
else
    plot(0:T, y)         % color
end
grid on;
axis([0 T min(min(y))-.5 max(max(y))+.5])
LEGEND = [];
for k=1:n,
    LEGEND = [LEGEND; strcat('output ', num2str(k))];
end
legend(LEGEND)
xlabel('time');
ylabel('LLR');
if nargin==3,
    if isstr(filename), saveas(gcf, filename, 'eps'); end
end

6 More parity-check matrices

In this section we first introduce so-called regular parity-check matrices. These are a special case of parity-check matrices with a prescribed degree distribution pair. By the degree distribution we actually refer to the bipartite undirected graph defined by a parity-check matrix, the so-called Tanner or factor graph, cf. [4].
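The degree distribution of the Tanner graph of a given H, i.e., the fraction of edges attached to variable and check nodes of each degree, can be read off directly from the row and column weights; a Python sketch (function and variable names are chosen here):

```python
import numpy as np

def degree_distribution_pair(H):
    """Edge-perspective degree fractions of the Tanner graph of H:
    lam[i] = fraction of edges incident to a variable node of degree i,
    rho[i] = fraction of edges incident to a check node of degree i."""
    q = H.sum()                          # number of edges
    var_deg = H.sum(axis=0)              # column weights
    chk_deg = H.sum(axis=1)              # row weights
    lam = {int(d): int(var_deg[var_deg == d].sum()) / q
           for d in np.unique(var_deg)}
    rho = {int(d): int(chk_deg[chk_deg == d].sum()) / q
           for d in np.unique(chk_deg)}
    return lam, rho

H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
lam, rho = degree_distribution_pair(H)
# degree-1 variables carry 2/6 of the edges, degree-2 variables 4/6;
# every check node has degree 3
print(lam, rho)
```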

In fact, it is mostly the degree distribution pair that determines how the majority of the possible choices of parity-check matrices for that pair and a given code length perform [4, p. 94], at least for very large block lengths. One distinguishes the factor or check node distribution ρ and the variable or bit node distribution λ. Together these form a degree distribution pair (λ, ρ), given by the polynomials

λ(x) = Σ_i λ_i x^{i−1},   ρ(x) = Σ_i ρ_i x^{i−1},

cf. [4, p. 79], where λ_i is the fraction of edges that connect to a variable node of degree i and ρ_i is the fraction of edges that connect to a check node of degree i.

6.1 Regular parity-check matrices

Now for a given pair of positive integers (l, r), a regular (l, r)-code or a regular