BCH and Reed-Solomon Codes: Designer Cyclic Codes

6.3 Decoding BCH and RS Codes: The General Outline

There are many algorithms which have been developed for decoding BCH or RS codes. In this chapter we introduce a general approach. In Chapter 7 we present other approaches which follow a different outline. The algebraic decoding of BCH or RS codes has the following general steps:

1. Computation of the syndrome.
2. Determination of an error locator polynomial, whose roots provide an indication of where the errors are. There are several different ways of finding the locator polynomial. These methods include Peterson's algorithm for BCH codes, the Berlekamp-Massey algorithm for BCH codes, the Peterson-Gorenstein-Zierler algorithm for RS codes, the Berlekamp-Massey algorithm for RS codes, and the Euclidean algorithm. In addition, there are techniques based upon Galois-field Fourier transforms.
3. Finding the roots of the error locator polynomial. This is usually done using the Chien search, which is an exhaustive search over all the elements in the field.
4. For RS codes or nonbinary BCH codes, the error values must also be determined. This is typically accomplished using Forney's algorithm.

Throughout this chapter (unless otherwise noted) we assume narrow-sense BCH or RS codes, that is, b = 1.

6.3.1 Computation of the Syndrome

Since g(α) = g(α^2) = ··· = g(α^{2t}) = 0, it follows that a codeword c = (c_0, ..., c_{n−1}) with polynomial c(x) = c_0 + c_1x + ··· + c_{n−1}x^{n−1} has

  c(α) = c(α^2) = ··· = c(α^{2t}) = 0.

For a received polynomial r(x) = c(x) + e(x) we have

  S_j = r(α^j) = e(α^j) = Σ_{i=0}^{n−1} e_i α^{ij},   j = 1, 2, ..., 2t.

The values S_1, S_2, ..., S_{2t} are called the syndromes of the received data. Suppose that r has ν errors in it which are at locations i_1, i_2, ..., i_ν, with corresponding error values e_{i_l} ≠ 0 in these locations. Then

  S_j = Σ_{l=1}^{ν} e_{i_l} (α^j)^{i_l} = Σ_{l=1}^{ν} e_{i_l} (α^{i_l})^j.

Let

  X_l = α^{i_l}.

Then we can write

  S_j = Σ_{l=1}^{ν} e_{i_l} X_l^j,   j = 1, 2, ..., 2t.   (6.3)

For binary codes we have e_{i_l} = 1 (if there is a nonzero error, it must be equal to 1).
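As a concrete illustration of the syndrome computation S_j = r(α^j), here is a small sketch (our own, not from the text) over GF(16), with the field represented by log/antilog tables built from the primitive polynomial x^4 + x + 1 (an assumed representation):

```python
# Sketch: syndrome computation S_j = r(alpha^j), j = 1..2t, for a
# narrow-sense code over GF(16). Field tables and the example received
# word are illustrative assumptions, not taken from the text.
EXP = [0] * 30
v = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = v    # alpha^i, with wraparound copies
    v <<= 1
    if v & 0x10:
        v ^= 0x13               # reduce modulo x^4 + x + 1
LOG = {EXP[i]: i for i in range(15)}

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(r, two_t):
    """S_j = sum_i r_i (alpha^j)^i for j = 1..2t (r low degree first)."""
    S = []
    for j in range(1, two_t + 1):
        s = 0
        for i, ri in enumerate(r):
            if ri:
                s ^= gf_mul(ri, EXP[(i * j) % 15])
        S.append(s)
    return S

# A single error e_i = 1 at position i = 4 gives S_j = alpha^{4j}:
r = [0] * 15
r[4] = 1
print(syndromes(r, 4))   # -> [EXP[4], EXP[8], EXP[12], EXP[1]]
```

Because r(α^j) = e(α^j), the all-zero codeword plus a single error at position 4 yields syndromes α^4, α^8, α^12, α^16 = α, exactly the powers (α^4)^j.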
For the moment we restrict our attention to binary (BCH) codes. Then we have

  S_j = Σ_{l=1}^{ν} X_l^j.   (6.4)

If we know X_l, then we know the location of the error. For example, suppose we know that X_1 = α^4. This means, by the definition of X_l, that i_1 = 4; that is, the error is in the received digit r_4. We thus call the X_l the error locators. The next stage in the decoding problem is to determine the error locators X_l given the syndromes S_j.

6.3.2 The Error Locator Polynomial

From (6.4) we obtain the following equations:

  S_1 = X_1 + X_2 + ··· + X_ν
  S_2 = X_1^2 + X_2^2 + ··· + X_ν^2   (6.5)
  ⋮
  S_{2t} = X_1^{2t} + X_2^{2t} + ··· + X_ν^{2t}.

These equations are said to be power-sum symmetric functions. This gives us 2t equations in the ν unknown error locators. In principle this set of nonlinear equations could be solved by an exhaustive search, but this would be computationally unattractive. Rather than attempting to solve these nonlinear equations directly, a new polynomial is introduced, the error locator polynomial, which casts the problem in a different, and more tractable, setting. The error locator polynomial is defined as

  Λ(x) = Π_{l=1}^{ν} (1 − X_l x) = Λ_ν x^ν + Λ_{ν−1} x^{ν−1} + ··· + Λ_1 x + Λ_0,   (6.6)

where Λ_0 = 1. By this definition, if x = X_l^{−1} then Λ(x) = 0; that is, the roots of the error locator polynomial are at the reciprocals (in the field arithmetic) of the error locators.

Example 6.11 Suppose in GF(16) we find that x = α^4 is a root of an error locator polynomial Λ(x). Then the error locator is (α^4)^{−1} = α^{11}, indicating that there is an error in r_{11}. □

6.3.3 Chien Search

Assume for the moment that we actually have the error locator polynomial. (Finding the error locator polynomial is discussed below.) The next step is to find the roots of the error locator polynomial. The field of interest is GF(q^m). Being a finite field, we can examine every element of the field to determine if it is a root.
There exist other ways of factoring polynomials over finite fields (see, e.g., [25, 360]), but for the fields usually used for error correction codes and the number of roots involved, the Chien search may be the most efficient. Suppose, for example, that ν = 3 and the error locator polynomial is

  Λ(x) = Λ_0 + Λ_1x + Λ_2x^2 + Λ_3x^3 = 1 + Λ_1x + Λ_2x^2 + Λ_3x^3.

We evaluate Λ(x) at each nonzero element in the field in succession: x = 1, x = α, x = α^2, ..., x = α^{q^m−2}. This gives us the following:

  Λ(1) = 1 + Λ_1(1) + Λ_2(1)^2 + Λ_3(1)^3
  Λ(α) = 1 + Λ_1(α) + Λ_2(α)^2 + Λ_3(α)^3
  Λ(α^2) = 1 + Λ_1(α^2) + Λ_2(α^2)^2 + Λ_3(α^2)^3
  ⋮
  Λ(α^{q^m−2}) = 1 + Λ_1(α^{q^m−2}) + Λ_2(α^{q^m−2})^2 + Λ_3(α^{q^m−2})^3.

The computations in this sequence can be efficiently embodied in the hardware depicted in Figure 6.1. A set of ν registers is loaded initially with the coefficients of the error locator polynomial, Λ_1, Λ_2, ..., Λ_ν. The initial output is the sum

  A = Σ_{i=1}^{ν} Λ_i = Λ(1) − 1.

If A = −1 (which in a field of characteristic 2 is A = 1), then an error has been located (since then Λ(1) = 0). At the next stage, each register is multiplied by α^j, j = 1, 2, ..., ν, so the register contents are Λ_1α, Λ_2α^2, ..., Λ_να^ν. The output is the sum

  A = Σ_{j=1}^{ν} Λ_jα^j = Λ(α) − 1.

The registers are multiplied again by successive powers of α, resulting in evaluation at α^2. This procedure continues until Λ(x) has been evaluated at all nonzero elements of the field.

[Figure 6.1: Chien search algorithm.]

If the roots are distinct and all lie in the appropriate field, then we use these to determine the error locations. If they are not distinct or lie in the wrong field, then the received word is not within distance t of any codeword. (This condition can be observed if the error locator polynomial of degree ν does not have ν roots in the field that the operations take place in; the remaining roots are either repeated or exist in an extension of this field.) The corresponding error pattern is said to be an uncorrectable error pattern. An uncorrectable error pattern results in a decoder failure.
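The register update of Figure 6.1 can be sketched in software as follows (our own illustrative code over GF(16) with primitive polynomial x^4 + x + 1; the two-error locator polynomial is a hypothetical example, not one from the text):

```python
# Sketch of the register-based Chien search of Figure 6.1 over GF(16).
# Field representation and the example locator polynomial are assumptions.
EXP = [0] * 30
v = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = v
    v <<= 1
    if v & 0x10:
        v ^= 0x13               # reduce modulo x^4 + x + 1
LOG = {EXP[i]: i for i in range(15)}

def chien_search(lam):
    """lam = [1, Lambda_1, ..., Lambda_v]; return the error positions."""
    reg = lam[1:]               # registers initially hold Lambda_i
    positions = []
    for j in range(15):         # evaluate Lambda at alpha^0, ..., alpha^14
        if j > 0:               # register i is multiplied by alpha^i
            reg = [EXP[LOG[r] + i] if r else 0
                   for i, r in enumerate(reg, start=1)]
        A = 0
        for r in reg:
            A ^= r              # A = Lambda(alpha^j) - 1
        if A == 1:              # characteristic 2: Lambda(alpha^j) = 0
            positions.append((15 - j) % 15)   # locator alpha^{-j}
    return sorted(positions)

# Lambda(x) = (1 + alpha^3 x)(1 + alpha^7 x) = 1 + alpha^4 x + alpha^10 x^2
print(chien_search([1, EXP[4], EXP[10]]))     # -> [3, 7]
```

A root at x = α^j corresponds to the locator α^{−j} = α^{15−j}, so the two hypothetical errors are reported at positions 3 and 7.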
6.4 Finding the Error Locator Polynomial

Let us return to the question of finding the error locator polynomial using the syndromes. Let us examine the structure of the error locator polynomial by expanding (6.6) for the case ν = 3:

  Λ(x) = 1 − x(X_1 + X_2 + X_3) + x^2(X_1X_2 + X_1X_3 + X_2X_3) − x^3X_1X_2X_3
       = Λ_0 + xΛ_1 + x^2Λ_2 + x^3Λ_3,

so that

  Λ_0 = 1
  Λ_1 = −(X_1 + X_2 + X_3)
  Λ_2 = X_1X_2 + X_1X_3 + X_2X_3
  Λ_3 = −X_1X_2X_3.

In general, for an error locator polynomial of degree ν we find that

  Λ_0 = 1
  −Λ_1 = X_1 + X_2 + ··· + X_ν
  Λ_2 = Σ_{i<j} X_iX_j = X_1X_2 + X_1X_3 + ··· + X_1X_ν + ··· + X_{ν−1}X_ν
  −Λ_3 = Σ_{i<j<k} X_iX_jX_k = X_1X_2X_3 + X_1X_2X_4 + ···   (6.7)
  ⋮
  (−1)^ν Λ_ν = X_1X_2···X_ν.

That is, the coefficient Λ_i of the error locator polynomial is (up to sign) the sum of the products of all combinations of the error locators taken i at a time. Equations of the form (6.7) are referred to as the elementary symmetric functions of the error locators (so called because if the error locators X_i are permuted, the same values are computed).

The power-sum symmetric functions of (6.5) provide a nonlinear relationship between the syndromes and the error locators. The elementary symmetric functions provide a nonlinear relationship between the coefficients of the error locator polynomial and the error locators. The key observation is that there is a linear relationship between the syndromes and the coefficients of the error locator polynomial. This relationship is described by the Newton identities, which apply over any field.

Theorem 6.11 The syndromes of (6.5) and the coefficients of the error locator polynomial are related by

  S_k + Λ_1S_{k−1} + ··· + Λ_{k−1}S_1 + kΛ_k = 0,   1 ≤ k ≤ ν,
  S_k + Λ_1S_{k−1} + ··· + Λ_νS_{k−ν} = 0,          k > ν.   (6.9)

In particular, for ν < k ≤ 2t, there is a linear feedback shift register relationship between the syndromes and the coefficients of the error locator polynomial,

  S_k = − Σ_{j=1}^{ν} Λ_jS_{k−j}.   (6.10)

The theorem is proved in Appendix 6.A.
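The linear relationship (6.10) is easy to check numerically. The sketch below is our own (the GF(16) representation via the primitive polynomial x^4 + x + 1 and the two hypothetical error locators are assumptions): it builds the power-sum syndromes of a binary two-error pattern and verifies the shift-register recursion. In characteristic 2 the minus signs disappear.

```python
# Check of S_k = Lambda_1 S_{k-1} + Lambda_2 S_{k-2} (characteristic 2)
# over GF(16); field tables and locators are illustrative choices.
EXP = [0] * 30
v = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = v
    v <<= 1
    if v & 0x10:
        v ^= 0x13
LOG = {EXP[i]: i for i in range(15)}
mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

X1, X2 = EXP[3], EXP[7]                 # two error locators (binary code)
# Power-sum syndromes S_k = X1^k + X2^k for k = 1..4:
S = [EXP[(3 * k) % 15] ^ EXP[(7 * k) % 15] for k in range(1, 5)]
L1, L2 = X1 ^ X2, mul(X1, X2)           # Lambda_1, Lambda_2 (v = 2)
for k in (3, 4):                        # indices k > v
    assert S[k - 1] == mul(L1, S[k - 2]) ^ mul(L2, S[k - 3])
print("Newton/LFSR relation verified")
```

The assertion holds because Λ_1 = X_1 + X_2 and Λ_2 = X_1X_2 are exactly the elementary symmetric functions of the locators.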
Equation (6.10) can be expressed in matrix form:

  [ S_1  S_2  ···  S_ν     ] [ Λ_ν     ]     [ S_{ν+1} ]
  [ S_2  S_3  ···  S_{ν+1} ] [ Λ_{ν−1} ]     [ S_{ν+2} ]
  [ ⋮                      ] [ ⋮       ] = − [ ⋮       ]
  [ S_ν  S_{ν+1} ··· S_{2ν−1} ] [ Λ_1  ]     [ S_{2ν}  ]

The ν × ν matrix, which we denote M_ν, is a Hankel matrix, constant along the anti-diagonals. The number of errors ν is not known in advance, so it must be determined. The Peterson-Gorenstein-Zierler decoder operates as follows:

1. Set ν = t.
2. Form M_ν and compute the determinant det(M_ν) to determine if M_ν is invertible. If it is not invertible, set ν ← ν − 1 and repeat this step.
3. If M_ν is invertible, solve for the coefficients Λ_1, Λ_2, ..., Λ_ν.

6.4.1 Simplifications for Binary Codes and Peterson's Algorithm

For binary codes, Newton's identities are subject to further simplifications. In a field of characteristic 2, the multiple nf equals 0 if n is even and f if n is odd, which simplifies the terms kΛ_k in (6.9). Furthermore, we have S_{2j} = S_j^2, since by (6.4) and Theorem 5.15

  S_{2j} = Σ_l X_l^{2j} = (Σ_l X_l^j)^2 = S_j^2.

We can thus write Newton's identities (6.9) as

  S_1 + Λ_1 = 0
  S_3 + Λ_1S_2 + Λ_2S_1 + Λ_3 = 0
  S_5 + Λ_1S_4 + Λ_2S_3 + Λ_3S_2 + Λ_4S_1 + Λ_5 = 0
  ⋮
  S_{2t−1} + Λ_1S_{2t−2} + ··· + Λ_tS_{t−1} = 0,

which can be expressed in the matrix equation

  [ 1        0        0        0       ···  0       ] [ Λ_1 ]     [ S_1      ]
  [ S_2      S_1      1        0       ···  0       ] [ Λ_2 ]     [ S_3      ]
  [ S_4      S_3      S_2      S_1     ···  0       ] [ Λ_3 ] = − [ S_5      ]   (6.11)
  [ ⋮                                               ] [ ⋮   ]     [ ⋮        ]
  [ S_{2t−2} S_{2t−3} S_{2t−4} S_{2t−5} ··· S_{t−1} ] [ Λ_t ]     [ S_{2t−1} ]

or AΛ = −S. If there are in fact t errors, the matrix is invertible, as we can determine by computing its determinant. If it is not invertible, remove two rows and columns, then try again. Once Λ is found, we find its roots. This matrix-based approach to solving for the error locator polynomial is called Peterson's algorithm for decoding binary BCH codes.

For small numbers of errors, we can provide explicit formulas for the coefficients of Λ(x), which may be more efficient than the more generalized solutions suggested below [238].

1-error correction:
  Λ_1 = S_1.

2-error correction:
  Λ_1 = S_1,   Λ_2 = (S_3 + S_1^3)/S_1.

3-error correction:
  Λ_1 = S_1,   Λ_2 = (S_1^2S_3 + S_5)/(S_1^3 + S_3),   Λ_3 = (S_1^3 + S_3) + S_1Λ_2.

4-error correction:
  Λ_1 = S_1,
  Λ_2 = (S_1(S_7 + S_1^7) + S_3(S_1^5 + S_5)) / (S_3(S_1^3 + S_3) + S_1(S_1^5 + S_5)),
  Λ_3 = (S_1^3 + S_3) + S_1Λ_2,
  Λ_4 = ((S_5 + S_1^2S_3) + (S_1^3 + S_3)Λ_2)/S_1.

5-error correction:
  Λ_1 = S_1, with Λ_2 and Λ_4 given by considerably longer ratios of syndrome expressions of the same kind, and
  Λ_3 = (S_1^3 + S_3) + S_1Λ_2,
  Λ_5 = S_5 + S_1^2S_3 + S_1S_4 + Λ_2(S_1^3 + S_3).

For large numbers of errors, Peterson's algorithm is quite complex. Computing the sequence of determinants to find the number of errors is costly; so is solving the system of equations once the number of errors is determined. We therefore look for more efficient techniques.

Example 6.12 Consider the (31, 21) 2-error correcting code introduced in Example 6.2, with generator

  g(x) = x^{10} + x^9 + x^8 + x^6 + x^5 + x^3 + 1

having roots at α, α^2, α^3 and α^4. Suppose a codeword c(x) is transmitted and the received polynomial is r(x) = c(x) + e(x). The syndromes are

  S_1 = r(α) = α^{17},   S_2 = r(α^2) = α^3,   S_3 = r(α^3) = 1,   S_4 = r(α^4) = α^6.

Using the results above we find

  Λ_1 = S_1 = α^{17},   Λ_2 = (S_3 + S_1^3)/S_1 = α^{22},

so that Λ(x) = 1 + α^{17}x + α^{22}x^2. The roots of this polynomial (found, e.g., using the Chien search) are at x = α^{13} and x = α^{27}. Specifically, we could write

  Λ(x) = α^{22}(x + α^{13})(x + α^{27}).

The reciprocals of the roots are α^{18} and α^4, so that the errors in transmission occurred at locations 4 and 18:

  e(x) = x^4 + x^{18}.

It can be seen that r(x) + e(x) is in fact equal to the transmitted codeword. □

6.4.2 Berlekamp-Massey Algorithm

While Peterson's method involves straightforward linear algebra, it is computationally complex in general. Starting with the matrix A in (6.11), it is examined to see if it is singular. This involves either attempting to solve the equations (e.g., by Gaussian elimination or an equivalent), or computing the determinant to see if a solution can be found. If A is singular, then the last two rows and columns are dropped to form a new A matrix. Then the attempted solution must be recomputed, starting over with the new A matrix.
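The two-error formulas can be exercised numerically. The sketch below is our own code (the GF(32) representation via the primitive polynomial x^5 + x^2 + 1 is an assumption about the field); with errors at positions 4 and 18 it reproduces Λ_1 = α^17 and Λ_2 = α^22, as in Example 6.12.

```python
# Peterson's 2-error formulas Lambda_1 = S_1, Lambda_2 = (S_3 + S_1^3)/S_1
# over GF(32), alpha a root of x^5 + x^2 + 1 (assumed representation).
EXP = [0] * 62
v = 1
for i in range(31):
    EXP[i] = EXP[i + 31] = v
    v <<= 1
    if v & 0x20:
        v ^= 0x25               # reduce modulo x^5 + x^2 + 1
LOG = {EXP[i]: i for i in range(31)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 31]

# Errors at positions 4 and 18: S_j = alpha^{4j} + alpha^{18j}, j = 1, 2, 3
S = [EXP[(4 * j) % 31] ^ EXP[(18 * j) % 31] for j in (1, 2, 3)]
S1, S3 = S[0], S[2]
lam1 = S1                                    # Lambda_1
lam2 = div(S3 ^ mul(S1, mul(S1, S1)), S1)    # Lambda_2
print(LOG[lam1], LOG[lam2])                  # -> 17 22
```

The resulting Λ(x) = 1 + α^17 x + α^22 x^2 is exactly the locator polynomial whose roots the Chien search then inverts to the locations 4 and 18.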
The Berlekamp-Massey algorithm takes a different approach. Starting with a small problem, it works up to increasingly longer problems until it obtains an overall solution. However, at each stage it is able to re-use information it has already learned. Whereas the computational complexity of the Peterson method is O(ν^3), the computational complexity of the Berlekamp-Massey algorithm is O(ν^2).

We have observed from the Newton identity (6.10) that

  S_j = − Σ_{i=1}^{ν} Λ_iS_{j−i},   j = ν+1, ν+2, ..., 2t.   (6.12)

This formula describes the output of a linear feedback shift register (LFSR) with coefficients Λ_1, Λ_2, ..., Λ_ν. In order for this formula to work, we must find the Λ_i coefficients in such a way that the LFSR generates the known sequence of syndromes S_1, S_2, ..., S_{2t}. Furthermore, by the maximum likelihood principle, the number of errors ν determined must be the smallest that is consistent with the observed syndromes. We therefore want to determine the shortest such LFSR.

In the Berlekamp-Massey algorithm, we build the LFSR that produces the entire sequence {S_1, S_2, ..., S_{2t}} by successively modifying an existing LFSR, if necessary, to produce increasingly longer sequences. We start with an LFSR that could produce S_1. We determine if that LFSR could also produce the sequence {S_1, S_2}; if it can, then no modifications are necessary. If the sequence cannot be produced using the current LFSR configuration, we determine a new LFSR that can produce the longer sequence. Proceeding inductively in this way, we start from an LFSR capable of producing the sequence {S_1, S_2, ..., S_{k−1}} and modify it, if necessary, so that it can also produce the sequence {S_1, S_2, ..., S_k}. At each stage, the modifications to the LFSR are accomplished so that the LFSR is the shortest possible.
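For concreteness, the inductive procedure just described can be rendered in a few lines for binary sequences, where field subtraction is XOR and the correction scale factor is always 1. The names and structure below are our own sketch, not the book's pseudocode:

```python
# Sketch: Berlekamp-Massey over GF(2). Finds the shortest LFSR whose
# connection polynomial c(x) = 1 + c_1 x + ... + c_L x^L generates the
# given bit sequence S_1, S_2, ..., S_N.
def berlekamp_massey_gf2(S):
    c, p = [1], [1]      # current / previous connection polynomials
    L, l = 0, 1          # current LFSR length; shift since last change
    for k in range(1, len(S) + 1):
        # discrepancy between S_k and the LFSR's prediction of it
        d = S[k - 1]
        for i in range(1, L + 1):
            d ^= c[i] & S[k - 1 - i]
        if d == 0:                  # LFSR already produces S_k
            l += 1
        elif 2 * L >= k:            # update without a length change
            c += [0] * (l + len(p) - len(c))
            for i, pi in enumerate(p):
                c[i + l] ^= pi      # c(x) <- c(x) + x^l p(x)
            l += 1
        else:                       # update with a length change
            t = list(c)
            c += [0] * (l + len(p) - len(c))
            for i, pi in enumerate(p):
                c[i + l] ^= pi
            L, p, l = k - L, t, 1
    return c, L

print(berlekamp_massey_gf2([1, 1, 1, 0, 1, 0, 0]))   # -> ([1, 1, 0, 1], 3)
```

On the bit sequence 1, 1, 1, 0, 1, 0, 0 the shortest generating LFSR has length 3 with connection polynomial 1 + x + x^3, i.e., S_j = S_{j−1} + S_{j−3} for j > 3.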
By this means, after completion of the algorithm an LFSR has been found that is able to produce {S_1, S_2, ..., S_{2t}}, and its coefficients correspond to the error locator polynomial Λ(x) of smallest degree.

Since we build up the LFSR using information from prior computations, we need a notation to represent the Λ(x) used at different stages of the algorithm. Let L_k denote the length of the LFSR produced at stage k of the algorithm. Let

  Λ^[k](x) = 1 + Λ_1^[k]x + Λ_2^[k]x^2 + ··· + Λ_{L_k}^[k]x^{L_k}

be the connection polynomial at stage k, indicating the connections for the LFSR capable of producing the output sequence {S_1, S_2, ..., S_k}. That is,

  S_j = − Σ_{i=1}^{L_k} Λ_i^[k]S_{j−i},   j = L_k + 1, ..., k.

Note: It is important to realize that some of the coefficients in Λ^[k](x) may be zero, so that L_k may be different from the degree of Λ^[k](x). In realizations which use polynomial arithmetic, it is important to keep in mind what the length is as well as the degree.

At some intermediate step, suppose we have a connection polynomial Λ^[k−1](x) of length L_{k−1} that produces {S_1, S_2, ..., S_{k−1}} for some k − 1 < 2t. We check if this connection polynomial also produces S_k by computing the output

  Ŝ_k = − Σ_{i=1}^{L_{k−1}} Λ_i^[k−1]S_{k−i}.   (6.13)

If Ŝ_k is equal to S_k, then there is no need to update the LFSR, so Λ^[k](x) = Λ^[k−1](x) and L_k = L_{k−1}. Otherwise, there is some nonzero discrepancy associated with Λ^[k−1](x),

  d_k = S_k − Ŝ_k = S_k + Σ_{i=1}^{L_{k−1}} Λ_i^[k−1]S_{k−i} = Σ_{i=0}^{L_{k−1}} Λ_i^[k−1]S_{k−i}.   (6.14)

In this case, we update the connection polynomial using the formula

  Λ^[k](x) = Λ^[k−1](x) + Ax^l Λ^[m−1](x),   (6.15)

where A is some element in the field, l is an integer, and Λ^[m−1](x) is one of the prior connection polynomials produced by our process, associated with nonzero discrepancy d_m. (Initialization of this inductive process is discussed in the proof of Theorem 6.13.) Using this new connection polynomial, we compute the new discrepancy, denoted by d_k′, as

  d_k′ = Σ_{i=0}^{L_k} Λ_i^[k]S_{k−i} = Σ_{i=0}^{L_{k−1}} Λ_i^[k−1]S_{k−i} + A Σ_{i=0}^{L_{m−1}} Λ_i^[m−1]S_{k−l−i}.   (6.16)

Now let l = k − m. Then, by comparison with the definition of the discrepancy in (6.14), the second summation gives

  Σ_{i=0}^{L_{m−1}} Λ_i^[m−1]S_{m−i} = d_m.
Thus, if we choose A = −d_m^{−1}d_k, then the summation in (6.16) gives

  d_k′ = d_k − d_m^{−1}d_kd_m = 0.

So the new connection polynomial produces the sequence {S_1, S_2, ..., S_k} with no discrepancy.

6.4.3 Characterization of LFSR Length in Massey's Algorithm

The update in (6.15) is, in fact, the heart of Massey's algorithm. If all we need is an algorithm to find a connection polynomial, no further analysis is necessary. However, the problem was to find the shortest LFSR producing a given sequence. We have produced a means of finding an LFSR, but have no indication yet that it is the shortest. Establishing this requires some additional effort, in the form of two theorems.

Theorem 6.12 Suppose that an LFSR with connection polynomial Λ^[k−1](x) of length L_{k−1} produces the sequence {S_1, S_2, ..., S_{k−1}}, but not the sequence {S_1, S_2, ..., S_k}. Then any connection polynomial Λ^[k](x) that produces the latter sequence must have a length L_k satisfying

  L_k ≥ k − L_{k−1}.   (6.17)

Proof The theorem is only of practical interest if L_k < k − 1; otherwise it is trivial to produce the sequence. Let us suppose, then, contrary to the theorem, that L_k ≤ k − 1 − L_{k−1}. By hypothesis, the connection polynomial Λ^[k−1](x) satisfies

  S_j = − Σ_{i=1}^{L_{k−1}} Λ_i^[k−1]S_{j−i},   j = L_{k−1}+1, ..., k−1,  but not for j = k,   (6.18)

while the connection polynomial Λ^[k](x) produces the whole sequence:

  S_j = − Σ_{i=1}^{L_k} Λ_i^[k]S_{j−i},   j = L_k+1, ..., k.   (6.19)

Setting j = k in (6.19), we obtain

  S_k = − Σ_{j=1}^{L_k} Λ_j^[k]S_{k−j}.   (6.20)

In this summation the indices of S form the set {k − L_k, ..., k − 1}. By the contrary assumption, k − L_k ≥ L_{k−1} + 1, so these indices are a subset of the set of indices {L_{k−1}+1, ..., k−1} appearing in (6.18). Thus each S_{k−j} appearing on the right-hand side of (6.20) can be replaced by the summation expression from (6.18), and we can write, interchanging the order of summation,

  S_k = Σ_{j=1}^{L_k} Λ_j^[k] Σ_{i=1}^{L_{k−1}} Λ_i^[k−1]S_{k−j−i}.   (6.21)

Now setting j = k in (6.18), we obtain

  S_k ≠ − Σ_{i=1}^{L_{k−1}} Λ_i^[k−1]S_{k−i}.   (6.22)

In this summation the indices of S form the set {k − L_{k−1}, ..., k − 1}. By the contrary assumption, k − L_{k−1} ≥ L_k + 1, so the sequence of indices {k − L_{k−1}, ..., k − 1} is a subset of the range {L_k + 1, ..., k} of (6.19).
Thus we can replace each S_{k−i} in the summation of (6.22) with the expression from (6.19) to obtain

  S_k ≠ Σ_{i=1}^{L_{k−1}} Λ_i^[k−1] Σ_{j=1}^{L_k} Λ_j^[k]S_{k−i−j}.   (6.23)

Comparing (6.21) with (6.23), the double summations are the same, but the equality in the first case and the inequality in the second case indicate a contradiction. Hence, the assumption on the length of the LFSRs must have been incorrect. By this contradiction, we must have

  L_k ≥ k − L_{k−1}.

If we take this to be the case, the index ranges which gave rise to the substitutions leading to the contradiction do not occur. □

Since the shortest LFSR that produces the sequence {S_1, S_2, ..., S_k} must also produce the first part of that sequence, we must have L_k ≥ L_{k−1}. Combining this with the result of the theorem, we obtain

  L_k ≥ max(L_{k−1}, k − L_{k−1}).   (6.24)

We observe that the shift register cannot become shorter as more outputs are produced. We have seen how to update the LFSR to produce a longer sequence using (6.15), and have also seen that there is a lower bound on the length of the LFSR. We now show that this lower bound can be achieved with equality, thus providing the shortest LFSR which produces the desired sequence.

Theorem 6.13 In the update procedure, if Λ^[k](x) ≠ Λ^[k−1](x), then a new LFSR can be found whose length satisfies

  L_k = max(L_{k−1}, k − L_{k−1}).   (6.25)

Proof We do a proof by induction. To check when k = 1 (which also indicates how to get the algorithm started), take L_0 = 0 and Λ^[0](x) = 1. We find that

  d_1 = S_1.

If S_1 = 0, then no update is necessary. If S_1 ≠ 0, then we take Λ^[m−1](x) = Λ^[0](x) = 1, so that l = 1 − 0 = 1. Also, take d_m = 1. The updated polynomial is

  Λ^[1](x) = 1 − S_1x,

which has length L_1 satisfying

  L_1 = max(L_0, 1 − L_0) = 1.

In this case, (6.13) is vacuously true for the sequence consisting of the single point {S_1}.
Now let Λ^[m−1](x), m < k, denote the last connection polynomial before Λ^[k−1](x), with L_{m−1} < L_{k−1}, that can produce the sequence {S_1, S_2, ..., S_{m−1}} but not the sequence {S_1, S_2, ..., S_m}. Then L_m = L_{k−1}; hence, in light of the inductive hypothesis (6.25),

  L_{k−1} = L_m = m − L_{m−1},  or  L_{m−1} = m − L_{k−1}.   (6.26)

By the update formula (6.15) with l = k − m, we note that

  L_k = max(L_{k−1}, k − m + L_{m−1}).

Using L_{m−1} = m − L_{k−1} from (6.26), we find that

  L_k = max(L_{k−1}, k − L_{k−1}). □

In the update step, we observe that the new length is the same as the old length if L_{k−1} ≥ k − L_{k−1}, that is, if 2L_{k−1} ≥ k. In this case, the connection polynomial is updated, but there is no change in length.

The shift-register synthesis algorithm, known as Massey's algorithm, is presented first in pseudocode as Algorithm 6.1, where we use the notation

  c(x) = Λ^[k](x)

to indicate the "current" connection polynomial and

  p(x) = Λ^[m−1](x)

to indicate a "previous" connection polynomial. Also, N is the number of input symbols (N = 2t for many decoding problems).

Algorithm 6.1 Massey's Algorithm

Input: S_1, S_2, ..., S_N
Initialize:
  L = 0 (the current length of the LFSR)
  c(x) = 1 (the current connection polynomial)
  p(x) = 1 (the connection polynomial before the last length change)
  l = 1 (l = k − m, the amount of shift in the update)
  d_m = 1 (previous discrepancy)
for k = 1 to N
  d = S_k + Σ_{i=1}^{L} c_iS_{k−i}   (compute discrepancy)
  if (d = 0)   (no change in polynomial)
    l = l + 1
  else if (2L ≥ k) then   (no length change in update)
    c(x) = c(x) − d·d_m^{−1}x^l p(x)
    l = l + 1
  else   (update c with length change)
    t(x) = c(x)   (temporary storage)
    c(x) = c(x) − d·d_m^{−1}x^l p(x)
    L = k − L
    p(x) = t(x)
    d_m = d
    l = 1
  end
end

Example 6.13 For the sequence S = {1, 1, 1, 0, 1, 0, 0}, the feedback connection polynomial obtained by a call to massey is {1, 1, 0, 1}, which corresponds to the polynomial

  c(x) = 1 + x + x^3.

Thus the elements of S are related by S_j = S_{j−1} + S_{j−3} for j > 3. Details of the operation of the algorithm are presented in Table 6.5. □
Table 6.5: Evolution of the Berlekamp-Massey algorithm for the input sequence {1, 1, 1, 0, 1, 0, 0}

  k  S_k  d_k  c(x)         L  p(x)   l  d_m
  1  1    1    1 + x        1  1      1  1
  2  1    0    1 + x        1  1      2  1
  3  1    0    1 + x        1  1      3  1
  4  0    1    1 + x + x^3  3  1 + x  1  1
  5  1    0    1 + x + x^3  3  1 + x  2  1
  6  0    0    1 + x + x^3  3  1 + x  3  1
  7  0    0    1 + x + x^3  3  1 + x  4  1

Example 6.14 For the (31, 21) binary double-error correcting code with decoding in Example 6.12, let us employ the Berlekamp-Massey algorithm to find the error locator polynomial. Recall from that example that the syndromes are S_1 = α^{17}, S_2 = α^3, S_3 = 1, and S_4 = α^6. Running the Berlekamp-Massey algorithm over GF(32) results in the computations shown in Table 6.6. The final connection polynomial c(x) = 1 + α^{17}x + α^{22}x^2 is the error locator polynomial previously found using Peterson's algorithm. (In the current case, there are more computations using the Berlekamp-Massey algorithm, but for longer codes with more errors, the latter would be more efficient.) □

Table 6.6: Berlekamp-Massey algorithm for a double-error correcting code

  k  S_k    d_k    c(x)                     L  p(x)         l  d_m
  1  α^17   α^17   1 + α^17x                1  1            1  α^17
  2  α^3    0      1 + α^17x                1  1            2  α^17
  3  1      α^8    1 + α^17x + α^22x^2      2  1 + α^17x    1  α^8
  4  α^6    0      1 + α^17x + α^22x^2      2  1 + α^17x    2  α^8

6.4.4 Simplifications for Binary Codes

Consider again the Berlekamp-Massey algorithm computations for decoding a BCH code, as presented in Table 6.6. Note that d_k is 0 for every even k. This result holds in all cases for BCH codes:

Lemma 6.14 When the sequence of input symbols to the Berlekamp-Massey algorithm consists of syndromes from a binary BCH code, the discrepancy d_k is equal to 0 for all even k (when 1-based indexing is used).

As a result, there is never an update for these steps of the algorithm, so they can be merged into the next step. This cuts the complexity of the algorithm approximately in half. A restatement of the algorithm for BCH decoding is presented below.
Algorithm 6.2 Massey's Algorithm for Binary BCH Decoding

Input: S_1, S_2, ..., S_N, where N = 2t
Initialize:
  L = 0 (the current length of the LFSR)
  c(x) = 1 (the current connection polynomial)
  p(x) = 1 (the connection polynomial before the last length change)
  l = 1 (l = k − m, the amount of shift in the update)
  d_m = 1 (previous discrepancy)
for k = 1 to N in steps of 2
  d = S_k + Σ_{i=1}^{L} c_iS_{k−i}   (compute discrepancy)
  if (d = 0)   (no change in polynomial)
    l = l + 1
  else if (2L ≥ k) then   (no length change in update)
    c(x) = c(x) − d·d_m^{−1}x^l p(x)
    l = l + 1
  else   (update c with length change)
    t(x) = c(x)   (temporary storage)
    c(x) = c(x) − d·d_m^{−1}x^l p(x)
    L = k − L
    p(x) = t(x)
    d_m = d
    l = 1
  end
  l = l + 1   (accounts for the even values of k skipped)
end

Example 6.15 Returning to the (31, 21) code from the previous example, if we call the BCH-modified Berlekamp-Massey algorithm with the syndrome sequence S_1 = α^{17}, S_2 = α^3, S_3 = 1, and S_4 = α^6, we obtain the results in Table 6.7. Only two steps of the algorithm are necessary, and the same error locator polynomial is obtained as before. □

Table 6.7: Berlekamp-Massey algorithm for a double-error correcting code: simplifications for the binary code

  k  S_k    d_k    c(x)                  L  p(x)        l  d_m
  1  α^17   α^17   1 + α^17x             1  1           2  α^17
  3  1      α^8    1 + α^17x + α^22x^2   2  1 + α^17x   2  α^8

The even-indexed discrepancies are zero due to the fact that for binary codes, the syndromes S_j have the property that

  (S_j)^2 = S_{2j}.   (6.27)

We call this condition the syndrome conjugacy condition. Equation (6.27) follows from (6.4) and "freshman exponentiation" ((a + b)^2 = a^2 + b^2 in characteristic 2). For the example we have been following,

  S_2 = α^3 = (α^{17})^2 = S_1^2,   S_4 = α^6 = (α^3)^2 = S_2^2.

Example 6.16 We now present an entire decoding process for the three-error correcting (15, 5) binary code generated by

  g(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^{10}.

Suppose the all-zero vector is transmitted and the received vector is

  r = (0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0).

Then r(x) = x + x^3 + x^8.

Step 1. Compute the syndromes. Evaluating r(x) at x = α, α^2, ..., α^6 we find the syndromes
  S_1 = α^{12}   S_2 = α^9   S_3 = α^3   S_4 = α^3   S_5 = 0   S_6 = α^6.

Step 2. Compute the error locator polynomial. A call to the binary Berlekamp-Massey algorithm yields the following computations:

  k  S_k    d_k   c(x)                            L  p(x)                  l  d_m
  1  α^12   α^12  1 + α^12x                       1  1                     2  α^12
  3  α^3    α^2   1 + α^12x + α^5x^2              2  1 + α^12x             2  α^2
  5  0      α^2   1 + α^12x + α^10x^2 + α^12x^3   3  1 + α^12x + α^5x^2    2  α^2

The error locator polynomial is thus

  Λ(x) = 1 + α^{12}x + α^{10}x^2 + α^{12}x^3.

Step 3. Find the roots of the error locator polynomial. Using the Chien search, we find roots at α^7, α^{12} and α^{14}. Inverting these, the error locators are

  X_1 = α^8,   X_2 = α^3,   X_3 = α,

indicating errors at positions 8, 3, and 1.

Step 4. Determine the error values: for a binary BCH code, all errors have value 1.

Step 5. Correct the errors: add the error values (1) at the error locations, to obtain the decoded vector of all zeros. □

6.5 Non-Binary BCH and RS Decoding

For nonbinary BCH or RS decoding, some additional work is necessary. Some extra care is needed to find the error locators; then the error values must be determined. From (6.3) we can write

  S_1 = e_{i_1}X_1 + e_{i_2}X_2 + ··· + e_{i_ν}X_ν
  S_2 = e_{i_1}X_1^2 + e_{i_2}X_2^2 + ··· + e_{i_ν}X_ν^2
  S_3 = e_{i_1}X_1^3 + e_{i_2}X_2^3 + ··· + e_{i_ν}X_ν^3
  ⋮
  S_{2t} = e_{i_1}X_1^{2t} + e_{i_2}X_2^{2t} + ··· + e_{i_ν}X_ν^{2t}.

Because of the e_{i_l} coefficients, these are not power-sum symmetric functions as was the case for binary codes. Nevertheless, in a similar manner it is possible to make use of an error locator polynomial.

Lemma 6.15 The syndromes and the coefficients of the error locator polynomial Λ(x) = Λ_0 + Λ_1x + ··· + Λ_νx^ν are related by

  Λ_νS_{j−ν} + Λ_{ν−1}S_{j−ν+1} + ··· + Λ_1S_{j−1} + S_j = 0.   (6.28)

Proof Evaluating the error locator polynomial Λ(x) = Π_{l=1}^{ν}(1 − X_lx) at an error locator reciprocal X_l^{−1},

  Λ(X_l^{−1}) = 0 = Λ_νX_l^{−ν} + Λ_{ν−1}X_l^{−ν+1} + ··· + Λ_1X_l^{−1} + Λ_0.

Multiplying this equation by e_{i_l}X_l^j we obtain

  e_{i_l}X_l^jΛ(X_l^{−1}) = e_{i_l}(Λ_νX_l^{j−ν} + Λ_{ν−1}X_l^{j−ν+1} + ··· + Λ_1X_l^{j−1} + Λ_0X_l^j) = 0.   (6.29)

Summing (6.29) over l we obtain

  0 = Σ_{l=1}^{ν} e_{i_l}(Λ_νX_l^{j−ν} + Λ_{ν−1}X_l^{j−ν+1} + ··· + Λ_1X_l^{j−1} + Λ_0X_l^j)
    = Λ_ν Σ_l e_{i_l}X_l^{j−ν} + Λ_{ν−1} Σ_l e_{i_l}X_l^{j−ν+1} + ··· + Λ_1 Σ_l e_{i_l}X_l^{j−1} + Λ_0 Σ_l e_{i_l}X_l^j.
In light of (6.3), the latter equation can be written as

  Λ_νS_{j−ν} + Λ_{ν−1}S_{j−ν+1} + ··· + Λ_1S_{j−1} + S_j = 0. □

Because (6.28) holds, the Berlekamp-Massey algorithm (in its nonbinary formulation) can be used to find the coefficients of the error locator polynomial, just as for binary codes.

6.5.1 Forney's Algorithm

Having found the error locator polynomial and its roots, there is still one more step for the nonbinary BCH or RS codes: we have to find the error values. Let us return to the syndromes,

  S_j = Σ_{l=1}^{ν} e_{i_l}X_l^j,   j = 1, 2, ..., 2t.

Knowing the error locators (obtained from the roots of the error locator polynomial), it is straightforward to set up and solve a set of linear equations:

  [ X_1      X_2      ···  X_ν      ] [ e_{i_1} ]   [ S_1    ]
  [ X_1^2    X_2^2    ···  X_ν^2    ] [ e_{i_2} ]   [ S_2    ]
  [ ⋮                               ] [ ⋮       ] = [ ⋮      ]   (6.30)
  [ X_1^{2t} X_2^{2t} ···  X_ν^{2t} ] [ e_{i_ν} ]   [ S_{2t} ]

However, there is a method which is computationally easier and which, in addition, provides us a key insight for another way of doing the decoding. It may be observed that the matrix in (6.30) is essentially a Vandermonde matrix. There exist fast algorithms for solving Vandermonde systems (see, e.g., [121]). One of these, which applies specifically to this problem, is known as Forney's algorithm. Before presenting the formula, a few necessary definitions must be established.

A syndrome polynomial is defined as

  S(x) = S_1 + S_2x + S_3x^2 + ··· + S_{2t}x^{2t−1} = Σ_{j=1}^{2t} S_jx^{j−1}.   (6.31)

Also, an error-evaluator polynomial Ω(x) is defined¹ by

  Ω(x) = S(x)Λ(x)  (mod x^{2t}).   (6.32)

This equation is called the key equation. Note that the effect of computing modulo x^{2t} is to discard all terms of degree 2t or higher.

¹Some authors define S(x) = S_1x + S_2x^2 + ··· + S_{2t}x^{2t}, in which case they define Ω(x) = (1 + S(x))Λ(x) (mod x^{2t+1}) and obtain e_{i_k} = −X_kΩ(X_k^{−1})/Λ′(X_k^{−1}).

Definition 6.5 Let f(x) = f_0 + f_1x + f_2x^2 + ··· + f_tx^t be a polynomial with coefficients in some field F.
The formal derivative f′(x) of f(x) is computed using the conventional rules of polynomial differentiation:

  f′(x) = f_1 + 2f_2x + 3f_3x^2 + ··· + tf_tx^{t−1},   (6.33)

where, as usual, mf_j for m ∈ ℤ and f_j ∈ F denotes repeated addition:

  mf_j = f_j + f_j + ··· + f_j   (m summands). □

There is no implication of any kind of limiting process in formal differentiation: it simply corresponds to formal manipulation of symbols. Based on this definition, it can be shown that many of the conventional rules of differentiation apply. For example, the product rule holds:

  [f(x)g(x)]′ = f′(x)g(x) + f(x)g′(x).

If f(x) ∈ F[x], where F is a field of characteristic 2, then f′(x) has no odd-powered terms.

Theorem 6.16 (Forney's algorithm) The error values for a Reed-Solomon code are computed by

  e_{i_k} = − Ω(X_k^{−1}) / Λ′(X_k^{−1}),   (6.34)

where Λ′(x) is the formal derivative of Λ(x).

Proof First note that over any ring,

  (1 − x)(1 + x + x^2 + ··· + x^{2t−1}) = 1 − x^{2t},

that is,

  (1 − x) Σ_{j=0}^{2t−1} x^j = 1 − x^{2t}.   (6.35)

Observe:

  Ω(x) = S(x)Λ(x)  (mod x^{2t})
       = (Σ_{j=1}^{2t} S_jx^{j−1}) Π_{i=1}^{ν}(1 − X_ix)  (mod x^{2t})
       = (Σ_{j=1}^{2t} Σ_{l=1}^{ν} e_{i_l}X_l^j x^{j−1}) Π_{i=1}^{ν}(1 − X_ix)  (mod x^{2t})
       = Σ_{l=1}^{ν} e_{i_l}X_l (Σ_{j=0}^{2t−1} (X_lx)^j) Π_{i=1}^{ν}(1 − X_ix)  (mod x^{2t})
       = Σ_{l=1}^{ν} e_{i_l}X_l [(1 − X_lx) Σ_{j=0}^{2t−1} (X_lx)^j] Π_{i≠l}(1 − X_ix)  (mod x^{2t}).

From (6.35),

  (1 − X_lx) Σ_{j=0}^{2t−1} (X_lx)^j = 1 − (X_lx)^{2t}.

Since (X_lx)^{2t} (mod x^{2t}) = 0, we have

  Ω(x) = S(x)Λ(x)  (mod x^{2t}) = Σ_{l=1}^{ν} e_{i_l}X_l Π_{i≠l}(1 − X_ix).

The trick now is to isolate a particular e_{i_l} on the right-hand side of this expression. Evaluate Ω(x) at x = X_k^{−1}:

  Ω(X_k^{−1}) = Σ_{l=1}^{ν} e_{i_l}X_l Π_{i≠l}(1 − X_iX_k^{−1}).

Every term in the sum results in a product that has a zero in it, except the term l = k, since that term is skipped. We thus obtain

  Ω(X_k^{−1}) = e_{i_k}X_k Π_{i≠k}(1 − X_iX_k^{−1}).

We can thus write

  e_{i_k} = Ω(X_k^{−1}) / (X_k Π_{i≠k}(1 − X_iX_k^{−1})).   (6.36)

Once Ω(x) is known, the error values can thus be computed. However, there are some computational simplifications. The formal derivative of Λ(x) is

  Λ′(x) = (d/dx) Π_{i=1}^{ν}(1 − X_ix) = − Σ_{k=1}^{ν} X_k Π_{i≠k}(1 − X_ix).

Then

  Λ′(X_k^{−1}) = −X_k Π_{i≠k}(1 − X_iX_k^{−1}).
Substitution of this result into (6.36) yields (6.34). □

Example 6.17 Working over GF(8) in a code where t = 2, suppose

  S(x) = α^6 + α^6x + x^2 + α^4x^3.

We find (say, using the Berlekamp-Massey algorithm and the Chien search) that the error locator polynomial is

  Λ(x) = 1 + α^3x + x^2 = (1 + α^2x)(1 + α^5x).

That is, the error locators (reciprocals of the roots of Λ(x)) are X_1 = α^2 and X_2 = α^5. We have

  Ω(x) = (α^6 + α^6x + x^2 + α^4x^3)(1 + α^3x + x^2)  (mod x^4) = α^6 + x

and

  Λ′(x) = α^3 + 2x = α^3.

Using the error locator X_1 = α^2 we find (recall that −1 = 1 in characteristic 2)

  e_{i_1} = Ω(α^{−2})/Λ′(α^{−2}) = (α^6 + α^5)/α^3 = α^5,

and for the error locator X_2 = α^5,

  e_{i_2} = Ω(α^{−5})/Λ′(α^{−5}) = (α^6 + α^2)/α^3 = α^4.

The error polynomial is e(x) = α^5x^2 + α^4x^5. □

Example 6.18 We consider the entire decoding process for the (15, 9) RS code of Example 6.8, using the message and code polynomials in Example 6.10. Suppose the received polynomial r(x) = c(x) + e(x) contains three symbol errors. The syndromes are S_j = r(α^j) = e(α^j), j = 1, 2, ..., 6, and the error locator polynomial determined by the Berlekamp-Massey algorithm is

  Λ(x) = 1 + α^3x + α^{11}x^2 + α^9x^3.

The details of the Berlekamp-Massey computations evolve as in the previous examples (Table 6.8). The roots of Λ(x) are at α, α^7 and α^{13}, so the error locators (the reciprocals of the roots) are

  X_1 = α^{14},   X_2 = α^8,   X_3 = α^2,

corresponding to errors at positions 14, 8, and 2. The error evaluator polynomial Ω(x) = S(x)Λ(x) (mod x^6) is computed next, and the error values at the three locations follow from Forney's formula, e_{i_k} = −Ω(X_k^{−1})/Λ′(X_k^{−1}).
Adding the resulting error polynomial e(x) to r(x) yields the decoded polynomial, which is the same as the original codeword c(x). □

6.6 Euclidean Algorithm for the Error Locator Polynomial

We have seen that the Berlekamp-Massey algorithm can be used to construct the error locator polynomial. In this section, we show that the Euclidean algorithm can also be used to construct error locator polynomials. This approach to decoding is often called the Sugiyama algorithm [324]. We return to the key equation:

  Ω(x) = S(x)Λ(x)  (mod x^{2t}).   (6.37)

Given only S(x) and t, we desire to determine the error locator polynomial Λ(x) and the error evaluator polynomial Ω(x). As stated, this problem seems hopelessly underconstrained. However, recall that (6.37) means that

  Θ(x)x^{2t} + Λ(x)S(x) = Ω(x)

for some polynomial Θ(x). (See (5.16).) Also recall that the extended Euclidean algorithm returns, for a pair of elements (a, b) from a Euclidean domain, a pair of elements (s, t) such that

  as + bt = c,

where c is the GCD of a and b. In our case, we run the extended Euclidean algorithm to obtain a sequence of polynomials Θ^[k](x), Λ^[k](x) and Ω^[k](x) satisfying

  Θ^[k](x)x^{2t} + Λ^[k](x)S(x) = Ω^[k](x).

This is exactly the circumstance described in Section 5.2.3. Recall that the stopping criterion there is based on the observation that the polynomial we are here calling Ω(x) must have degree
