
A parametric approach to list decoding of

Reed-Solomon codes using interpolation

Mortuza Ali and Margreta Kuijper∗ †

April 15, 2011

Abstract

In this paper we present a minimal list decoding algorithm for Reed-Solomon (RS) codes. Minimal list decoding for a code C refers to list decoding with radius L, where L is the minimum of the distances between the received word r and the codewords of C. We consider the problem of determining the value of L as well as the problem of determining all codewords at distance L. Our approach involves a parametrization of interpolating vectors in terms of the elements of a minimal Gröbner basis G. We present two efficient ways to compute G. We also show that so-called re-encoding can be used to further reduce the complexity. We then demonstrate how our parametric approach can be solved by a computationally feasible rational curve fitting solution from a recent paper by Wu. In addition, we present an algorithm to compute the minimum multiplicity, as well as the optimal values of the parameters associated with this multiplicity, which results in overall savings in both memory and computation.

1 Introduction

Reed-Solomon (RS) codes are important linear block codes that are of significant theoretical and practical interest. An (n,k) RS code C, defined over a finite field F, is a k-dimensional subspace of the n-dimensional space F^n. For a message polynomial m(x) = m0 + m1 x + ··· + m_{k−1} x^{k−1}, the encoding operation is to evaluate m(x) at x1, x2, ..., xn, where the xi's are n distinct elements of F. The rich algebraic properties and geometric structure of RS codes have led to the invention of a number of efficient decoding algorithms, such as the Sugiyama algorithm [26], the Berlekamp-Massey (BM) algorithm [4, 20], and the Welch-Berlekamp (WB) algorithm [28]. These classical decoding algorithms guarantee correct decoding as long as the number of errors is upper bounded by t = ⌊(d − 1)/2⌋, where d = n − k + 1 is the minimum distance of the code.
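To make the evaluation-map encoding concrete, here is a minimal sketch in Python; the field GF(7), the code parameters, and the message polynomial are illustrative choices (they happen to match Example 3.5 in Section 3).

```python
# Minimal sketch of the evaluation-map RS encoder over GF(7); the code
# parameters and the message polynomial match Example 3.5 later in the
# paper, but any n distinct points of F would do.
p = 7                      # field GF(7)
n, k = 7, 5                # (n, k) RS code parameters
xs = list(range(n))        # n distinct evaluation points x_1, ..., x_n

def encode(m, xs, p):
    """Evaluate the message polynomial m (coefficient list, lowest
    degree first) at every evaluation point."""
    return [sum(c * pow(x, i, p) for i, c in enumerate(m)) % p for x in xs]

m = [3, 1, 2]              # m(x) = 2x^2 + x + 3, degree < k
c = encode(m, xs, p)       # -> [3, 6, 6, 3, 4, 2, 4] = (3,-1,-1,3,-3,2,-3)
```

Decoding must invert this evaluation map in the presence of symbol errors.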

In classical decoding, the error correcting radius t = ⌊(d − 1)/2⌋ originates from the requirement of unique decoding, since for t > ⌊(d − 1)/2⌋ multiple codewords within distance t from the received word r may exist. One way to circumvent this limitation is to increase the decoding radius beyond ⌊(d − 1)/2⌋ and allow the decoder to output a list of codewords rather than one single codeword. However, such list decoding is only feasible if there are few codewords in the list. In [9] Guruswami showed that for a code of relative distance δ = d/n, any Hamming sphere of radius ≤ n(1 − √(1 − δ)) around a received word r contains only a polynomial number of codewords. Therefore, an (n,k) RS code with d = n − k + 1 can be list decoded up to the error correcting radius n − √(n(k − 1)), which Guruswami named the Johnson bound.

∗M. Ali and M. Kuijper are with the Department of Electrical and Electronic Engineering, University of Melbourne, VIC 3010, Australia; mortuzaa@unimelb.edu.au; mkuijper@unimelb.edu.au

†This work was supported by the Australian Research Council (ARC).

arXiv:1011.1040v3 [cs.IT] 14 Apr 2011

A list decoding algorithm was first discovered for low rate RS codes by Sudan [25] and later improved and extended to all rates by Guruswami and Sudan [10]. The Guruswami-Sudan algorithm can correct errors up to the Johnson bound n − √(n(k − 1)). Given a received word r, the essential idea of the algorithm is to find all polynomials m of degree less than k such that m(xi) ≠ ri for at most t values of i ∈ {1,2,...,n}. The Guruswami-Sudan algorithm finds these polynomials in two steps: the interpolation step and the factorization step. In the interpolation step, it computes a bivariate polynomial Q(x,r) that passes through all the points (x1,r1), (x2,r2), ..., (xn,rn) with a prescribed multiplicity s and satisfies a certain weighted degree constraint (see [10] for the definition of weighted degree). Then the bivariate polynomial Q(x,r) is factorized to find all factors of the form r − m(x), where m is a polynomial of degree less than k. Now a polynomial m is a valid message polynomial if it is of degree less than k and m(xi) ≠ ri for at most t values of i ∈ {1,2,...,n}. The construction of Q(x,r) with the prescribed multiplicity and weighted degree constraint ensures that, for all valid message polynomials m, r − m(x) appears as a factor of Q(x,r). Even though the algorithm may produce implausible polynomials, the total number of polynomials L in the list satisfies the bound L < (s + 1/2)√(n/(k − 1)), see [21].

The most computationally intensive operation in the Guruswami-Sudan algorithm is the construction of the bivariate polynomial Q(x,r). Computation of Q(x,r) involves solving a system of O(ns^2) homogeneous equations, which can be done via Gaussian elimination in time cubic in the number of equations [27]. Clearly the algorithmic complexity of the interpolation step is dominated by the multiplicity s. Recently Wu [29] transformed the interpolation problem into a 'rational interpolation problem' which involves a smaller multiplicity. Given the received word r, Wu's algorithm first computes the syndrome of r, followed by the computation of the error locator polynomial Λ and the error correction polynomial B using the Berlekamp-Massey algorithm. Wu demonstrated that all valid error locator polynomials can be expressed as a parametrization in terms of Λ and B. More specifically, given a list decoding radius t, Wu's algorithm aims at finding all polynomials λ and β such that Λ′ = λΛ + βB has at most t distinct roots. Wu showed that, similar to the Guruswami-Sudan approach, this problem can be reduced to a curve fitting problem, but with significantly smaller multiplicity.

It may be observed that the set of all Q(x,r) ∈ F[x,r] passing through the points (xi,ri), for i = 1,2,...,n, with multiplicity s is an ideal I_s. From this observation several authors, including Alekhnovich [2], Nielsen and Høholdt [23], Kuijper and Polderman [16], O'Keeffe and Fitzpatrick [24], and Lee and O'Sullivan [19], formulated the interpolation step of the list decoding algorithm as the problem of finding the minimal weight polynomial in the ideal I_s. Clearly the minimal weight polynomial appears as the minimal polynomial in a minimal Gröbner basis of I_s computed with respect to the corresponding weighted term order. Lee and O'Sullivan also showed that the minimal polynomial in the ideal I_s can be computed more efficiently from a minimal Gröbner basis of a submodule of F[x]^q for a sufficiently large q¹. Let F[x,r]_q = {f ∈ F[x,r] | r-deg(f) < q}. Then F[x,r]_q can be viewed as a free module over F[x] with free basis 1, r, ..., r^{q−1}. The essential observation of Lee and O'Sullivan is that the minimal polynomial of I_s can be constructed from the minimal Gröbner basis of a submodule of F[x]^q, along with the free basis 1, r, ..., r^{q−1}, for large enough q.

In this paper we employ the theory of minimal Gröbner bases to perform minimal list decoding. Given the received word r, let L denote the value of dH(r,C), where

dH(r,C) := min_{c∈C} dH(r,c).

Our main objective is to determine the value of L as well as all codewords c which are at distance L from the received word r. Clearly, if L is larger than the classical error correcting radius ⌊(d−1)/2⌋,

¹Here the integer q is not related to the size of the field.


the task is a list decoding operation. Our algorithm, unlike the Lee and O'Sullivan approach, starts with computing a minimal Gröbner basis G of a submodule of F[x]^2, rather than F[x]^q. We then demonstrate that all valid message polynomials can be extracted from a parametrization in terms of the elements of G. For computational feasibility, we show that this parametric approach, like Wu's algorithm, can be translated into a 'rational interpolation problem'. However, our approach has at least three features that distinguish it from Wu's algorithm. Firstly, our parametric formulation of the list decoding problem, without the detour of syndrome computation, is simpler than Wu's formulation. Secondly, while Wu's algorithm resorts to Forney's formula to compute the error values for each valid Λ′, our algorithm immediately leads to a valid message polynomial. Finally, we provide an algorithm to compute the minimum multiplicity, along with the optimal values of the associated parameters to be used in the rational interpolation step. Use of these optimal parameters in the rational interpolation step results in savings of both memory and computation as compared to Wu's algorithm.

The organization of the rest of the paper is as follows. In Section 2, we briefly review the relevant theory of Gröbner bases. In Section 3, we develop the theory and present the main algorithm, along with two ways to compute the minimal Gröbner basis. In this section we also explain how so-called re-encoding can be applied to the proposed approach. In Section 4, we translate the parametric approach into a 'rational interpolation problem' and present an efficient algorithm for the computation of the minimum value of the multiplicity and the other parameters to be used in the rational interpolation step. We demonstrate that the use of these optimal values of the parameters results in a lower memory requirement as well as a lower computational requirement as compared to Wu's approach. Finally, we conclude the paper in Section 5.

2 Preliminaries

The theory of Gröbner bases for modules in F[x]^q is generally recognized as a powerful conceptual and computational tool that plays a role similar to Euclidean division for modules in F[x]. More specifically, minimal Gröbner bases have proven to be an effective tool for various types of interpolation problems. In recent papers [18, 17] this effectiveness was ascribed to a powerful property of minimal Gröbner bases, explicitly identified as the 'Predictable Leading Monomial (PLM) property'. The proofs in this paper make use of this property. Before recalling the PLM property, let us first recall some terminology on Gröbner bases.

Let e1, ..., eq denote the unit vectors in F^q. The elements x^α e_i with i ∈ {1,...,q} and α ∈ N0 are called monomials. Let n1, ..., nq be nonnegative integers. In this paper we define the following two types of monomial orders:

• The (n1,···,nq)-weighted term over position (top) order, defined as

x^α e_i < x^β e_j :⇔ α + n_i < β + n_j or (α + n_i = β + n_j and i < j).

• The (n1,···,nq)-weighted position over term (pot) order, defined as

x^α e_i < x^β e_j :⇔ i < j or (i = j and α + n_i < β + n_j).

Clearly, whatever order is chosen, every nonzero element f ∈ F[x]^q can be written uniquely as

f = ∑_{i=1}^{L} c_i X_i,

where L ∈ N, the c_i's are nonzero elements of F for i = 1,...,L, and the polynomial vectors X1,...,XL are monomials, ordered as X1 > ··· > XL. Using the terminology of [1] we define

3

Page 4

• lm(f) := X1 as the leading monomial of f,

• lt(f) := c1 X1 as the leading term of f,

• lc(f) := c1 as the leading coefficient of f.

Writing X1 = x^{α1} e_{i1}, where α1 ∈ N0 and i1 ∈ {1,...,q}, we define

• lpos(f) := i1 as the leading position of f,

• wdeg(f) := α1 + n_{i1} as the weighted degree of f.

Note that for zero weights n1 = ··· = nq = 0 the above orders coincide with the reflected versions of the standard TOP order and POT order, respectively, as introduced in the textbook [1]. Also note that, unlike with TOP, the introduction of weights does not change the POT ordering of monomials. In this paper, the weighted POT order is needed only because we need the associated notion of 'weighted degree'.
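As a quick illustration of the two orders, the sketch below encodes a monomial x^α e_i as the pair (α, i) and compares monomials via sort keys; this pair representation is our own choice, made only for illustration.

```python
# Sketch of the two weighted monomial orders defined above. A monomial
# x^alpha * e_i is represented as the pair (alpha, i), with 1-based
# position i; weights is the tuple (n_1, ..., n_q).
def top_key(mono, weights):
    """(n_1,...,n_q)-weighted term-over-position order: compare the
    weighted degree alpha + n_i first, then the position."""
    alpha, i = mono
    return (alpha + weights[i - 1], i)

def pot_key(mono, weights):
    """(n_1,...,n_q)-weighted position-over-term order: compare the
    position first; weights never change this ordering."""
    alpha, i = mono
    return (i, alpha + weights[i - 1])

# Under the (0, 4)-weighted top order used later for a (7,5) code,
# x * e_2 (weighted degree 5) exceeds x^3 * e_1 (weighted degree 3):
assert top_key((1, 2), (0, 4)) > top_key((3, 1), (0, 4))
```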

We now recall some basic definitions and results on Gröbner bases, see [1]. Below we denote the submodule generated by a polynomial vector f by ⟨f⟩.

Definition 2.1 Let F be a subset of F[x]^q. Then the submodule L(F), defined as

L(F) := ⟨lt(f) | f ∈ F⟩,

is called the leading term submodule of F.

Definition 2.2 Let M ⊆ F[x]^q be a module and G ⊆ M. Then G is called a Gröbner basis of M if

L(G) = L(M).

In order to define a concept of minimality we have the following definition.

Definition 2.3 ([1, Def. 4.1.1]) Let 0 ≠ f ∈ F[x]^q and let F = {f1,...,fs} be a set of nonzero elements of F[x]^q. Let α_{j1},...,α_{jm} ∈ N0 and β_{j1},...,β_{jm} be nonzero elements of F, where 1 ≤ m ≤ s and 1 ≤ j_i ≤ s for i = 1,...,m, such that

1. lm(f) = x^{α_{ji}} lm(f_{ji}) for i = 1,...,m, and

2. lt(f) = β_{j1} x^{α_{j1}} lt(f_{j1}) + ··· + β_{jm} x^{α_{jm}} lt(f_{jm}).

Define

h := f − (β_{j1} x^{α_{j1}} f_{j1} + ··· + β_{jm} x^{α_{jm}} f_{jm}).

Then we say that f reduces to h modulo F in one step, written f →_F h. If f cannot be reduced modulo F, we say that f is minimal with respect to F.

Lemma 2.4 ([1, Lemma 4.1.3]) Let f, h and F be as in the above definition. If f →_F h then h = 0 or lm(h) < lm(f).

Definition 2.5 ([1]) A Gröbner basis G is called minimal if all its elements g are minimal with respect to G\{g}.

4

Page 5

It is well known [1, Exercise 4.1.9] that a minimal Gröbner basis exists for any module in F[x]^q and that all leading positions of its elements are different. In [18, 17] another important property of a minimal Gröbner basis is identified; the theorem below merely formulates a well known result.

Theorem 2.6 ([18]) Let M be a submodule of F[x]^q with minimal Gröbner basis G = {g1,...,gm}. Then for any 0 ≠ f ∈ M, written as

f = a1 g1 + ··· + am gm,   (1)

where a1,...,am ∈ F[x], we have

lm(f) = max_{1≤i≤m; ai≠0} (lm(ai) lm(gi)).   (2)

The property outlined in the above theorem is called the Predictable Leading Monomial (PLM) property, as in [18]. Note that this property involves not only degree information (as in the 'predictable degree property' first introduced in [5]) but also leading position information. Most importantly, the above theorem holds irrespective of which of the monomial orders top or pot is chosen; for a proof see [18].

Clearly, in the above theorem m = rank(M), and all minimal Gröbner bases of M must have rank(M) elements, no matter which monomial order is chosen. Furthermore, we have the following theorem.

Theorem 2.7 Let n1,...,nq be nonnegative integers and let M be a module in F[x]^q. Let G = {g1,...,gm} be a minimal Gröbner basis of M with respect to the (n1,···,nq)-weighted top order; denote ℓi := wdeg gi for i = 1,...,m. Let G̃ = {g̃1,...,g̃m} be a minimal Gröbner basis of M with respect to the (n1,···,nq)-weighted pot order; denote ℓ̃i := wdeg g̃i for i = 1,...,m. Then

∑_{i=1}^{m} ℓi = ∑_{i=1}^{m} ℓ̃i.   (3)

Proof We first prove the theorem for the case m = q. It follows easily from the fact that both G and G̃ are bases for M (in a linear algebraic sense) that there exists a unimodular polynomial matrix U ∈ F[x]^{q×q} such that

col{g1,...,gq} = U col{g̃1,...,g̃q}.

Without restriction we may assume that the leading positions within each Gröbner basis are strictly increasing. Clearly it follows from the above equation that also

V = U W,   (4)

where V = col{g1,...,gq} diag{x^{n1},···,x^{nq}} and W = col{g̃1,...,g̃q} diag{x^{n1},···,x^{nq}}. Since U is unimodular, we must have deg det V = deg det W. Clearly deg det V = ∑_{i=1}^{q} ℓi and deg det W = ∑_{i=1}^{q} ℓ̃i, from which (3) follows. Next, we prove the general case m ≤ q. For this, we note that it follows immediately from (4) that the maximum degree of all minors of V equals the maximum degree of all minors of W. On the other hand, the maximum degree of all minors of V clearly equals ∑_{i=1}^{m} ℓi, and similarly the maximum degree of all minors of W equals ∑_{i=1}^{m} ℓ̃i. The theorem now follows. □

We call the sum in (3) the (n1,···,nq)-weighted degree of M, denoted by wdeg(M). For zero weights n1 = ··· = nq = 0 the above result expresses that the sum of the degrees of a (reflected) TOP minimal Gröbner basis of a module M coincides with the sum of the degrees of a (reflected) POT minimal Gröbner basis of M. This result is merely a reformulation of the well known fact that the McMillan degree of a row reduced polynomial matrix equals the sum of its row degrees, see [6].

Corollary 2.8 Let M be a module in F[x]^q. Let G = {g1,...,gm} be a Gröbner basis of M whose (n1,···,nq)-weighted top degrees add up to wdeg(M). Then G is a minimal Gröbner basis of M with respect to the (n1,···,nq)-weighted top order.

Proof Suppose that G is not minimal. Then there exists g ∈ G that can be reduced modulo G\{g}. This implies that there exists a Gröbner basis of M whose sum of weighted degrees is strictly less than wdeg(M), which contradicts the above theorem. □

3 Minimal list decoding through division

Let us now consider an (n,k) RS code and a nonnegative integer t. The problem of 'list decoding up to t errors' is the following:

List Decoding Problem: Given a received word (r1,···,rn) ∈ F^n, find all polynomials m ∈ F[x] of degree < k such that

m(xi) = ri for at least n − t values of i ∈ {1,...,n}.

3.1 Main approach

We introduce the following two polynomials in F[x]:

Π(x) = ∏_{i=1}^{n} (x − xi),   (5)

and L as the Lagrange interpolating polynomial, i.e., the polynomial of least degree for which

L(xi) = ri for all i ∈ {1,...,n}.   (6)
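Both polynomials are directly computable; the sketch below does so over GF(7), with polynomials held as coefficient lists (lowest degree first). The helper names and the choice p = 7 are ours, made only for illustration.

```python
# Sketch: computing Pi of (5) and the Lagrange polynomial L of (6) over
# GF(7). Polynomials are coefficient lists, lowest degree first; the
# global prime p and the helper names are illustrative choices.
p = 7

def poly_mul(a, b):
    """Product of two coefficient lists over GF(p)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def pi_poly(xs):
    """Pi(x) = prod over i of (x - x_i)."""
    out = [1]
    for x in xs:
        out = poly_mul(out, [(-x) % p, 1])
    return out

def lagrange(xs, rs):
    """Least-degree polynomial with L(x_i) = r_i for all i."""
    out = [0] * len(xs)
    for i, (xi, ri) in enumerate(zip(xs, rs)):
        num, den = [1], 1
        for j, xj in enumerate(xs):
            if j != i:
                num = poly_mul(num, [(-xj) % p, 1])
                den = den * (xi - xj) % p
        coef = ri * pow(den, p - 2, p) % p     # division by den in GF(7)
        for d in range(len(num)):
            out[d] = (out[d] + coef * num[d]) % p
    return out
```

Run on the data of Example 3.5 below, these helpers reproduce Π(x) = x^7 − x and the L(x) stated there.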

Definition 3.1 Let r = (r1,···,rn) ∈ F^n. The interpolation module M(r) is the module in F[x]^2 spanned by the vectors g̃1 := [Π(x) 0] and g̃2 := [L(x) −1].

Note that {g̃1, g̃2} is a minimal pot Gröbner basis for M(r). The above defined interpolation module is crucial to our approach. With g̃2 we associate the bivariate polynomial Q2(x,y) = L(x) − y; clearly Q2(xi,ri) = 0 for all i ∈ {1,...,n}. Similarly, with g̃1 we associate the polynomial Q1(x,y) = Π(x); trivially Q1(xi,ri) = 0 for all i ∈ {1,...,n}. Now consider an arbitrary bivariate polynomial Q of the form Q(x,y) = N(x) − D(x)y for which Q(xi,ri) = 0 for all i ∈ {1,...,n}. It can be shown, see [16], that [N −D] ∈ M(r). Recall that list decoding up to t errors amounts to finding all polynomials m ∈ F[x] of degree < k such that

m(xi) = ri for all i ∈ {1,...,n} except i = j1,...,jL with L ≤ t.

In our context this amounts to looking for an interpolating bivariate polynomial Q of the form Q(x,y) = D(x)m(x) − D(x)y, where D(x) = ∏_{i=1}^{L} (x − x_{ji}). Note that then indeed Q(xi,ri) = 0 for all i ∈ {1,...,n}. Thus, to solve the above list decoding problem, we are looking for particular vectors [N −D] ∈ M(r) of weighted (0,k−1)-degree ≤ t + k − 1 that satisfy

1. N is a multiple of D, and

2. D has L distinct zeros in F, where L denotes deg D.

In this paper we are interested in finding the smallest value L = dH(r,C) for which list decoding is

possible as well as performing the associated list decoding. Thus we occupy ourselves with maximum

likelihood list decoding. We have the following theorem.

Theorem 3.2 Let r = (r1,···,rn) ∈ F^n be a received word and let M(r) be the corresponding interpolation module. Let f = [f^(1) f^(2)] ∈ F[x]^2 be a vector in M(r) of weighted (0,k−1)-degree L that satisfies the following three requirements:

1. lpos(f) = 2,

2. f^(1) is a multiple of f^(2), and

3. there is no vector in M(r) of weighted (0,k−1)-degree < L that satisfies requirements 1) and 2).

Then

m := −f^(1)/f^(2)

is a message polynomial corresponding to a minimal error pattern of L − k + 1 errors.

Proof From lpos(f) = 2 it follows immediately that deg m < k and deg f^(2) = L − k + 1. It remains to prove that f^(2) has L − k + 1 distinct zeros in F. Since f ∈ M(r), there exist polynomials α and β such that

f = [α β] [ Π   0
            L  −1 ].   (7)

Observe that α and β do not have a common factor, otherwise the weighted degree of f would not be minimal (requirement 3). From (7) it follows that αΠ − f^(2)L = f^(1) is a multiple of f^(2) by requirement 2. As a result, αΠ is a multiple of f^(2). Since α and β = −f^(2) have no common factor, it follows that Π must be a multiple of f^(2), i.e., f^(2) has L − k + 1 distinct zeros in F, which proves the theorem. □

Lemma 3.3 Let r = (r1,···,rn) ∈ F^n be a received word and let M(r) be the corresponding interpolation module. Let {g1,g2} be a (0,k−1)-weighted top minimal Gröbner basis for M(r) with lpos(g2) = 2. Denote ℓ1 := wdeg g1 and ℓ2 := wdeg g2. Let t be a nonnegative integer. Then a parametrization of all vectors f ∈ F[x]^2 with lpos(f) = 2 and wdeg f = t + k − 1 (with respect to the (0,k−1)-weighted top order) is given by

f = a g1 + b g2,

where a ∈ F[x] with deg a ≤ t + k − 1 − ℓ1 and b is a monic polynomial in F[x] of degree t + k − 1 − ℓ2. In particular, there exist no such vectors f for t < ℓ2 − k + 1.


Algorithm 1 Minimal list decoding of (n,k) RS code

Input: Received word r = (r1,...,rn).
Output: A list of polynomials m of degree < k such that dH(c,r) is minimal, where c = (m(x1),...,m(xn)).

1. Compute the polynomials Π and L given by (5) and (6); define the interpolation module M(r) := span{[Π 0], [L −1]}.

2. Compute a minimal Gröbner basis G = {g1,g2} of M(r) with respect to the (0,k−1)-weighted top monomial order, with lpos(g2) = 2. Denote ℓ1 := wdeg g1 and ℓ2 := wdeg g2; set j = 0.

3. Check requirement 2) of Theorem 3.2 for f = a g1 + b g2, for all a ∈ F[x] with deg a ≤ ℓ2 − ℓ1 + j and for all monic b ∈ F[x] with deg b = j; write f = [f^(1) f^(2)].

4. Whenever step 3) is successful, output all obtained quotient polynomials, i.e., polynomials m of the form m = −f^(1)/f^(2). In case step 3) is not successful, increase j by 1 and repeat step 3).

Proof According to Theorem 2.6, {g1,g2} has the PLM property with respect to the (0,k−1)-weighted top order. The parametrization now follows immediately from this property. □

Together, the above lemma and theorem give rise to the heuristic list decoding Algorithm 1. An important feature of the above algorithm is that we use ℓ2 = wdeg g2 to decide how many errors to decode. Indeed, it follows from the above lemma that it is not possible to perform list decoding for t < ℓ2 − k + 1. We now present the main theorem of this section.

Theorem 3.4 Let r = (r1,···,rn) ∈ F^n be a received word and let M(r) be the corresponding interpolation module. Let {g1,g2} be a (0,k−1)-weighted top minimal Gröbner basis for M(r) with lpos(g2) = 2. Write g2 = [g2^(1) g2^(2)]. Then Algorithm 1 yields a list of all message polynomials m such that

dH(c,r) is minimal, where c = (m(x1),...,m(xn)).   (8)

In particular, in case there exists an error pattern with only ≤ ⌊(n−k)/2⌋ errors, the list consists of only

m = −g2^(1)/g2^(2).   (9)

Proof Firstly, it follows immediately from Theorem 3.2 and Lemma 3.3 that any polynomial m that is output by Algorithm 1 has to have degree < k and satisfy (8). Vice versa, if m is a polynomial of degree < k that satisfies (8), then it follows from Lemma 3.3 that it must be in the output list of Algorithm 1. Finally, let us assume that there are only ≤ ⌊(n−k)/2⌋ errors. This implies that there exists a vector f = [f^(1) f^(2)] in M(r) with wdeg f ≤ ⌊(n−k)/2⌋ + k − 1 < (n + k − 1)/2 that satisfies the requirements of Theorem 3.2. Because of Lemma 3.3 it follows that ℓ2 < (n + k − 1)/2. Now, since ℓ1 + ℓ2 = n + k − 1 by Theorem 2.7, this implies that ℓ1 > ℓ2. As a result, a = 0 in step 3), so that step 4) immediately gives the unique solution for j = 0 as (9). □

Our next example illustrates the classical decoding scenario, showing that Algorithm 1 is an extension

of existing classical interpolation-based algorithms as in [19, 7].

Example 3.5 Consider the single-error correcting (7,5) RS code over GF(7). The message polynomial m(x) = 2x^2 + x + 3 is encoded as c = (m(0),m(1),...,m(6)) = (3,−1,−1,3,−3,2,−3). Let the received word be r = (3,2,−1,3,−3,2,−3). Thus an error occurred at locator position 1. The polynomials L and Π are computed as L(x) = −3x^6 − 3x^5 − 3x^4 − 3x^3 − x^2 − 2x + 3 and Π(x) = x^7 − x. Thus the module M(r) is spanned by the rows of the matrix

[ x^7 − x                                     0
  −3x^6 − 3x^5 − 3x^4 − 3x^3 − x^2 − 2x + 3  −1 ].

A minimal Gröbner basis {g1,g2} of M(r) with respect to the (0,4)-weighted top monomial order is computed as

col{g1,g2} = [ −3x^6 − 3x^5 − 3x^4 − 3x^3 − x^2 − 2x + 3  −1
               2x^3 − x^2 + 2x − 3                        −x + 1 ].

Thus, in the terminology of Theorem 3.4, we have g2^(1) = 2x^3 − x^2 + 2x − 3 and g2^(2) = −x + 1. Applying Algorithm 1, we determine that g2^(1) is a multiple of g2^(2), and we recover

m(x) = −g2^(1)/g2^(2) = 2x^2 + x + 3.
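The division in this example is easy to check mechanically; the following sketch performs the long division over GF(7), with polynomials as coefficient lists (lowest degree first; all names are ours).

```python
# Check of Example 3.5: dividing -g2^(1) by g2^(2) over GF(7) recovers
# the message polynomial. Coefficient lists are lowest degree first.
p = 7
g2_1 = [4, 2, 6, 2]   # 2x^3 - x^2 + 2x - 3  (mod 7)
g2_2 = [1, 6]         # -x + 1               (mod 7)

def divmod_poly(num, den, p):
    """Polynomial long division of coefficient lists over GF(p)."""
    q, r = [0] * len(num), num[:]
    inv = pow(den[-1], p - 2, p)           # inverse of leading coefficient
    while len(r) >= len(den) and any(r):
        s, c = len(r) - len(den), r[-1] * inv % p
        q[s] = c
        r = [(r[i] - c * den[i - s]) % p if i >= s else r[i]
             for i in range(len(r))]
        while len(r) > 1 and r[-1] == 0:
            r.pop()
    return q, r

q, r = divmod_poly([(-c) % p for c in g2_1], g2_2, p)
# r == [0] and q[:3] == [3, 1, 2], i.e. m(x) = 2x^2 + x + 3
```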

Let us now move on to an example of decoding beyond the classical error bound. Our approach is

particularly feasible for the case that b = 1 and a is restricted to a constant, as illustrated in the

next example. Note that the example is an instance of “one-step-ahead” list decoding [29].

Example 3.6 Consider the single-error correcting (7,4) RS code over GF(7); let the message polynomial be m(x) = 2x^2 + x + 3, which is encoded as c = (m(0),m(1),...,m(6)) = (3,−1,−1,3,−3,2,−3). Let the received word be r = (3,2,−1,3,2,2,−3), which differs from c at locations 1 and 4. The polynomials L and Π are computed as L(x) = −x^6 − 2x^5 + x^4 − x^3 + 2x + 3 and Π(x) = x^7 − x. The interpolation module M(r) is spanned by the rows of the matrix

M(r) = [ x^7 − x                           0
         −x^6 − 2x^5 + x^4 − x^3 + 2x + 3  −1 ].

A minimal Gröbner basis {g1,g2} of M(r) with respect to the (0,3)-weighted top monomial ordering is computed as

col{g1,g2} = [ x^5 − 2x^4 − x^3 − x^2 + x + 3  −3x − 1
               −2x^4 + 2x^3 + x^2 − 3x + 2     x^2 + 2x − 3 ].

Thus in this example ℓ1 = ℓ2 = 5, so that a is a constant. Applying Algorithm 1, we consider f = a g1 + g2 for a = 0,...,6. Writing f = [f^(1) f^(2)], we find that f^(2) divides f^(1) for a = 0, 2, and 4, giving a list of three message polynomials: besides m(x) = 2x^2 + x + 3 (recovered for a = 0), we also obtain the message polynomials 3x^3 − 2x^2 + 3x − 2 and −2x^3 − 2x^2 + 3x + 3.
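Step 3 of Algorithm 1 for this example is a small search: sweep the constant a over GF(7), form f = a·g1 + g2, and keep the values for which f^(2) divides f^(1). A brute-force sketch (coefficient lists, lowest degree first; helper names are ours):

```python
# Brute-force sweep of Algorithm 1, step 3 for Example 3.6 over GF(7).
p = 7
g1 = ([3, 1, 6, 6, 5, 1], [6, 4])     # [x^5-2x^4-x^3-x^2+x+3, -3x-1] mod 7
g2 = ([2, 4, 1, 2, 5], [4, 2, 1])     # [-2x^4+2x^3+x^2-3x+2, x^2+2x-3] mod 7

def comb(u, v, a):
    """Coefficients of a*u + v over GF(p), trailing zeros trimmed."""
    n = max(len(u), len(v))
    out = [(a * (u[i] if i < len(u) else 0)
            + (v[i] if i < len(v) else 0)) % p for i in range(n)]
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

def divmod_poly(num, den):
    """Polynomial long division over GF(p)."""
    q, r, inv = [0] * len(num), num[:], pow(den[-1], p - 2, p)
    while len(r) >= len(den) and any(r):
        s, c = len(r) - len(den), r[-1] * inv % p
        q[s] = c
        r = [(r[i] - c * den[i - s]) % p if i >= s else r[i]
             for i in range(len(r))]
        while len(r) > 1 and r[-1] == 0:
            r.pop()
    return q, r

messages = []
for a in range(p):
    f1, f2 = comb(g1[0], g2[0], a), comb(g1[1], g2[1], a)
    q, rem = divmod_poly(f1, f2)
    if rem == [0]:
        m = [(-c) % p for c in q]
        while len(m) > 1 and m[-1] == 0:
            m.pop()
        messages.append((a, m))
# exactly three divisions succeed, for a = 0, 2 and 4
```

The three quotients are exactly the three message polynomials listed in the example.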

3.2 Computation of g1 and g2

There are various ways in which the required minimal Gröbner basis {g1,g2} of the interpolation module M(r) can be computed. One obvious way is to simply run an existing computer algebra system such as Singular, specifying the required (0,k−1)-weighted top order. Because of the specific form of M(r), a more efficient way is to apply the Euclidean algorithm to the polynomials Π and L. More specifically, we have the following algorithm.


Algorithm 2 Computation of g1 and g2 via Euclidean algorithm

Input: Received word r = (r1,...,rn); polynomials Π and L given by (5) and (6).
Output: Polynomials g1 and g2 in F[x]^2, such that {g1,g2} is a minimal Gröbner basis of M(r) with respect to the (0,k−1)-weighted top monomial order, with lpos(g2) = 2.

1. Define polynomials h0, h1, t0 and t1 in F[x] as

[ h0  t0      [ Π   0
  h1  t1 ] :=   L  −1 ];

set j := 0.

2. Check

deg t_{j+1} + k − 1 ≥ deg h_{j+1};   (10)

if NO, go to Step 3. If YES, define g1 := [h_j t_j] and g2 := [h_{j+1} t_{j+1}], and STOP.

3. Apply the Euclidean algorithm to h_j and h_{j+1}, yielding h_j = q_{j+1} h_{j+1} + h_{j+2}, where deg h_{j+2} < deg h_{j+1}.

4. Write

[ h_{j+1}  t_{j+1}      [ 0  1           [ h_j      t_j
  h_{j+2}  t_{j+2} ] :=   1  −q_{j+1} ]    h_{j+1}  t_{j+1} ];

increase j by 1 and go back to Step 2.
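A sketch of Algorithm 2 in Python, run on the data of Example 3.5 over GF(7); the polynomial representation (coefficient lists, lowest degree first) and the helper names are our own.

```python
# Sketch of Algorithm 2 on the data of Example 3.5 over GF(7).
p, k = 7, 5
Pi = [0, 6, 0, 0, 0, 0, 0, 1]       # Pi(x) = x^7 - x over GF(7)
L  = [3, 5, 6, 4, 4, 4, 4]          # Lagrange polynomial of Example 3.5

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def deg(a):
    return len(trim(a[:])) - 1 if any(a) else -1

def divmod_poly(num, den):
    """Polynomial long division over GF(p)."""
    q, r, inv = [0] * len(num), num[:], pow(den[-1], p - 2, p)
    while len(r) >= len(den) and any(r):
        s, c = len(r) - len(den), r[-1] * inv % p
        q[s] = c
        r = trim([(r[i] - c * den[i - s]) % p if i >= s else r[i]
                  for i in range(len(r))])
    return trim(q), r

def sub_mul(a, q, b):
    """a - q*b over GF(p)."""
    out = a[:] + [0] * max(0, len(q) + len(b) - 1 - len(a))
    for i, qi in enumerate(q):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] - qi * bj) % p
    return trim(out)

# [h0 t0; h1 t1] := [Pi 0; L -1], then iterate until criterion (10) holds
h, hn, t, tn = Pi, L, [0], [p - 1]
while deg(tn) + k - 1 < deg(hn):    # i.e. while (10) fails
    qt, rem = divmod_poly(h, hn)
    h, hn, t, tn = hn, rem, tn, sub_mul(t, qt, tn)
g1, g2 = (h, t), (hn, tn)
```

Up to a common scalar factor, g2 equals the pair (2x^3 − x^2 + 2x − 3, −x + 1) found in Example 3.5; since the quotient −g2^(1)/g2^(2) is invariant under such scaling, it again yields m(x) = 2x^2 + x + 3.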

Theorem 3.7 Let r = (r1,···,rn) ∈ F^n be a received word and let M(r) be the corresponding interpolation module. Then Algorithm 2 yields a (0,k−1)-weighted top minimal Gröbner basis {g1,g2} for M(r) with lpos(g2) = 2.

Proof Firstly we note that the matrix

[ 0  1
  1  −q_{j+1} ]

is unimodular, i.e., has a polynomial inverse. It then follows that, at each step j, the rows of the matrix

[ h_j      t_j
  h_{j+1}  t_{j+1} ]   (11)

are a pot minimal Gröbner basis for M(r) whose (0,k−1)-weighted pot degrees add up to n + k − 1. By definition, with respect to the (0,k−1)-weighted top order both these row vectors have leading position 1, until the stopping condition (10) is met. At this point the second row vector has leading position 2 and the (0,k−1)-weighted top degrees add up to n + k − 1. It now follows from Corollary 2.8 that the rows of the matrix (11) must be a (0,k−1)-weighted top minimal Gröbner basis for M(r). □

Yet another alternative is to use an iterative method, interpolating the xi’s step by step for i =

1,...,n. This method has the advantage that the Lagrange polynomial L does not need to be

computed upfront.

Theorem 3.8 Let r = (r1,···,rn) ∈ F^n be a received word and let M(r) be the corresponding interpolation module. Then Algorithm 3 yields a (0,k−1)-weighted top minimal Gröbner basis {g1,g2} for M(r) with lpos(g2) = 2.


Algorithm 3 Computation of g1 and g2 via iterative algorithm

Input: Received word r = (r1,...,rn).
Output: Polynomials g1 and g2 in F[x]^2, such that {g1,g2} is a minimal Gröbner basis of M(r) with respect to the (0,k−1)-weighted top monomial order, with lpos(g2) = 2.

1. Initialize L0 := k − 1 and R0 := I ∈ F^{2×2}; denote

R_j := [ Q_j  −K_j
         N_j  −D_j ] ∈ F[x]^{2×2} for j = 0,...,n.

2. Process the received values r_j iteratively for j = 1 to n as follows. For j = 1 to n do

(a) compute Γ_j := Q_{j−1}(x_j) − r_j K_{j−1}(x_j) and ∆_j := N_{j−1}(x_j) − r_j D_{j−1}(x_j);

(b) define R_j := V_j R_{j−1}, where

• V_j := [ ∆_j  −Γ_j
           0    x − x_j ] and L_j := L_{j−1} + 1 if ∆_j ≠ 0 and (L_{j−1} < (j + k − 1)/2 or Γ_j = 0),

• V_j := [ x − x_j  0
           ∆_j      −Γ_j ] and L_j := L_{j−1} otherwise.

3. Define g1 := [Q_n −K_n] and g2 := [N_n −D_n].

Proof For j = 1,...,n, denote the interpolation module associated with r1,...,rj by M(r1,...,rj). We show that the rows of R_j are a Gröbner basis of M(r1,...,rj) of the required form for j = 1,...,n. We interpret L_j as the (0,k−1)-weighted top degree of the second row of R_j. Clearly this is true for j = 1. Let us now proceed by induction and assume that this is true for j − 1 ∈ {0,...,n−1}. By definition of V_j and the induction assumption, the rows of R_j are a Gröbner basis for M(r1,...,rj). Also, by construction, their (0,k−1)-weighted top degrees add up to 1 more than the (0,k−1)-weighted top degrees of R_{j−1}. Then, by induction, the (0,k−1)-weighted top degrees of R_j add up to j + k − 1 = wdeg(M(r1,...,rj)). It then follows from Corollary 2.8 that the rows of R_j are a (0,k−1)-weighted top minimal Gröbner basis for M(r1,...,rj). Finally, by construction and the induction hypothesis, it is easily seen that the second row of R_j has leading position 2. This proves the theorem. □

3.3 The special case r = (y1,...,y_{n−k},0,···,0)

In this subsection we pay special attention to the case where the received word r is of the form (y1,...,y_{n−k},0,···,0) ∈ F^n. This comes about when so-called 're-encoding' is used in advance of RS decoding, see e.g. [13, 12]. First we introduce the polynomial G ∈ F[x] of degree k − 1 as

G := ∏_{i=n−k+2}^{n} (x − x_i).   (12)

Clearly, the polynomials Π and L of the previous subsection can be written as

Π = Π_y G   (13)

and

L = L_y G,   (14)

where Π_y and L_y are in F[x]. The following lemma is straightforward.

Lemma 3.9 Let (y1,...,y_{n−k}) ∈ F^{n−k}, r = (y1,...,y_{n−k},0,···,0) ∈ F^n, and let Π, L, G, Π_y and L_y be defined as before. Let M(r) := span{[Π 0], [L −1]} as before and define M′(y) := span{[Π_y 0], [L_y −1]}. Then the following two statements are equivalent:

• {g1,g2} is a minimal Gröbner basis of M′(y) with respect to the unweighted top order, with lpos(g2) = 2,

• {g̃1,g̃2} is a minimal Gröbner basis of M(r) with respect to the (0,k−1)-weighted top order, with lpos(g̃2) = 2,

where gi = [gi^(1) gi^(2)] and g̃i = [gi^(1)G gi^(2)] for i = 1,2.

Because of the above lemma, it is now straightforward to modify Algorithm 1 into Algorithm 4.
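The factorizations (13) and (14) are easy to verify computationally: once the trailing received positions are zero, every linear factor of G is shared by Π and L. A sketch over GF(7) with an arbitrary (7,3) example (all parameter choices below are ours, not the paper's):

```python
# Sketch of the re-encoding factorizations (13)-(14): when the received
# word ends in zeros, G of (12) divides both Pi and the Lagrange
# polynomial L, so the decoder can work with the smaller polynomials
# Pi_y = Pi/G and L_y = L/G. Coefficient lists are lowest degree first.
p, n, k = 7, 7, 3
xs = list(range(n))
y = [2, 5, 1, 6]                    # y_1, ..., y_{n-k}, arbitrary choice
r = y + [0] * k                     # r = (y_1,...,y_{n-k},0,...,0)

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] = (out[i + j] + u * v) % p
    return out

def lagrange(xs, rs):
    out = [0] * len(xs)
    for i, (xi, ri) in enumerate(zip(xs, rs)):
        num, den = [1], 1
        for j, xj in enumerate(xs):
            if j != i:
                num = mul(num, [(-xj) % p, 1])
                den = den * (xi - xj) % p
        c = ri * pow(den, p - 2, p) % p
        out = [(o + c * u) % p for o, u in zip(out, num)]
    return out

def divmod_poly(num, den):
    q, rr, inv = [0] * len(num), num[:], pow(den[-1], p - 2, p)
    while len(rr) >= len(den) and any(rr):
        s, c = len(rr) - len(den), rr[-1] * inv % p
        q[s] = c
        rr = [(rr[i] - c * den[i - s]) % p if i >= s else rr[i]
              for i in range(len(rr))]
        while len(rr) > 1 and rr[-1] == 0:
            rr.pop()
    return q, rr

Pi = [1]
for x in xs:
    Pi = mul(Pi, [(-x) % p, 1])
G = [1]
for x in xs[n - k + 1:]:            # x_{n-k+2}, ..., x_n (deg G = k - 1)
    G = mul(G, [(-x) % p, 1])
L = lagrange(xs, r)

Pi_y, rem1 = divmod_poly(Pi, G)
L_y, rem2 = divmod_poly(L, G)
# rem1 == [0] and rem2 == [0]: G divides both Pi and L
```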

Algorithm 4 Minimal list decoding of (n,k) RS code for re-encoded received word

Input: Received word y = (y1,...,y_{n−k}) in F^{n−k}.
Output: A list of polynomials m of degree < k such that dH(c,r) is minimal, where c = (m(x1),...,m(xn)) and r = (y1,...,y_{n−k},0,...,0) in F^n.

1. Compute the polynomials Π_y and L_y given by (13) and (14); define the interpolation module M′(y) := span{[Π_y 0], [L_y −1]}.

2. Compute a minimal Gröbner basis G = {g1,g2} of M′(y) with respect to the unweighted top monomial order, with lpos(g2) = 2; set j = 0.

3. Compute f = a g1 + b g2, for all a ∈ F[x] with deg a ≤ ℓ2 − ℓ1 + j and for all monic b ∈ F[x] with deg b = j; write f = [f^(1) f^(2)]. Check whether f^(1)G is a multiple of f^(2), where G is given by (12).

4. Whenever step 3) is successful, output all obtained quotient polynomials, i.e., polynomials m of the form m = −f^(1)G/f^(2). In case step 3) is not successful, increase j by 1 and repeat step 3).

Again the Euclidean algorithm can be used to compute g1 and g2; for this, Algorithm 2 should be initialized by Πy and Ly instead of Π and L, and the stopping criterion (10) should be replaced by

deg t_{j+1} ≥ deg h_{j+1}.

An alternative way to compute g1 and g2 is to employ an algorithm that processes the values of y1,...,yn−k iteratively. For this, Algorithm 3 is modified into Algorithm 5, which essentially coincides with the well-known Welch-Berlekamp algorithm [28]; see also [14, 15].

4 Minimal list decoding through rational interpolation

The most computationally intensive task in Algorithm 1 is Step 3. Recall that in Step 3, we need to determine all a and b of degree k1 ≤ ℓ2 − ℓ1 + j and k2 = j such that f^(1) is a multiple of f^(2). A brute force approach may be to consider

f = [f^(1)  f^(2)] = a [g1^(1)  g1^(2)] + b [g2^(1)  g2^(2)]

Algorithm 5 Computation of g1 and g2 via iterative algorithm for re-encoded received word

Input: Received word y = (y1,...,yn−k) in F^(n−k).

Output: Polynomials g1 and g2 in F[x]^2, such that {g1,g2} is a minimal Gröbner basis of M(y) with respect to the unweighted top monomial order, with lpos(g2) = 2.

1. Denote Rj := [Qj  −Kj ; Nj  −Dj] (a 2 × 2 matrix written row by row) for j = 0,...,n; initialize L0 := 0 and

R0 := [x − x_{n−k+1}  0 ; 0  1] ∈ F[x]^(2×2).

2. Process the received values yj iteratively for j = 1 to n − k as follows. For j = 1 to n − k do

(a) compute Γj := Q_{j−1}(xj) − yj K_{j−1}(xj) and Δj := N_{j−1}(xj) − yj D_{j−1}(xj);

(b) define Rj := Vj R_{j−1}, where

• Vj := [Δj  −Γj ; 0  x − xj] and Lj := L_{j−1} + 1 if Δj ≠ 0 and (L_{j−1} < j/2 or Γj = 0),

• Vj := [x − xj  0 ; Δj  −Γj] and Lj := L_{j−1} otherwise.

3. Define g1 := [Q_{n−k}  −K_{n−k}] and g2 := [N_{n−k}  −D_{n−k}].

and check for all polynomials a and b of bounded degree k1 and k2, respectively, whether f^(2) divides f^(1). Clearly this approach is feasible only when both k1 and k2 are small. For large values of k1 and k2, the computational complexity becomes prohibitively high, especially when the code is defined over a large field. Fortunately, Step 3 can be formulated as an algebraic curve fitting problem for which efficient polynomial time algorithms exist. We explain this approach in the following.

It follows from Theorem 3.2 that, in the context of Algorithm 1, f^(1) is a multiple of f^(2) if and only if f^(2) has t = ℓ2 − k + 1 + j distinct roots. Therefore, an alternative approach to Step 3 is to determine all a and b of degree k1 ≤ t + k − ℓ1 − 1 and k2 = t + k − ℓ2 − 1, respectively, such that

f^(2)(x) = a(x) g1^(2)(x) + b(x) g2^(2)(x) (15)

has t distinct roots. Now dividing both sides of (15) by g1^(2)(x) we get

f^(2)(x) / g1^(2)(x) = a(x) + b(x) g2^(2)(x) / g1^(2)(x). (16)

Now let us define

zi := −g2^(2)(xi) / g1^(2)(xi), for i = 1,...,n.

Then Step 3 of Algorithm 1 can be formulated as the following rational interpolation problem.

Rational Interpolation Problem: Given n points (x1,z1), (x2,z2), ..., (xn,zn) and a non-negative integer t, determine all rational functions of the form z = a/b, with polynomials a and b of degree k1 and k2, respectively, such that z passes through t of the n points (x1,z1), (x2,z2), ..., (xn,zn).
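For intuition, the brute force variant of this search can be sketched in a few lines of Python. The prime field GF(7), the sample points, and the helper names below are illustrative choices, not part of the paper; the search is exponential in k1 and k2, which is exactly why the curve fitting formulation developed next matters.

```python
from itertools import product

def poly_eval(coeffs, x, p):
    """Evaluate a polynomial (coefficients listed low order first) at x over GF(p)."""
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % p
    return result

def rational_fits(points, t, k1, k2, p):
    """Brute-force search for z = a/b with deg a <= k1 and monic b of degree k2
    passing through at least t of the given points over the prime field GF(p).
    A point (x, z) is taken to lie on a/b when a(x) = z * b(x) (mod p)."""
    solutions = []
    for a in product(range(p), repeat=k1 + 1):
        for b_low in product(range(p), repeat=k2):
            b = b_low + (1,)  # force the denominator to be monic
            hits = sum(1 for (x, z) in points
                       if poly_eval(a, x, p) == (z * poly_eval(b, x, p)) % p)
            if hits >= t:
                solutions.append((a, b))
    return solutions
```

For instance, with the five points generated by z = (x + 1)/(x + 2) over GF(7), the call with t = 5 and k1 = k2 = 1 recovers a = 1 + x and b = 2 + x.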


This problem looks similar to the interpolation problem addressed by Guruswami and Sudan in [10]. However, it is complicated by the fact that now we look for a rational solution rather than a polynomial solution. Recently, this rational interpolation problem has been addressed by Wu in [29]. For the sake of completeness we briefly describe Wu's formulation here.

4.1 Wu's rational interpolation algorithm

In line with the Guruswami-Sudan approach, Wu's algorithm first computes a bivariate polynomial Q(x,z), satisfying certain constraints, that passes through all the n points (x1,z1), (x2,z2), ..., (xn,zn). Then the desired rational solutions z = a/b are obtained from the factorization of Q(x,z). Given the values of t, k1, and k2, let us determine the constraints that must be satisfied for the existence of such a Q(x,z).

Let us define the (1,w)-weighted degree of a bivariate polynomial Q(x,z) = Σ_{(i,j)∈I} ai,j x^i z^j as

wdeg1,w Q(x,z) = max_{(i,j)∈I} {i + jw}. (17)
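The weighted-degree bookkeeping of (17) reduces to a one-liner; the helper below (our illustration, with a made-up monomial support set) also covers the z-degree wdeg0,1 as a special case.

```python
def wdeg(monomials, wx, wz):
    """(wx, wz)-weighted degree: max of i*wx + j*wz over exponent pairs (i, j).
    wdeg(Q, 1, w) is definition (17); wdeg(Q, 0, 1) is the z-degree of Q."""
    return max(i * wx + j * wz for (i, j) in monomials)
```

For a Q(x,z) with support {(3,0), (1,2), (0,3)} and w = 2, this gives wdeg1,w Q = 6 while the z-degree is 3.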

Let w := k1 − k2, ρ := wdeg1,w Q(x,z), and M := wdeg0,1 Q(x,z). Clearly wdeg0,1 Q(x,z) is the z-degree of Q(x,z). Now if z = a/b passes through t points with multiplicity s then the polynomial b(x)^M Q(x, a(x)/b(x)) must have at least ts roots, counted with multiplicity. On the other hand, b(x)z − a(x) will be a factor of Q(x,z) if b(x)^M Q(x, a(x)/b(x)) is identically zero. In turn, b(x)^M Q(x, a(x)/b(x)) will be identically zero if it has more roots than its degree. Now the degree of b(x)^M Q(x, a(x)/b(x)) is at most ρ + Mk2. Therefore, a necessary condition that must be satisfied is

ρ + Mk2 < ts. (18)

On the other hand, a necessary condition for the existence of Q(x,z) passing through the n points with multiplicity s is that its (u,v)-th Hasse derivatives at all the n points are zero for all u + v < s. Thus the requirement that (xi,zi) be a zero of Q(x,z) with multiplicity s, for all i = 1,2,...,n, leads to N constraints in the form of N homogeneous equations, where

N = ns(s + 1)/2 (19)

and the unknown variables are the coefficients of Q(x,z). A nonzero solution to the system of homogeneous equations is guaranteed to exist if the number of equations is less than the number of unknowns. Now the number of coefficients in Q(x,z) with wdeg1,w Q(x,z) = ρ and wdeg0,1 Q(x,z) = M is

U = (ρ + 1)(M + 1) − (w/2) M(M + 1). (20)

Therefore, a sufficient condition for the existence of a Q(x,z), passing through all the n points with multiplicity s, is

(ρ + 1)(M + 1) − (w/2) M(M + 1) > ns(s + 1)/2. (21)
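Conditions (18) and (21) are cheap to check numerically. The sketch below (the function name is ours) encodes them directly; the sample values used to exercise it are taken from the (127,24) code instance treated in Example 4.1.

```python
def feasible(n, s, t, k1, k2, M, rho):
    """Check the necessary condition (18) and the sufficient condition (21)
    for an interpolating polynomial Q(x, z) with z-degree M and
    (1, w)-weighted degree rho, where w = k1 - k2."""
    w = k1 - k2
    N = n * s * (s + 1) // 2                          # number of constraints, (19)
    U = (rho + 1) * (M + 1) - w * M * (M + 1) // 2    # number of coefficients, (20)
    return rho + M * k2 < t * s and U > N
```

With n = 127, s = 2, t = 64, k1 = 15, k2 = 9, the pair (M, ρ) = (4, 88) is feasible (U = 385 > N = 381 and 88 + 36 < 128), whereas (4, 70) is not.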

Wu, in [29], has proposed suitable choices for the values of s, M, and ρ satisfying (18) and (21) as

s = ⌊ t(n − k + 1 − t) / (t² − n(2t − (n − k + 1))) ⌋, (22)

M = ⌊ st / (2t − (n − k + 1)) ⌋, (23)

ρ = ts − Mk2 − 1. (24)


For more details on Wu's algorithm see [22]. It is worth noting that the multiplicity s computed using (22) is not minimal. Although Wu suggested first computing s according to (22) and then greedily minimizing it subject to a certain constraint, he did not give an explicit algorithm to compute the minimal value of s. More importantly, given the minimum s, the values of M and ρ computed in (23) and (24) are not necessarily optimal. In the next subsection, we present an algorithm that computes the minimum value of s as well as the associated optimal values of M and ρ.

4.2 Optimizing the integer parameters

Given feasible values of s, M, and ρ, rational interpolation involves two steps: (1) construction of Q(x,z) and (2) factorization of Q(x,z). The best known algorithm for the construction of the interpolating polynomial Q(x,z) is the Kötter algorithm [11]. The Kötter algorithm has a complexity of O(MN²) [12], where N is the number of constraints as defined in (19). More precisely, it has memory complexity O(MU) and time complexity O(NMU) [8], where U is the number of coefficients in Q(x,z) as defined in (20) and M is the z-degree of the interpolating polynomial Q(x,z). On the other hand, the rational factorization step can be done in time O(n^(3/2) s^(7/2)) using Wu's rational factorization procedure [29]. As analyzed in sub-section 4.4, it is the Kötter algorithm that dominates the overall memory and computational complexity of the proposed, as well as Wu's, list decoding procedure. Therefore, to reduce the complexity of the Kötter algorithm, we take the following two-step strategy. In the first step, we derive an explicit method to determine the minimum value of s for which there exist some M and ρ satisfying (18) and (21). Once the minimum multiplicity is determined, N becomes fixed. Then in the second step, we compute the optimal values of M and ρ such that MU is minimized.

The constraint (18) can be geometrically interpreted as follows. Assume that t and s are fixed. With the requirement that all the values involved in (18) are non-negative integers, all feasible values of ρ and M must be on or below the line L defined by the equation

ρ + Mk2 = ts − 1. (25)

On the other hand, the constraint (21) requires that all feasible values of ρ and M are above the curve C defined by the equation

(ρ + 1)(M + 1) − (w/2) M(M + 1) = ns(s + 1)/2. (26)

Therefore, a necessary condition for the existence of a feasible solution satisfying both the constraints (18) and (21) is that L intersects C at two different points (M1,ρ1) and (M2,ρ2) on the real plane. Now solving (25) and (26) for M we get

M = [ (ts − k0) ± √((ts − k0)² − 4(N − ts)k0) ] / (2k0), (27)

where k0 = (k1 + k2)/2. According to Algorithm 1, while correcting t = ℓ2 − k + 1 + j errors, we have k1 = ℓ2 − ℓ1 + j and k2 = j. Using ℓ1 + ℓ2 = n + k − 1, we get k0 = t − t0 where t0 = d/2. Substituting k0 = t − t0 in (27) we get

M = [ (ts − t + t0) ± √((ts − t + t0)² − 4(N − ts)(t − t0)) ] / (2(t − t0)). (28)

It follows from (28) that the value of M, and thus the choice of s, is independent of k1 and k2. Now for a fixed s, it can easily be verified whether L and C intersect at two different points on the real plane by checking whether

(ts − t + t0)² > 4(N − ts)(t − t0). (29)
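The elimination behind (27), spelled out: substituting ρ = ts − 1 − Mk2 from (25) into (26) and writing k0 := (k1 + k2)/2,

```latex
(ts - Mk_2)(M+1) - \tfrac{w}{2}M(M+1) = \tfrac{ns(s+1)}{2} = N
\quad\Longleftrightarrow\quad
(M+1)\bigl(ts - Mk_0\bigr) = N
\quad\Longleftrightarrow\quad
k_0 M^2 - (ts - k_0)M + (N - ts) = 0,
```

since Mk2 + (w/2)M = M(k1 + k2)/2 = Mk0; the quadratic formula then yields (27).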


[Figure 1 near here: two panels, (a) and (b), plotting the weighted degree ρ against the z-degree M, showing the line L: ρ + Mk2 = ts − 1 and the curve C: (ρ + 1)(M + 1) − (w/2)M(M + 1) = ns(s + 1)/2.]

Figure 1: Consider correcting t = 7 errors in the decoding of a (15,5) RS code over GF(16) when k1 = 2 and k2 = 1; (a) with s = 1, as the line L does not intersect the curve C, no feasible values for M and ρ exist; (b) with s = 7, the line L intersects the curve C at two different points (M1 = 14, ρ1) and (M2 = 17.67, ρ2). Thus M* = 15, 16, 17 are feasible choices for M. The minimum value of ρ* corresponding to M* = 15 can be computed as ρ* = 33, since the line M = 15 intersects L and C at (15, ρh = 33) and (15, ρl = 32.81), respectively.

According to (29) any feasible s must satisfy the following inequality, which was also derived in Wu [29]:

s²(t² − 2n(t − t0)) − 2s(n − t)(t − t0) + (t − t0)² > 0. (30)

This in turn implies that

s > (t − t0)(n − t + √(n(n − d))) / (t² − 2n(t − t0)). (31)

From (31) it also follows that a feasible value of s will exist only if

t² − 2n(t − t0) > 0, (32)

which also leads to the same bound on the list decoding radius as derived in [10]:

t < n − √(n(n − d)). (33)

Also from (31) we get the lower bound on s as

sl = ⌊ (t − t0)(n − t + √(n(n − d))) / (t² − 2n(t − t0)) ⌋ + 1. (34)

Moreover, an upper bound on s was derived in [29] as

su = ⌊ t(2t0 − t) / (t² − 2n(t − t0)) ⌋ + 1. (35)
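Bounds (34) and (35) are easy to evaluate; the following sketch (our helper, not from the paper) reproduces the setting of Figure 1, where (n,k) = (15,5) and t = 7 give sl = 6 and su = 8, so the multiplicity s = 7 used in panel (b) of the figure lies in the feasible range.

```python
import math

def multiplicity_bounds(n, k, t):
    """Lower bound (34) and upper bound (35) on the multiplicity s,
    assuming the radius condition (32), t^2 - 2n(t - t0) > 0, holds."""
    d = n - k + 1
    t0 = d / 2
    denom = t * t - 2 * n * (t - t0)
    sl = math.floor((t - t0) * (n - t + math.sqrt(n * (n - d))) / denom) + 1
    su = math.floor(t * (2 * t0 - t) / denom) + 1
    return sl, su
```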

Thus any s such that sl ≤ s ≤ su will satisfy the condition (29). Now assume that for a particular s, the condition (29) is satisfied, i.e., L and C intersect at two different points (M1,ρ1) and (M2,ρ2)


on the real plane. Without any loss of generality let us assume that M1 < M2. When L and C intersect at two different points on the real plane, there will exist a feasible solution if there is an integer M* such that M1 < M* < M2, i.e., if

⌊M1⌋ + 1 < M2. (36)

Clearly if (36) is satisfied, then any M* ∈ [⌊M1⌋ + 1, ⌈M2⌉ − 1] is a feasible choice of M. Now according to (20), for a feasible choice of M = M*, it is desirable to find the minimum value of ρ so that U is minimized. Let the line M = M* intersect C and L at points (M*, ρl) and (M*, ρh) respectively. Since L intersects C from above, it must be the case that ρl < ρh. Although ρh is a feasible choice for ρ, as used by Wu, we choose the minimum possible value as

ρ* = ⌊ρl⌋ + 1. (37)

We illustrate the method of computing the feasible values of the integer parameters, using a particular

example, in Fig. 1.

Now to find the optimal values of M and ρ such that MU is minimized, we need to compute ρ* and U* for all M* ∈ [⌊M1⌋ + 1, ⌈M2⌉ − 1] and choose the M* and ρ* that result in the minimum value of MU. We summarize the above procedure in Algorithm 6, which computes the minimum multiplicity smin and the associated optimal z-degree Mopt and weighted degree ρopt.

Algorithm 6 Compute optimal values of the integer parameters

Input: n, k, t, k1, and k2.

Output: Minimum multiplicity smin and optimal z-degree Mopt and weighted degree ρopt.

Compute w := k1 − k2, d := n − k + 1, t0 := d/2.
Initialize s := max(sl, 1), where sl = ⌊(t − t0)(n − t + √(n(n − d)))/(t² − 2n(t − t0))⌋ + 1.
Initialize Mopt := ∞, ρopt := ∞, Uopt := ∞.
while no feasible solution is found do
  Compute N := ns(s + 1)/2.
  if (ts − t + t0)² > 4(N − ts)(t − t0) then
    (M2, M1) := ((ts − t + t0) ± √((ts − t + t0)² − 4(N − ts)(t − t0)))/(2(t − t0))
    if ⌊M1⌋ + 1 < M2 then
      smin := s
      for M = ⌊M1⌋ + 1 to ⌈M2⌉ − 1 do
        ρ := ⌊N/(M + 1) + (w/2)M − 1⌋ + 1
        U := (ρ + 1)(M + 1) − (w/2)M(M + 1)
        if MU < Mopt Uopt then
          Mopt := M, ρopt := ρ, Uopt := U
        end if
      end for
      return smin, Mopt, ρopt
    end if
  end if
  s := s + 1
end while
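Algorithm 6 transcribes almost line by line into Python. The sketch below follows (25)-(37); it uses floating-point square roots where the paper reasons over the reals, so an exact-arithmetic feasibility check would be advisable in a production implementation.

```python
import math

def optimal_parameters(n, k, t, k1, k2, s_max=None):
    """Algorithm 6: minimum multiplicity s_min together with the z-degree M_opt
    and weighted degree rho_opt minimizing M*U, following (25)-(37).
    Returns (s_min, M_opt, rho_opt, U_opt)."""
    w = k1 - k2
    d = n - k + 1
    t0 = d / 2
    denom = t * t - 2 * n * (t - t0)
    if denom <= 0:
        raise ValueError("radius condition (32) violated: no feasible s")
    # lower bound (34) on the multiplicity
    s = max(math.floor((t - t0) * (n - t + math.sqrt(n * (n - d))) / denom) + 1, 1)
    while s_max is None or s <= s_max:
        N = n * s * (s + 1) // 2                      # constraints, (19)
        disc = (t * s - t + t0) ** 2 - 4 * (N - t * s) * (t - t0)
        if disc > 0:                                  # condition (29)
            root = math.sqrt(disc)
            M1 = (t * s - t + t0 - root) / (2 * (t - t0))   # roots from (28)
            M2 = (t * s - t + t0 + root) / (2 * (t - t0))
            if math.floor(M1) + 1 < M2:               # condition (36)
                best = None                           # (M*U, M, rho, U)
                for M in range(math.floor(M1) + 1, math.ceil(M2)):
                    rho = math.floor(N / (M + 1) + w * M / 2 - 1) + 1   # (37)
                    U = (rho + 1) * (M + 1) - w * M * (M + 1) // 2      # (20)
                    if best is None or M * U < best[0]:
                        best = (M * U, M, rho, U)
                return s, best[1], best[2], best[3]
        s += 1
    raise ValueError("no feasible s found up to s_max")
```

On the (127,24) instance of Example 4.1 with t = 64, k1 = 15, k2 = 9, this returns smin = 2, Mopt = 4, ρopt = 88 and Uopt = 385.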

Complexity of Algorithm 6: The complexity of the algorithm is dominated by the while loop and the for loop. The number of times the while loop is executed is bounded by smin. The for loop executes

O( √((ts − t + t0)² − 4(N − ts)(t − t0)) / (t − t0) ) = O(ts) (38)

times. Moreover, the maximum list decoding radius is t = ⌈n − √(n(n − d))⌉ − 1 = O(n). Thus Algorithm 6 computes the integer parameter values in time O(ns²).

4.3 Computation of the message polynomial

After constructing the bivariate polynomial, the solutions to the rational interpolation problem can be obtained by the rational factorization procedure of [29]. Clearly every solution (a,b) to the rational interpolation problem gives a valid error locator polynomial f^(2) = a g1^(2) + b g2^(2). Given a valid error locator polynomial f^(2), Wu's algorithm uses Forney's formula to compute the error magnitudes and hence the codeword. However, in our approach, the message polynomial can be computed in a simpler way: for every solution (a,b), it can be computed as

m(x) = −(a g1^(1) + b g2^(1)) / (a g1^(2) + b g2^(2)).
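The division above is exact whenever (a,b) actually solves the rational interpolation problem, so m(x) falls out of a single polynomial division over F. A minimal sketch over the prime field GF(7), with made-up polynomials f1 and f2 standing in for a g1^(1) + b g2^(1) and a g1^(2) + b g2^(2):

```python
def poly_divmod(num, den, p):
    """Quotient and remainder of num / den over GF(p); coefficients low order first."""
    num = list(num)
    inv_lead = pow(den[-1], p - 2, p)  # inverse of leading coefficient (Fermat)
    quot = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(quot))):
        q = (num[i + len(den) - 1] * inv_lead) % p
        quot[i] = q
        for j, c in enumerate(den):
            num[i + j] = (num[i + j] - q * c) % p
    return quot, num[:len(den) - 1]

def message_polynomial(f1, f2, p):
    """Compute m = -f1 / f2 over GF(p). The division must be exact, which is
    guaranteed when (a, b) solves the rational interpolation problem."""
    quot, rem = poly_divmod(f1, f2, p)
    assert all(c == 0 for c in rem), "f2 does not divide f1"
    return [(-c) % p for c in quot]
```

For example, with f2 = 1 + x and f1 = −(3 + 2x)(1 + x) = 4 + 2x + 5x² over GF(7), the recovered message is m = 3 + 2x.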

4.4 Complexity

We summarize the complete minimal list decoding algorithm in Algorithm 7.

Algorithm 7 Minimal list decoding of (n,k) RS code using rational interpolation

Input: Received word r = (r1,...,rn).

Output: A list of polynomials m of degree < k such that dH(c,r) is minimal, where c = (m(x1),...,m(xn)).

1. Compute a minimal Gröbner basis G = {g1,g2} of M(r) with respect to the (0, k−1)-weighted top monomial order, with lpos(g2) = 2, using Algorithm 3 (or using Algorithm 5 if re-encoding is used). Denote ℓ1 := wdeg g1 and ℓ2 := wdeg g2; set j = 0.

2. With t := ℓ2 − k + 1 + j, k1 := ℓ2 − ℓ1 + j, and k2 := j compute smin, Mopt, and ρopt using Algorithm 6.

3. Construct Q(x,z) with wdeg0,1 Q(x,z) = Mopt and wdeg1,w Q(x,z) = ρopt passing through (xi,zi), i = 1,...,n, with multiplicity smin using the Kötter algorithm from [21].

4. Compute all factors of Q(x,z) of the form z − a/b using the rational interpolation algorithm from [29].

5. If step 4 is successful, output all obtained quotient polynomials, i.e., polynomials m of the form m = −f^(1)/f^(2); otherwise increase j by 1 and go to step 3.

The computation of

the minimal Gröbner basis in step 1 using Algorithm 3 takes O(n²) operations. Algorithm 6 in step 2 takes O(ns²) time. The Kötter algorithm used in step 3 involves O(MN²) = O(Mn²s⁴) operations [12], where N is the number of constraints as defined in (19) and M is the z-degree of the interpolating polynomial Q(x,z). The rational factorization in step 4 can be done in time O(n^(3/2) s^(7/2)) [29]. Thus the overall complexity of the proposed algorithm is O(MN²). However, because of step 2, our list decoding algorithm optimizes MU. Since, more precisely, the Kötter algorithm involves memory complexity O(MU) and time complexity O(NMU), our algorithm uses less memory as well as computation as compared to Wu's method. The advantage of the proposed algorithm in terms of z-degree M and number of unknown coefficients U is illustrated in Example 4.1.

Example 4.1 Consider the (127,24) RS code defined over GF(2^7) with d = 104. Consider correcting t = 64 errors when k1 = 15 and k2 = 9. For this instance, Wu's algorithm using (22) computes s = 2, which is also the minimum multiplicity. Now Wu's algorithm computes M = 5 and ρ = 82 using (23) and (24), respectively. With these values, Wu's algorithm requires solving a system of N = 381 homogeneous equations involving U = 408 unknowns. In contrast, in our algorithm we find that when smin = 2, the line L intersects the curve C at the points (3.3241, ∗) and (6.3426, ∗). Now for the feasible values M* = 4, 5, 6, we get ρ* = 88, 78, 72 and U* = 385, 384, 385, respectively. Finally we get the optimal values as Mopt = 4 and ρopt = 88 with Uopt = 385.

5 Conclusions

In this paper we have taken a parametric approach to the problem of minimal list decoding. The

proposed algorithms have error correcting radius L, where L is the minimum of the Hamming

distances between the received word and any codeword in C. There are several important features

of the approach. Firstly, the minimality of L ensures that all solutions correspond to valid codewords

and therefore we do not need to check for validity. The parameterization can also be used for general

list decoding, however, then a check on the validity of the corresponding codewords needs to be

carried out. Secondly, upon computation of a solution of the rational interpolation problem or,

equivalently, of an error locator polynomial, we do not need to determine the error magnitudes via

Forney’s formula. Instead, solutions to the rational interpolation problem directly lead to message

polynomials. Thirdly, we provide a geometric approach to optimize the integer parameters associated

with the problem of rational interpolation. Since the interpolation step is the most computationally

intensive task in list decoding, optimization of the integer parameters results in lower computational

as well as memory requirements. Finally, by using re-encoding as in sub-section 3.3, the approach

lends itself well to the type of distributed source coding (DSC) proposed in [3].

Acknowledgment

We thank Nikeeth Venkatraman Ramanathan for helping in implementing the computational examples.

References

[1] W. W. Adams and P. Loustaunau. An introduction to Gröbner Bases, volume 3 of Graduate

Stud. Math. American Mathematical Society, 1994.

[2] M. Alekhnovich. Linear diophantine equations over polynomials and soft decoding of Reed-Solomon codes. IEEE Trans. Inf. Th., 51(7):2257–2265, 2005.

[3] M. Ali and M. Kuijper. Source coding with side information using list decoding. In Proceedings

IEEE International Symposium in Information Theory, pages 91–95, Austin, Texas, 2010.

[4] E.R. Berlekamp. Algebraic Coding Theory. McGraw-Hill, New York, 1968.

[5] G.D. Forney. Convolutional codes I: Algebraic structure. IEEE Trans. Inf. Th., 16:720–738, 1970; correction, vol. IT-17, p. 360, 1971.

[6] G.D. Forney, Jr. Minimal bases of rational vector spaces, with applications to multivariable

linear systems. SIAM J. Control, 13:493–520, 1975.

[7] S. Gao. A new algorithm for decoding of Reed-Solomon codes. In V. K. Bhargava, H. V. Poor,

V. Tarokh, and S. Yoon, editors, Communications information and network security, pages

55–68. Kluwer, 2003.


[8] W. J. Gross, F. R. Kschischang, R. Kötter, and P. G. Gulak. Simulation results for algebraic

soft-decision decoding of Reed-Solomon codes. In Proceedings of the 21st Biennial Symposium

on Communications, pages 356–360, Kingston, Ontario, Canada, 2002.

[9] V. Guruswami. List decoding of error-correcting codes, volume 3282 of Lecture Notes in Computer Science. Springer, 2004.

[10] V. Guruswami and M. Sudan. Improved decoding of Reed-Solomon and algebraic-geometric

codes. IEEE Trans. Inf. Th, 45:1757–1768, 1999.

[11] R. Kötter. Fast generalized minimum distance decoding of algebraic geometry and Reed-Solomon

codes. IEEE Trans. Inf. Th., 42(3):721–737, 1996.

[12] R. Kötter, J. Ma, and A. Vardy. The re-encoding transformation in algebraic list-decoding of Reed-Solomon codes. IEEE Trans. Inf. Th. Submitted (April 2010). Available: http://arxiv.org/abs/1005.5734.

[13] R. Kötter and A. Vardy. A complexity reducing transformation in algebraic list decoding of

Reed-Solomon codes. In Proceedings ITW (Paris, France), 2003.

[14] M. Kuijper. A system-theoretic derivation of the Welch-Berlekamp algorithm. In Proceedings

2000 IEEE International Symposium in Information Theory, page 418, Sorrento, Italy, 2000.

[15] M. Kuijper. Algorithms for decoding and interpolation. In Brian Marcus and Joachim Rosenthal,

editors, Codes, Systems, and Graphical Models, volume 123 of The IMA Volumes in Mathematics

and its Applications, pages 265–282. Springer-Verlag, 2001.

[16] M. Kuijper and J.W. Polderman. Reed-Solomon list decoding from a system theoretic perspective. IEEE Trans. Inf. Th., IT-50:259–271, 2004.

[17] M. Kuijper and K. Schindelar. The predictable leading monomial property for polynomial

vectors over a ring. In Proceedings IEEE International Symposium in Information Theory,

pages 1133–1137, Austin, Texas, 2010.

[18] M. Kuijper and K. Schindelar. Minimal Gröbner bases and the predictable leading monomial

property. Linear Alg. Appl., 434:104–116, 2011.

[19] K. Lee and M.E. O'Sullivan. List decoding of Reed-Solomon codes from a Gröbner basis perspective. J. Symbolic Comput., 43:645–658, 2008.

[20] J. L. Massey. Shift-register synthesis and BCH decoding. IEEE Trans. Inf. Th., IT-15:122–127,

1969.

[21] R. J. McEliece. The Guruswami-Sudan decoding algorithm for Reed-Solomon codes. Technical

Report 42-153, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA,

May 2003.

[22] J. S. R. Nielsen. List-decoding of error correcting codes. Master's thesis, Department of Mathematics, Technical University of Denmark, Denmark, 2010.

[23] R. Nielsen and T. Høholdt. Decoding Reed-Solomon codes beyond half the minimum distance.

In J. Buchmann, T. Hoeholdt, T. Stichtenoth, and H. Tapia-Recillas, editors, Coding Theory,

Cryptography and Related Areas, pages 221–236, Berlin, 2000. Springer-Verlag.

[24] H. O'Keeffe and P. Fitzpatrick. Gröbner basis approach to list decoding of algebraic geometry

codes. Applicable Algebra in Engineering, Communication and Computing, 8(5):445–466, 2007.


[25] M. Sudan. Decoding of Reed-Solomon codes beyond the error correction bound. J. Compl,

13:180–193, 1997.

[26] Y. Sugiyama, M. Kasahara, S. Hirasawa, and T. Namekawa. A method for solving key equation

for decoding Goppa codes. Information and Control, 27:87–99, 1975.

[27] P. V. Trifonov. Interpolation in list decoding of Reed-Solomon codes. Problems of Information

Transmission, 43(3):190–198, 2007.

[28] L. Welch and E. R. Berlekamp. Error correction of algebraic block code. US Patent 4 633 470,

Dec 1986.

[29] Y. Wu. New list decoding algorithms for Reed-Solomon and BCH codes. IEEE Trans. Inf. Th.,

54:3611–3630, 2008.
